ntb.lists.linux.dev archive mirror
From: Frank Li <Frank.Li@nxp.com>
To: fancer.lancer@gmail.com, helgaas@kernel.org,
	sergey.semin@baikalelectronics.ru, kw@linux.com,
	linux-pci@vger.kernel.org, manivannan.sadhasivam@linaro.org,
	ntb@lists.linux.dev, jdmason@kudzu.us, kishon@ti.com,
	haotian.wang@sifive.com, lznuaa@gmail.com, imx@lists.linux.dev
Subject: [RFC] PCI EP/RC network transfer by using eDMA
Date: Wed, 28 Sep 2022 16:38:56 -0500	[thread overview]
Message-ID: <20220928213856.54211-1-Frank.Li@nxp.com> (raw)


ALL:

       Recently, several important PCI EP function patches were merged,
notably DWC eDMA support.
       The PCIe eDMA has a nice feature: it can read/write all of the PCI
host's memory, regardless of the size of the EP side's PCI memory-mapped
windows.
       pci-epf-vntb.c has also been merged into mainline, and part of the
vNTB MSI patch series has been merged as well:
		https://lore.kernel.org/imx/86mtaj7hdw.wl-maz@kernel.org/T/#m35546867af07735c1070f596d653a2666f453c52

       Although MSI improves transfer latency, the transfer speed is
still quite slow because DMA is not supported yet.

       I plan to continue improving the transfer speed, but I found some
fundamental limitations in the original framework that prevent getting 100%
of the eDMA's benefits.
       After researching some old threads:
		https://lore.kernel.org/linux-pci/20200702082143.25259-1-kishon@ti.com/
		https://lore.kernel.org/linux-pci/9f8e596f-b601-7f97-a98a-111763f966d1@ti.com/T/
		Some RDMA documentation and https://github.com/ntrdma/ntrdma-ext

       I think a solution based on Haotian Wang's approach will be the best one.

  ┌─────────────────────────────────┐   ┌──────────────┐
  │                                 │   │              │
  │                                 │   │              │
  │   VirtQueue             RX      │   │  VirtQueue   │
  │     TX                 ┌──┐     │   │    TX        │
  │  ┌─────────┐           │  │     │   │ ┌─────────┐  │
  │  │ SRC LEN ├─────┐  ┌──┤  │◄────┼───┼─┤ SRC LEN │  │
  │  ├─────────┤     │  │  │  │     │   │ ├─────────┤  │
  │  │         │     │  │  │  │     │   │ │         │  │
  │  ├─────────┤     │  │  │  │     │   │ ├─────────┤  │
  │  │         │     │  │  │  │     │   │ │         │  │
  │  └─────────┘     │  │  └──┘     │   │ └─────────┘  │
  │                  │  │           │   │              │
  │     RX       ┌───┼──┘   TX      │   │    RX        │
  │  ┌─────────┐ │   │     ┌──┐     │   │ ┌─────────┐  │
  │  │         │◄┘   └────►│  ├─────┼───┼─┤         │  │
  │  ├─────────┤           │  │     │   │ ├─────────┤  │
  │  │         │           │  │     │   │ │         │  │
  │  ├─────────┤           │  │     │   │ ├─────────┤  │
  │  │         │           │  │     │   │ │         │  │
  │  └─────────┘           │  │     │   │ └─────────┘  │
  │   virtio_net           └──┘     │   │ virtio_net   │
  │  Virtual PCI BUS   EDMA Queue   │   │              │
  ├─────────────────────────────────┤   │              │
  │  PCI EP Controller with eDMA    │   │  PCI Host    │
  └─────────────────────────────────┘   └──────────────┘


       The basic idea is:
	1.	Both the EP and the host probe the virtio_net driver.
	2.	There are two queues: one on the EP side (EQ), the other on the host side.
	3.	The EP-side EPF driver maps the host side's queue into the EP's address space; call it HQ.
	4.	One working thread:
	a.	picks one TX descriptor from EQ and one RX descriptor from HQ, combines them into an eDMA request, and puts it into the DMA TX queue;
	b.	picks one RX descriptor from EQ and one TX descriptor from HQ, combines them into an eDMA request, and puts it into the DMA RX queue.
	5.	The eDMA completion IRQ marks the related items in EQ and HQ as finished.

The whole transfer is zero-copy and uses the DMA queues.

      RDMA has a similar idea but requires more coding effort.
      I think Kishon Vijay Abraham I prefers using vhost, but I don't know how to build a queue on the host side.
      The NTB transfer only does eDMA transfers in one direction (DMA writes), because a read would actually be a local-memory-to-local-memory copy.

      Any comments on the overall solution?

best regards
Frank Li


Thread overview: 3+ messages
2022-09-28 21:38 Frank Li [this message]
2022-10-11 11:37 ` [RFC] PCI EP/RC network transfer by using eDMA Kishon Vijay Abraham I
2022-10-11 15:09   ` [EXT] " Frank Li
