linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Randy Dunlap <rdunlap@infradead.org>
To: Kenneth Lee <nek.in.cn@gmail.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	"David S . Miller" <davem@davemloft.net>,
	Joerg Roedel <joro@8bytes.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Kenneth Lee <liguozhu@hisilicon.com>,
	Hao Fang <fanghao11@huawei.com>,
	Zhou Wang <wangzhou1@hisilicon.com>,
	Zaibo Xu <xuzaibo@huawei.com>,
	Philippe Ombredanne <pombredanne@nexb.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-crypto@vger.kernel.org, iommu@lists.linux-foundation.org,
	kvm@vger.kernel.org, linux-accelerators@lists.ozlabs.org,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Sanjay Kumar <sanjay.k.kumar@intel.com>
Cc: linuxarm@huawei.com
Subject: Re: [PATCH 1/7] vfio/sdmdev: Add documents for WarpDrive framework
Date: Thu, 6 Sep 2018 11:36:36 -0700	[thread overview]
Message-ID: <56f5f66d-f6d9-f4fa-40ca-e4a8bad170c1@infradead.org> (raw)
In-Reply-To: <20180903005204.26041-2-nek.in.cn@gmail.com>

Hi,

On 09/02/2018 05:51 PM, Kenneth Lee wrote:
> From: Kenneth Lee <liguozhu@hisilicon.com>
> 
> WarpDrive is a common user space accelerator framework.  Its main component
> in Kernel is called sdmdev, Share Domain Mediated Device. It exposes
> the hardware capabilities to the user space via vfio-mdev. So processes in
> user land can obtain a "queue" by open the device and direct access the
> hardware MMIO space or do DMA operation via VFIO interface.
> 
> WarpDrive is intended to be used with Jean Philippe Brucker's SVA
> patchset to support multi-process. But This is not a must.  Without the
> SVA patches, WarpDrive can still work for one process for every hardware
> device.
> 
> This patch add detail documents for the framework.
> 
> Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
> ---
>  Documentation/00-INDEX                |   2 +
>  Documentation/warpdrive/warpdrive.rst | 100 ++++
>  Documentation/warpdrive/wd-arch.svg   | 728 ++++++++++++++++++++++++++
>  3 files changed, 830 insertions(+)
>  create mode 100644 Documentation/warpdrive/warpdrive.rst
>  create mode 100644 Documentation/warpdrive/wd-arch.svg

> diff --git a/Documentation/warpdrive/warpdrive.rst b/Documentation/warpdrive/warpdrive.rst
> new file mode 100644
> index 000000000000..6d2a5d1e08c4
> --- /dev/null
> +++ b/Documentation/warpdrive/warpdrive.rst
> @@ -0,0 +1,100 @@
> +Introduction of WarpDrive
> +=========================
> +
> +*WarpDrive* is a general accelerator framework for user space. It intends to
> +provide interface for the user process to send request to hardware
> +accelerator without heavy user-kernel interaction cost.
> +
> +The *WarpDrive* user library is supposed to provide a pipe-based API, such as:

Do you say "is supposed to" because it doesn't do that (yet)?
Or you could just change that to say:

   The WarpDrive user library provides a pipe-based API, such as:


> +        ::
> +        int wd_request_queue(struct wd_queue *q);
> +        void wd_release_queue(struct wd_queue *q);
> +
> +        int wd_send(struct wd_queue *q, void *req);
> +        int wd_recv(struct wd_queue *q, void **req);
> +        int wd_recv_sync(struct wd_queue *q, void **req);
> +        int wd_flush(struct wd_queue *q);
> +
> +*wd_request_queue* creates the pipe connection, *queue*, between the
> +application and the hardware. The application sends request and pulls the
> +answer back by asynchronized wd_send/wd_recv, which directly interact with the
> +hardware (by MMIO or share memory) without syscall.
> +
> +*WarpDrive* maintains a unified application address space among all involved
> +accelerators.  With the following APIs: ::

Seems like an extra '.' there.  How about:

  accelerators with the following APIs: ::

> +
> +        int wd_mem_share(struct wd_queue *q, const void *addr,
> +                         size_t size, int flags);
> +        void wd_mem_unshare(struct wd_queue *q, const void *addr, size_t size);
> +
> +The referred process space shared by these APIs can be directly referred by the
> +hardware. The process can also dedicate its whole process space with flags,
> +*WD_SHARE_ALL* (not in this patch yet).
> +
> +The name *WarpDrive* is simply a cool and general name meaning the framework
> +makes the application faster. As it will be explained in this text later, the
> +facility in kernel is called *SDMDEV*, namely "Share Domain Mediated Device".
> +
> +
> +How does it work
> +================
> +
> +*WarpDrive* is built upon *VFIO-MDEV*. The queue is wrapped as *mdev* in VFIO.
> +So memory sharing can be done via standard VFIO standard DMA interface.
> +
> +The architecture is illustrated as follow figure:
> +
> +.. image:: wd-arch.svg
> +        :alt: WarpDrive Architecture
> +
> +Accelerator driver shares its capability via *SDMDEV* API: ::
> +
> +        vfio_sdmdev_register(struct vfio_sdmdev *sdmdev);
> +        vfio_sdmdev_unregister(struct vfio_sdmdev *sdmdev);
> +        vfio_sdmdev_wake_up(struct spimdev_queue *q);
> +
> +*vfio_sdmdev_register* is a helper function to register the hardware to the
> +*VFIO_MDEV* framework. The queue creation is done by *mdev* creation interface.
> +
> +*WarpDrive* User library mmap the mdev to access its mmio space and shared

s/mmio/MMIO/

> +memory. Request can be sent to, or receive from, hardware in this mmap-ed
> +space until the queue is full or empty.
> +
> +The user library can wait on the queue by ioctl(VFIO_SDMDEV_CMD_WAIT) the mdev
> +if the queue is full or empty. If the queue status is changed, the hardware
> +driver use *vfio_sdmdev_wake_up* to wake up the waiting process.
> +
> +
> +Multiple processes support
> +==========================
> +
> +In the latest mainline kernel (4.18) when this document is written,
> +multi-process is not supported in VFIO yet.
> +
> +Jean Philippe Brucker has a patchset to enable it[1]_. We have tested it
> +with our hardware (which is known as *D06*). It works well. *WarpDrive* rely
> +on them to support multiple processes. If it is not enabled, *WarpDrive* can
> +still work, but it support only one mdev for a process, which will share the
> +same io map table with kernel. (But it is not going to be a security problem,
> +since the user application cannot access the kernel address space)
> +
> +When multiprocess is support, mdev can be created based on how many
> +hardware resource (queue) is available. Because the VFIO framework accepts only
> +one open from one mdev iommu_group. Mdev become the smallest unit for process
> +to use queue. And the mdev will not be released if the user process exist. So
> +it will need a resource agent to manage the mdev allocation for the user
> +process. This is not in this document's range.
> +
> +
> +Legacy Mode Support
> +===================
> +For the hardware on which IOMMU is not support, WarpDrive can run on *NOIOMMU*
> +mode. That require some update to the mdev driver, which is not included in
> +this version yet.
> +
> +
> +References
> +==========
> +.. [1] https://patchwork.kernel.org/patch/10394851/
> +
> +.. vim: tw=78

thanks,
-- 
~Randy

  reply	other threads:[~2018-09-06 18:36 UTC|newest]

Thread overview: 58+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2018-09-03  0:51 [RFCv2 PATCH 0/7] A General Accelerator Framework, WarpDrive Kenneth Lee
2018-09-03  0:51 ` [PATCH 1/7] vfio/sdmdev: Add documents for WarpDrive framework Kenneth Lee
2018-09-06 18:36   ` Randy Dunlap [this message]
2018-09-07  2:21     ` Kenneth Lee
2018-09-03  0:51 ` [PATCH 2/7] iommu: Add share domain interface in iommu for sdmdev Kenneth Lee
2018-09-03  0:52 ` [PATCH 3/7] vfio: add sdmdev support Kenneth Lee
2018-09-03  2:11   ` Randy Dunlap
2018-09-06  8:08     ` Kenneth Lee
2018-09-03  2:55   ` Lu Baolu
2018-09-06  9:01     ` Kenneth Lee
2018-09-04 15:31   ` [RFC PATCH] vfio: vfio_sdmdev_groups[] can be static kbuild test robot
2018-09-04 15:32   ` [PATCH 3/7] vfio: add sdmdev support kbuild test robot
2018-09-04 15:32   ` kbuild test robot
2018-09-05  7:27   ` Dan Carpenter
2018-09-03  0:52 ` [PATCH 4/7] crypto: add hisilicon Queue Manager driver Kenneth Lee
2018-09-03  2:15   ` Randy Dunlap
2018-09-06  9:08     ` Kenneth Lee
2018-09-03  0:52 ` [PATCH 5/7] crypto: Add Hisilicon Zip driver Kenneth Lee
2018-09-03  0:52 ` [PATCH 6/7] crypto: add sdmdev support to Hisilicon QM Kenneth Lee
2018-09-03  2:19   ` Randy Dunlap
2018-09-06  9:09     ` Kenneth Lee
2018-09-03  0:52 ` [PATCH 7/7] vfio/sdmdev: add user sample Kenneth Lee
2018-09-03  2:25   ` Randy Dunlap
2018-09-06  9:10     ` Kenneth Lee
2018-09-03  2:32 ` [RFCv2 PATCH 0/7] A General Accelerator Framework, WarpDrive Lu Baolu
2018-09-06  9:11   ` Kenneth Lee
2018-09-04 15:00 ` Jerome Glisse
2018-09-04 16:15   ` Alex Williamson
2018-09-06  9:45     ` Kenneth Lee
2018-09-06 13:31       ` Jerome Glisse
2018-09-07  4:01         ` Kenneth Lee
2018-09-07 16:53           ` Jerome Glisse
2018-09-07 17:55             ` Jean-Philippe Brucker
2018-09-07 18:04               ` Jerome Glisse
2018-09-10  3:28             ` Kenneth Lee
2018-09-10 14:54               ` Jerome Glisse
2018-09-11  2:42                 ` Kenneth Lee
2018-09-11  3:33                   ` Jerome Glisse
2018-09-11  6:40                     ` Kenneth Lee
2018-09-11 13:40                       ` Jerome Glisse
2018-09-13  8:32                         ` Kenneth Lee
2018-09-13 14:51                           ` Jerome Glisse
2018-09-14  3:12                             ` Kenneth Lee
2018-09-14 14:05                               ` Jerome Glisse
2018-09-14  6:50                             ` Tian, Kevin
2018-09-14 13:05                               ` Kenneth Lee
2018-09-14 14:13                               ` Jerome Glisse
2018-09-17  1:42 ` Jerome Glisse
2018-09-17  8:39   ` Kenneth Lee
2018-09-17 12:37     ` Jerome Glisse
2018-09-18  6:00       ` Kenneth Lee
2018-09-18 13:03         ` Jerome Glisse
2018-09-20  5:55           ` Kenneth Lee
2018-09-20 14:23             ` Jerome Glisse
2018-09-21 10:05               ` Kenneth Lee
2018-09-21 10:03   ` Kenneth Lee
2018-09-21 14:52     ` Jerome Glisse
2018-09-25  5:55       ` Kenneth Lee

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=56f5f66d-f6d9-f4fa-40ca-e4a8bad170c1@infradead.org \
    --to=rdunlap@infradead.org \
    --cc=alex.williamson@redhat.com \
    --cc=baolu.lu@linux.intel.com \
    --cc=corbet@lwn.net \
    --cc=davem@davemloft.net \
    --cc=fanghao11@huawei.com \
    --cc=gregkh@linuxfoundation.org \
    --cc=herbert@gondor.apana.org.au \
    --cc=iommu@lists.linux-foundation.org \
    --cc=joro@8bytes.org \
    --cc=kvm@vger.kernel.org \
    --cc=liguozhu@hisilicon.com \
    --cc=linux-accelerators@lists.ozlabs.org \
    --cc=linux-crypto@vger.kernel.org \
    --cc=linux-doc@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linuxarm@huawei.com \
    --cc=nek.in.cn@gmail.com \
    --cc=pombredanne@nexb.com \
    --cc=sanjay.k.kumar@intel.com \
    --cc=tglx@linutronix.de \
    --cc=wangzhou1@hisilicon.com \
    --cc=xuzaibo@huawei.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).