From: Jerome Glisse <jglisse@redhat.com>
To: Kenneth Lee <nek.in.cn@gmail.com>
Cc: Kenneth Lee <liguozhu@hisilicon.com>,
	"Tian, Kevin" <kevin.tian@intel.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Zaibo Xu <xuzaibo@huawei.com>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"Kumar, Sanjay K" <sanjay.k.kumar@intel.com>,
	Hao Fang <fanghao11@huawei.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linuxarm@huawei.com" <linuxarm@huawei.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"linux-crypto@vger.kernel.org" <linux-crypto@vger.kernel.org>,
	Philippe Ombredanne <pombredanne@nexb.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	"David S . Miller" <davem@davemloft.net>,
	"linux-accelerators@lists.ozlabs.org" 
	<linux-accelerators@lists.ozlabs.org>
Subject: Re: [RFC PATCH 0/7] A General Accelerator Framework, WarpDrive
Date: Wed, 8 Aug 2018 11:18:35 -0400
Message-ID: <20180808151835.GA3429@redhat.com>
In-Reply-To: <11bace0e-dc14-5d2c-f65c-25b852f4e9ca@gmail.com>

On Wed, Aug 08, 2018 at 09:08:42AM +0800, Kenneth Lee wrote:
> 
> 
> On Monday, 2018-08-06 at 11:32 PM, Jerome Glisse wrote:
> > On Mon, Aug 06, 2018 at 11:12:52AM +0800, Kenneth Lee wrote:
> > > On Fri, Aug 03, 2018 at 10:39:44AM -0400, Jerome Glisse wrote:
> > > > On Fri, Aug 03, 2018 at 11:47:21AM +0800, Kenneth Lee wrote:
> > > > > On Thu, Aug 02, 2018 at 10:22:43AM -0400, Jerome Glisse wrote:
> > > > > > On Thu, Aug 02, 2018 at 12:05:57PM +0800, Kenneth Lee wrote:
> > > > > > > On Thu, Aug 02, 2018 at 02:33:12AM +0000, Tian, Kevin wrote:
> > > > > > > > > On Wed, Aug 01, 2018 at 06:22:14PM +0800, Kenneth Lee wrote:

[...]

> > > > > > > > > My more general question is: do we want to grow VFIO to become
> > > > > > > > > a more generic device driver API? This patchset adds a command
> > > > > > > > > queue concept to it (I don't think it exists today but I have
> > > > > > > > > not followed VFIO closely).
> > > > > > > > > 
> > > > > > > The thing is, VFIO is the only place that supports DMA from user land. If we
> > > > > > > don't put it here, we have to create another similar facility to do the same.
> > > > > > No it is not: network devices, GPUs, block devices, ... they all
> > > > > > support DMA. The point I am trying to make here is that even in
> > > > > Sorry, wait a minute, are we talking about the same thing? I meant "DMA from
> > > > > user land", not "DMA from the kernel driver". To do that we have to manipulate
> > > > > the IOMMU (unit). I think it can only be done via the default_domain or a vfio
> > > > > domain, or the user space has to access the IOMMU directly.
> > > > GPUs do DMA in the sense that you pass the kernel a valid
> > > > virtual address (the kernel driver does all the proper checks)
> > > > and then you can use the GPU to copy from or to that range of
> > > > virtual addresses. That is exactly how you want to use this
> > > > compression engine. It does not rely on SVM, but going forward
> > > > SVM would still be the preferred option.
> > > > 
> > > No, SVM is not the reason why we rely on Jean's SVM (SVA) series. We rely on
> > > Jean's series because of its multi-process (PASID or substream ID) support.
> > > 
> > > But of course, WarpDrive can still benefit from the SVM feature.
> > We are getting sidetracked here. PASID/substream ID does not require VFIO.
> > 
> Yes, PASID itself does not require VFIO. But what if we need to:
> 
> 1. Support DMA from user space.
> 2. Use the hardware's standard IOMMU/SMMU for IO address translation.
> 3. Share the IOMMU facility between kernel and user drivers.
> 4. Support PASID with the current IOMMU facility.

I do not see how any of this means it has to be in VFIO.
Other devices already do just that. GPU drivers, for instance,
share a DMA engine (which copies data around) between kernel
and user space. Sometimes the kernel uses it to move things
around; evicting some memory to make room for a new process
is the common example. The same DMA engine is often used by
userspace itself during rendering or compute (the program
moving things on its own). So there are already kernel
drivers that do all 4 of the above and are not in VFIO.
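
To make that concrete, here is a rough sketch (not lifted from any
particular driver; the helper name map_user_buf_for_dma is made up and
error unwinding is mostly elided) of how an ordinary, non-VFIO kernel
driver can pin a user buffer and map it into its device's DMA address
space through the regular DMA/IOMMU API. Note that the exact
get_user_pages_fast() signature varies between kernel versions:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

/*
 * Sketch only: pin a user buffer and make it DMA-able for a device.
 * The IOMMU (when present) is programmed by dma_map_sg() through the
 * DMA API; VFIO is never involved.
 */
static int map_user_buf_for_dma(struct device *dev, unsigned long uaddr,
                                size_t len, struct sg_table *sgt)
{
        unsigned long off = offset_in_page(uaddr);
        unsigned int npages = DIV_ROUND_UP(len + off, PAGE_SIZE);
        struct page **pages;
        int pinned, ret;

        pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /* Pin the pages backing [uaddr, uaddr + len) in user memory. */
        pinned = get_user_pages_fast(uaddr, npages, 1 /* write */, pages);
        if (pinned != npages) {
                ret = pinned < 0 ? pinned : -EFAULT;
                goto out;               /* unpinning of partial pins elided */
        }

        /* Build a scatterlist and map it through the DMA/IOMMU layer. */
        ret = sg_alloc_table_from_pages(sgt, pages, npages, off, len,
                                        GFP_KERNEL);
        if (ret)
                goto out;

        if (!dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL))
                ret = -EIO;             /* sg table teardown elided */
out:
        kvfree(pages);
        return ret;     /* on success the device can DMA via sgt's entries */
}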


> > > > > > your mechanism the userspace must have a specific userspace
> > > > > > driver for each piece of hardware, and thus there is virtually
> > > > > > no difference between having this userspace driver open a device
> > > > > > file in vfio or somewhere else in the device filesystem. It is
> > > > > > just a different path.
> > > > > > 
> > > > > The basic problem WarpDrive wants to solve is avoiding syscalls. This is
> > > > > important for accelerators. We have some data here:
> > > > > https://www.slideshare.net/linaroorg/progress-and-demonstration-of-wrapdrive-a-accelerator-framework-sfo17317
> > > > > 
> > > > > (see page 3)
> > > > > 
> > > > > The performance differs between kernel and user drivers.
> > > > Yes, and the example I point to is exactly that. You have a one-time setup
> > > > cost (creating the command buffer, binding the PASID to the command buffer,
> > > > and a couple of other setup steps). Then userspace no longer has to do any
> > > > ioctl to schedule work on the GPU. It is all done from userspace and it
> > > > uses a doorbell to notify the hardware when it should go look at the
> > > > command buffer for new things to execute.
> > > > 
> > > > My point stands on that. You have existing drivers already doing so
> > > > with no new framework, and in your scheme you need a userspace driver.
> > > > So I do not see the value added; using one path or the other in the
> > > > userspace driver is literally one line to change.
> > > > 
> > > Sorry, I got confused here. I partially agree that the user driver is
> > > redundant with the kernel driver. (But for WarpDrive, the kernel driver is a
> > > full driver including all preparation and setup for the hardware; the user
> > > driver simply sends requests and receives answers.) Yes, it is just a choice
> > > of path. But the user path is faster if the request comes from user space,
> > > and to do that we need user-land DMA support. Then why is it not worthwhile
> > > to get VFIO involved?
> > Some drivers in the kernel already do exactly what you said. The user
> > space emits commands without ever going into the kernel, by directly
> > scheduling commands and ringing a doorbell. They do not need VFIO either,
> > and they can map userspace addresses into the DMA address space of the
> > device; again, they do not need VFIO for that.
> Could you please point out directly which driver you are referring to here?
> Thank you.

drivers/gpu/drm/amd/

Sub-directory of interest is amdkfd

Because it is a big driver, here is a high-level overview of how it works
(this is a simplification; a rough userspace sketch of the submission flow
follows the list):
  - A process can allocate GPU buffers (through an ioctl) and map them into
    its address space (through mmap of the device file at a buffer-object
    specific offset).
  - A process can map any valid range of its virtual address space into the
    device address space (an IOMMU mapping). This must be regular memory,
    i.e. not an mmap of a device file or any special file (this is the
    non-PASID path).
  - A process can create a command queue and bind its address space to it
    (i.e. a PASID); this is done through an ioctl.
  - A process can schedule commands onto queues it created from userspace
    without any ioctl. For that it just writes commands into a ring buffer
    that it mapped during command queue creation, and it rings a doorbell
    when commands are ready to be consumed by the hardware.
  - Commands can reference (access) all 3 types of objects above, i.e. full
    GPU buffers, process regular memory mapped as an object (non-PASID) and
    PASID memory, all at the same time; you can mix all of the above in the
    same command queue.
  - The kernel can evict and unbind any process's command queues. An unbound
    command queue is still valid from the process's point of view, but
    commands the process schedules on it will not be executed until the
    kernel re-binds the queue.
  - The kernel can schedule commands itself onto its dedicated command
    queues (the kernel driver creates its own command queues).
  - The kernel can control priorities between all the queues, i.e. it can
    decide which queue the hardware should execute next.
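
Here is the promised sketch of that userspace submission flow in C. This
is not the actual amdkfd UAPI: the device node, ioctl number and structure
below (/dev/accel0, ACCEL_IOC_CREATE_QUEUE, struct accel_queue_info) are
hypothetical placeholders and most error handling is elided. The point is
the pattern: one ioctl for setup, then plain memory writes (ring buffer
plus doorbell) to submit work with no syscall on the fast path.

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/ioctl.h>

struct accel_queue_info {               /* hypothetical, for illustration */
        uint64_t ring_mmap_offset;      /* where to mmap the ring buffer */
        uint64_t doorbell_mmap_offset;  /* where to mmap the doorbell page */
        uint32_t ring_size;             /* ring size in bytes */
        uint32_t queue_id;
};
#define ACCEL_IOC_CREATE_QUEUE _IOWR('A', 1, struct accel_queue_info)

int main(void)
{
        struct accel_queue_info q = { .ring_size = 4096 };
        int fd = open("/dev/accel0", O_RDWR);          /* hypothetical node */

        /* One-time setup: create a queue and bind this process (PASID). */
        if (fd < 0 || ioctl(fd, ACCEL_IOC_CREATE_QUEUE, &q) < 0)
                return 1;

        uint64_t *ring = mmap(NULL, q.ring_size, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, q.ring_mmap_offset);
        volatile uint32_t *doorbell = mmap(NULL, 4096,
                              PROT_READ | PROT_WRITE, MAP_SHARED, fd,
                              q.doorbell_mmap_offset);
        if (ring == MAP_FAILED || doorbell == MAP_FAILED)
                return 1;

        /* Fast path, no syscall: write a command packet, ring the bell. */
        uint64_t cmd[4] = { 0 };        /* e.g. opcode, src va, dst va, len */
        memcpy(ring, cmd, sizeof(cmd));
        __sync_synchronize();           /* packet must be visible before bell */
        *doorbell = 1;                  /* hardware fetches new commands */

        close(fd);
        return 0;
}

On the real driver the packet format, queue sizes and doorbell semantics
are of course device specific.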

I believe all of the above are the aspects that matter to you. The main
reason I don't like creating a new driver infrastructure is that a lot
of existing drivers will want to use some of the new features that are
coming (memory topology, where to place process memory, pipeline devices,
...), and those existing drivers are big (GPU drivers are the biggest of
all the kernel drivers).

So rewriting those existing drivers on top of VFIO, or of any new
infrastructure, so that they can leverage new features is a no-go from my
point of view.

I would rather see a set of helpers so that each feature can be used
either by new drivers or by existing drivers: for instance, a new way to
expose memory topology, or a new way to expose how you can pipe devices
from one to another, ...


Hence I do not see any value in a whole new infrastructure that drivers
must be part of to leverage new features.

Cheers,
Jérôme


Thread overview: 61+ messages
2018-08-01 10:22 [RFC PATCH 0/7] A General Accelerator Framework, WarpDrive Kenneth Lee
2018-08-01 10:22 ` [RFC PATCH 1/7] vfio/spimdev: Add documents for WarpDrive framework Kenneth Lee
2018-08-02  3:14   ` Tian, Kevin
2018-08-02  4:22     ` Kenneth Lee
2018-08-02  4:41       ` Tian, Kevin
2018-08-06 12:27   ` Pavel Machek
2018-08-08  1:43     ` Kenneth Lee
2018-08-01 10:22 ` [RFC PATCH 2/7] iommu: Add share domain interface in iommu for spimdev Kenneth Lee
2018-08-02  3:17   ` Tian, Kevin
2018-08-02  4:15     ` Kenneth Lee
2018-08-02  4:39       ` Tian, Kevin
2018-08-08  9:13   ` Joerg Roedel
2018-08-09  1:09     ` Kenneth Lee
2018-08-01 10:22 ` [RFC PATCH 3/7] vfio: add spimdev support Kenneth Lee
2018-08-01 16:23   ` Randy Dunlap
2018-08-02  3:07     ` Kenneth Lee
2018-08-02  3:21   ` Tian, Kevin
2018-08-02  3:47     ` Kenneth Lee
2018-08-02  4:24       ` Tian, Kevin
2018-08-02  7:34         ` Kenneth Lee
     [not found]           ` <20180802103528.0b863030.cohuck@redhat.com>
     [not found]             ` <20180802124327.403b10ab@t450s.home>
2018-08-06  1:40               ` Kenneth Lee
2018-08-06 15:49                 ` Alex Williamson
2018-08-06 16:34                   ` Raj, Ashok
2018-08-06 17:05                     ` Alex Williamson
2018-08-08  1:32                       ` Kenneth Lee
2018-08-01 10:22 ` [RFC PATCH 4/7] crypto: add hisilicon Queue Manager driver Kenneth Lee
2018-08-01 10:22 ` [RFC PATCH 5/7] crypto: Add Hisilicon Zip driver Kenneth Lee
2018-08-01 10:22 ` [RFC PATCH 6/7] crypto: add spimdev support to Hisilicon QM Kenneth Lee
2018-08-01 10:22 ` [RFC PATCH 7/7] vfio/spimdev: add user sample for spimdev Kenneth Lee
2018-08-01 16:56 ` [RFC PATCH 0/7] A General Accelerator Framework, WarpDrive Jerome Glisse
2018-08-02  2:33   ` Tian, Kevin
2018-08-02  4:05     ` Kenneth Lee
2018-08-02 14:22       ` Jerome Glisse
2018-08-03  3:47         ` Kenneth Lee
2018-08-03 14:39           ` Jerome Glisse
2018-08-06  3:12             ` Kenneth Lee
2018-08-06 15:32               ` Jerome Glisse
2018-08-08  1:08                 ` Kenneth Lee
2018-08-08 15:18                   ` Jerome Glisse [this message]
2018-08-09  8:03                     ` Kenneth Lee
2018-08-09  8:31                       ` Tian, Kevin
2018-08-10  1:37                         ` Kenneth Lee
2018-08-09 14:46                       ` Jerome Glisse
2018-08-10  3:39                         ` Kenneth Lee
2018-08-10 13:12                           ` Jean-Philippe Brucker
2018-08-11 15:26                             ` Kenneth Lee
2018-08-13  9:29                               ` Kenneth Lee
2018-08-13 19:23                                 ` Jerome Glisse
2018-08-14  3:46                                   ` Kenneth Lee
2018-08-10 14:32                           ` Jerome Glisse
2018-08-11 14:44                             ` Kenneth Lee
2018-08-02 10:10     ` Alan Cox
2018-08-02 12:24       ` Xu Zaibo
2018-08-02 14:46       ` Jerome Glisse
2018-08-03 14:20         ` Alan Cox
2018-08-03 14:55           ` Jerome Glisse
2018-08-06  1:26           ` Kenneth Lee
2018-08-02  2:59 ` Tian, Kevin
2018-08-02  3:40   ` Kenneth Lee
2018-08-02  4:36     ` Tian, Kevin
2018-08-02  5:35       ` Kenneth Lee
