From: Jerome Glisse <jglisse@redhat.com>
To: Kenneth Lee <liguozhu@hisilicon.com>
Cc: Kenneth Lee <nek.in.cn@gmail.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	kvm@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Joerg Roedel <joro@8bytes.org>,
	linux-doc@vger.kernel.org,
	Sanjay Kumar <sanjay.k.kumar@intel.com>,
	Hao Fang <fanghao11@huawei.com>,
	linux-kernel@vger.kernel.org, linuxarm@huawei.com,
	iommu@lists.linux-foundation.org,
	"David S . Miller" <davem@davemloft.net>,
	linux-crypto@vger.kernel.org, Zhou Wang <wangzhou1@hisilicon.com>,
	Philippe Ombredanne <pombredanne@nexb.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Zaibo Xu <xuzaibo@huawei.com>,
	linux-accelerators@lists.ozlabs.org,
	Lu Baolu <baolu.lu@linux.intel.com>
Subject: Re: [RFCv2 PATCH 0/7] A General Accelerator Framework, WarpDrive
Date: Mon, 10 Sep 2018 10:54:23 -0400	[thread overview]
Message-ID: <20180910145423.GA3488@redhat.com> (raw)
In-Reply-To: <20180910032809.GJ230707@Turing-Arch-b>

On Mon, Sep 10, 2018 at 11:28:09AM +0800, Kenneth Lee wrote:
> On Fri, Sep 07, 2018 at 12:53:06PM -0400, Jerome Glisse wrote:
> > On Fri, Sep 07, 2018 at 12:01:38PM +0800, Kenneth Lee wrote:
> > > On Thu, Sep 06, 2018 at 09:31:33AM -0400, Jerome Glisse wrote:
> > > > On Thu, Sep 06, 2018 at 05:45:32PM +0800, Kenneth Lee wrote:
> > > > > On Tue, Sep 04, 2018 at 10:15:09AM -0600, Alex Williamson wrote:
> > > > > > On Tue, 4 Sep 2018 11:00:19 -0400 Jerome Glisse <jglisse@redhat.com> wrote:
> > > > > > > On Mon, Sep 03, 2018 at 08:51:57AM +0800, Kenneth Lee wrote:

[...]

> > > I took a look at i915_gem_execbuffer_ioctl(). It seems it "copy_from_user"s
> > > the user memory to the kernel. That is not what we need. What we are trying
> > > to get is: the user application does something on its data, pushes it away
> > > to the accelerator, and says: "I'm tired, it is your turn to do the job...".
> > > Then the accelerator has the memory, referring to any portion of it with the
> > > same VAs as the application, even when the VAs are stored inside the memory
> > > itself.
> > 
> > You were not looking at the right place, see drivers/gpu/drm/i915/i915_gem_userptr.c
> > It does GUP and creates a GEM object; AFAICR you can wrap that GEM object into
> > a dma-buf object.
> > 
> 
> Thank you for directing me to this implementation. It is interesting:).
> 
> But it does not yet solve my problem. If I understand it right, the userptr in
> i915 does the following:
> 
> 1. The user process passes a user pointer and size to the kernel via ioctl.
> 2. The kernel wraps it as a dma-buf and keeps the process's mm for further
>    reference.
> 3. The user pages are allocated, GUPed, or DMA mapped to the device, so the
>    data can be shared between user space and the hardware.
> 
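For illustration, step 1 above would look roughly like the sketch below (a
minimal, hedged example built on the i915 userptr ioctl; error handling is
omitted, and the resulting GEM handle would then be exported as a dma-buf
through PRIME for step 2):

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Wrap an existing, page-aligned user buffer as a GEM object. */
    static uint32_t wrap_user_buffer(int drm_fd, void *buf, uint64_t size)
    {
        struct drm_i915_gem_userptr arg;

        memset(&arg, 0, sizeof(arg));
        arg.user_ptr  = (uintptr_t)buf;   /* must be page aligned */
        arg.user_size = size;             /* multiple of the page size */

        if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_USERPTR, &arg) < 0)
            return 0;
        return arg.handle;   /* GEM handle backed by the user's pages */
    }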
> But my scenario is: 
> 
> 1. The user process has some data in user space, pointed to by a pointer, say
>    ptr1. And within that memory there may be some other pointers; let's say
>    one of them is ptr2.
> 2. Now I need to assign ptr1 *directly* to the hardware MMIO space. And the
>    hardware must refer to ptr1 and ptr2 *directly* for data.
> 
> Userptr lets the hardware and the process share the same memory space. But I
> need them to share the same *address space*. So an IOMMU is a MUST for
> WarpDrive; NOIOMMU mode, as Jean said, is just for verifying that some of the
> procedure works.

So to be 100% clear, should we _ignore_ the non-SVA/SVM case?
If so then wait for the necessary SVA/SVM bits to land and do warp
drive without the non-SVA/SVM path.

If you still want the non-SVA/SVM path, what you want to do only
works if both ptr1 and ptr2 are in a range that is DMA mapped to
the device (moreover you need the DMA address to match the process
address, which is not an easy feat).
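To make that constraint concrete, here is a minimal sketch of such an
identity mapping using VFIO's type1 IOMMU API (it assumes the container
and group are already set up, and that the chosen IOVA range is neither
reserved nor already in use):

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Map a user buffer so that its IOVA equals its process VA. */
    static int map_identity(int container_fd, void *buf, uint64_t size)
    {
        struct vfio_iommu_type1_dma_map map;

        memset(&map, 0, sizeof(map));
        map.argsz = sizeof(map);
        map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
        map.vaddr = (uintptr_t)buf;
        map.iova  = (uintptr_t)buf;   /* DMA address == process address */
        map.size  = size;

        return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
    }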

Now even if you only want SVA/SVM, i do not see the point of doing
this inside VFIO. The AMD GPU driver does not, and there would be
no benefit for it to be there. Well, an AMD VFIO mdev device
driver for QEMU guests might be useful, but they have SVIO IIRC.

For SVA/SVM your usage model is:

Setup:
    - user space creates a warp drive context for the process
    - user space creates a device specific context for the process
    - user space creates a user space command queue for the device
    - user space binds the command queue

    At this point the kernel driver has bound the process address
    space to the device through a command queue, and userspace can
    start submitting work.

Usage:
    - user space schedules work and calls the appropriate flush/update
      ioctl from time to time. This might be optional depending on the
      hardware, but it is probably a good idea to enforce it so that
      the kernel can unbind the command queue to bind another
      process's command queue.
    ...

Cleanup:
    - user space unbinds the command queue
    - user space destroys the device specific context
    - user space destroys the warp drive context
    All of the above can be implicit when closing the device file.
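In code, the whole sequence could look like the sketch below. All the
WD_* ioctl numbers and the wd_queue structure are hypothetical
placeholders, there only to make the model concrete:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/ioctl.h>

    struct wd_queue { uint64_t size, mmap_offset; };   /* hypothetical */

    /* hypothetical ioctl numbers, for illustration only */
    #define WD_CREATE_DEV_CTX  _IO('W', 0)
    #define WD_CREATE_QUEUE    _IOWR('W', 1, struct wd_queue)
    #define WD_BIND_QUEUE      _IOW('W', 2, struct wd_queue)
    #define WD_FLUSH           _IOW('W', 3, struct wd_queue)
    #define WD_UNBIND_QUEUE    _IOW('W', 4, struct wd_queue)
    #define WD_DESTROY_DEV_CTX _IO('W', 5)

    int main(void)
    {
        struct wd_queue q;
        int fd = open("/dev/warpdrive0", O_RDWR);  /* warp drive context */

        ioctl(fd, WD_CREATE_DEV_CTX);       /* device specific context */
        ioctl(fd, WD_CREATE_QUEUE, &q);     /* user space command queue */
        ioctl(fd, WD_BIND_QUEUE, &q);       /* process address space is
                                               now visible to the device */

        /* map the queue/doorbell, then submit work with plain pointers */
        void *qbase = mmap(NULL, q.size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, q.mmap_offset);
        /* ... write descriptors holding process VAs into qbase ... */
        ioctl(fd, WD_FLUSH, &q);            /* optional flush/update */

        munmap(qbase, q.size);
        ioctl(fd, WD_UNBIND_QUEUE, &q);
        ioctl(fd, WD_DESTROY_DEV_CTX);
        close(fd);                    /* would imply all of the above */
        return 0;
    }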

So again, in the above model i do not see anywhere that VFIO
would be of benefit.


> > > And I don't understand why I should avoid using VFIO. As Alex said, VFIO is
> > > the user driver framework, and I need exactly a user driver interface. Why
> > > should I reinvent the wheel? It has most of the stuff I need:
> > > 
> > > 1. Connecting multiple devices to the same application space
> > > 2. Pinning and DMA from the application space to the whole set of devices
> > > 3. Managing hardware resources by device
> > > 
> > > We just need the last step: make sure multiple applications and the kernel
> > > can share the same IOMMU. Then why shouldn't we use VFIO?
> > 
> > Because tons of other drivers already do all of the above outside VFIO. Many
> > drivers have a sizeable userspace side to them (anything with an ioctl does),
> > so they can be construed as userspace drivers too.
> > 
> 
> Ignoring whether there really are *tons* of drivers doing that;), even then I
> would have to do the same as i915 to solve the address space problem. And if I
> don't need to with VFIO, why should I spend so much effort to do it again?

Because you do not need any code from VFIO, nor do you need to reinvent
things. If the non-SVA/SVM case matters to you then use dma-buf. If not
then i do not see anything in VFIO that you need.


> > So there is no reason to do that under VFIO. Especially as, in your example,
> > it is not a real user space device driver; the userspace portion only knows
> > about writing commands into a command buffer AFAICT.
> > 
> > VFIO is for real userspace drivers where interrupts, configuration, ... ie
> > all of the driver is handled in userspace. This means that the userspace
> > has to be trusted, as it could program the device to DMA anywhere (if the
> > IOMMU is disabled at boot, which is still the default configuration in
> > the kernel).
> > 
> 
> But as Alex explained, VFIO is not only used by VMs, so it does not need to
> have all the stuff of a host-system driver. And I do need to share user space
> as a DMA buffer with the hardware. And I can get that with just a little
> update; then it serves me perfectly. I don't understand why I should choose
> the long route.

Again, this is not the long route; i do not see anything in VFIO that
benefits you in the SVA/SVM case. A basic character device driver can
do that.
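For instance, here is a minimal sketch of such a character device. It
leans on the SVA bind API that was still being worked on at the time,
so the exact iommu_sva_bind_device() signature used here is an
assumption, and accel_dev stands in for the accelerator's struct device:

    #include <linux/err.h>
    #include <linux/fs.h>
    #include <linux/iommu.h>
    #include <linux/miscdevice.h>
    #include <linux/module.h>
    #include <linux/sched.h>

    static struct device *accel_dev;   /* set at probe time */

    static int accel_open(struct inode *inode, struct file *file)
    {
        struct iommu_sva *handle;

        /* Bind the opening process's address space to the device.
         * The PASID behind @handle is what gets programmed into the
         * command queue so the device walks the process page tables. */
        handle = iommu_sva_bind_device(accel_dev, current->mm);
        if (IS_ERR(handle))
            return PTR_ERR(handle);
        file->private_data = handle;
        return 0;
    }

    static int accel_release(struct inode *inode, struct file *file)
    {
        iommu_sva_unbind_device(file->private_data);
        return 0;
    }

    static const struct file_operations accel_fops = {
        .owner   = THIS_MODULE,
        .open    = accel_open,
        .release = accel_release,
        /* .mmap would expose the per-process queue/doorbell */
    };

    static struct miscdevice accel_misc = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "warpdrive0",
        .fops  = &accel_fops,
    };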


> > So i do not see any reason to do any of what you want inside VFIO. All you
> > want to do can be done outside just as easily. Moreover it would be better
> > if you defined each scenario clearly, because from where i sit it looks
> > like you are opening the door wide for userspace to DMA anywhere when the
> > IOMMU is disabled.
> > 
> > When the IOMMU is disabled you can _not_ expose a command queue to
> > userspace unless your device has its own page table, all commands are
> > relative to that page table, and the device page table is populated by the
> > kernel driver in a secure way (ie by checking that what is populated can
> > be accessed).
> > 
> > I do not believe your example device has such a page table, nor do i see
> > a fallback path when the IOMMU is disabled that forces the user to do an
> > ioctl for each command.
> > 
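To illustrate that fallback, here is a hedged sketch where the kernel,
not userspace, resolves every buffer address before it reaches the
queue. accel_cmd, accel_dev and queue_command() are hypothetical, and
get_user_pages_fast() is used with its 2018-era signature:

    #include <linux/dma-mapping.h>
    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/uaccess.h>

    struct accel_cmd { u64 buf; /* ... */ };   /* hypothetical */

    /* Hypothetical per-command submit ioctl for the no-IOMMU case. */
    static long accel_submit(struct file *file, unsigned long uarg)
    {
        struct accel_cmd cmd;
        struct page *page;

        if (copy_from_user(&cmd, (void __user *)uarg, sizeof(cmd)))
            return -EFAULT;

        /* Pin the referenced buffer so we know it is really mapped
         * and writable in the caller's address space ... */
        if (get_user_pages_fast(cmd.buf, 1, 1, &page) != 1)
            return -EFAULT;

        /* ... then substitute a kernel-chosen DMA address, so raw
         * user VAs never reach the hardware. */
        cmd.buf = dma_map_page(accel_dev, page, 0, PAGE_SIZE,
                               DMA_BIDIRECTIONAL);
        return queue_command(&cmd);            /* hypothetical */
    }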
> > Yes i understand that you target SVA/SVM, but still you claim to support
> > non SVA/SVM. The point is that userspace can not be trusted if you want
> > random programs to use your device. I am pretty sure that all users of
> > VFIO are trusted processes (like QEMU).
> > 
> > 
> > Finally, i am convinced that the IOMMU grouping stuff related to VFIO is
> > useless for your usecase. I really do not see the point of it; it just
> > complicates things for you for no reason AFAICT.
> 
> Indeed, I don't like the group thing. I believe VFIO's maintainers would not
> like it very much either;). But the problem is, the group maps to the same
> IOMMU (unit), which may be shared with other devices. That is a security
> problem I cannot ignore. I have to take it into account even if I don't use
> VFIO.

To me it seems you are making a policy decision in kernel space, ie
whether the device should be isolated in its own group or not is a
decision that is up to the sysadmin or something in userspace.
Right now existing users of SVA/SVM don't do that (at least AFAICT).

Do we really want to force such isolation?


> > > And personally, I believe the maturity and correctness of a framework are
> > > driven by applications. Now the problem in the accelerator world is that
> > > we don't have a direction. If we believe the requirement is right, the
> > > method itself is not a big problem in the end. We just need to give people
> > > a unified platform to share their work on.
> > 
> > I am not against that, but it seems to me that all you want to do is
> > a matter of simplifying discovery of such devices and sharing a few common
> > ioctls (DMA mapping, creating command queues, managing command queues, ...),
> > and again, for all of this i do not see the point of doing it under VFIO.
> 
> It is not a problem of device management, it is a problem of sharing address
> space.

This ties back to IOMMU SVA/SVM group isolation above.

Jérôme
