From: "Liu, Yi L" <yi.l.liu@intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: "Tian, Jun J" <jun.j.tian@intel.com>,
	"Tian, Kevin" <kevin.tian@intel.com>,
	"Raj, Ashok" <ashok.raj@intel.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"jean-philippe@linaro.org" <jean-philippe@linaro.org>,
	"stefanha@gmail.com" <stefanha@gmail.com>,
	Jason Wang <jasowang@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Sun, Yi Y" <yi.y.sun@intel.com>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"Zhu, Lingshan" <lingshan.zhu@intel.com>,
	"Wu, Hao" <hao.wu@intel.com>
Subject: RE: (proposal) RE: [PATCH v7 00/16] vfio: expose virtual Shared Virtual Addressing to VMs
Date: Tue, 20 Oct 2020 10:21:41 +0000	[thread overview]
Message-ID: <DM5PR11MB14354A8A126E686A5F20FEC2C31F0@DM5PR11MB1435.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20201019142526.GJ6219@nvidia.com>

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Monday, October 19, 2020 10:25 PM
> 
> On Mon, Oct 19, 2020 at 08:39:03AM +0000, Liu, Yi L wrote:
> > Hi Jason,
> >
> > Good to see your response.
> 
> Ah, I was away

got it. :-)

> > > > > Second, IOMMU nested translation is a per IOMMU domain
> > > > > capability. Since IOMMU domains are managed by VFIO/VDPA
> > > > > (alloc/free domain, attach/detach device, set/get domain
> > > > > attribute, etc.), reporting/enabling the nesting capability is
> > > > > a natural extension to the domain uAPI of existing passthrough
> > > > > frameworks.
> > > > > Actually, VFIO already includes a nesting enable interface even
> > > > > before this series. So it doesn't make sense to generalize this
> > > > > uAPI out.
> > >
> > > The subsystem that obtains an IOMMU domain for a device would have
> > > to register it with an open FD of the '/dev/sva'. That is the
> > > connection between the two subsystems. It would be some simple
> > > kernel internal
> > > stuff:
> > >
> > >   sva = get_sva_from_file(fd);
> >
> > Is this fd provided by userspace? I suppose /dev/sva has a set of
> > uAPIs which will finally program page tables into the host iommu
> > driver. As far as I know, that's weird for a VFIO user. Why should a
> > VFIO user connect to a /dev/sva fd after it has set a proper iommu
> > type on the opened container? The VFIO container already stands for an
> > iommu context with which userspace can program page mappings into the
> > host iommu.
> 
> Again the point is to dis-aggregate the vIOMMU related stuff from VFIO
> so it can be shared between more subsystems that need it.

I understand you here. :-)

> I'm sure there will be some weird overlaps because we can't delete any
> of the existing VFIO APIs, but that should not be a blocker.

But that weirdness is exactly what we should consider. It may be more
than an overlap; it may be a re-definition of the VFIO container. As I
mentioned, the VFIO container has stood for an IOMMU context since the
day it was defined. That could be the blocker. :-(

> Having VFIO run in a mode where '/dev/sva' provides all the IOMMU handling is
> a possible path.

This looks similar to the proposal from Jason Wang and Kevin Tian: add
a "/dev/iommu" and delegate the IOMMU domain alloc and device
attach/detach, which is now in the passthrough frameworks, to an
independent kernel driver. As Jason Wang said, it could replace the
vfio iommu type1 driver.

Jason Wang:
 "And all the proposal in this series is to reuse the container fd. It 
 should be possible to replace e.g type1 IOMMU with a unified module."
link: https://lore.kernel.org/kvm/20201019142526.GJ6219@nvidia.com/T/#md49fe9ac9d9eff6ddf5b8c2ee2f27eb2766f66f3

Kevin Tian:
 "Based on above, I feel a more reasonable way is to first make a 
 /dev/iommu uAPI supporting DMA map/unmap usages and then 
 introduce vSVA to it. Doing this order is because DMA map/unmap 
 is widely used thus can better help verify the core logic with 
 many existing devices."
link: https://lore.kernel.org/kvm/MWHPR11MB1645C702D148A2852B41FCA08C230@MWHPR11MB1645.namprd11.prod.outlook.com/

> 
> If your plan is to just opencode everything into VFIO then I don't see
> how VDPA will work well, and if proper in-kernel abstractions are built
> I fail to see how routing some of it through userspace is a fundamental
> problem.

I'm not an expert on vDPA for now, but seeing you three open source
veterans converge on a similar idea for a common place to cover IOMMU
handling, I think it may be a valuable thing to do. I say "may be"
because I'm not sure about Alex's opinion on such an idea. What is
certain is that it may introduce weird overlap with, or even a
re-definition of, existing things, as I replied above. We need to
evaluate the impact and mature the idea step by step, which takes
time, so perhaps we could do it in stages: first have a "/dev/iommu"
that handles page MAP/UNMAP and can be used by both VFIO and vDPA,
while VFIO keeps growing (adding features) by itself, and consider
adopting the new /dev/iommu later once it is competent. Of course
this needs Alex's approval. After that, new features such as SVA can
be added to /dev/iommu.
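
Just to make the staging idea concrete, here is a very rough sketch of
what the first-step MAP/UNMAP uAPI could look like. None of these names,
ioctl numbers or fields exist today; they are purely my assumption for
illustration, roughly mirroring what VFIO_IOMMU_MAP_DMA does in type1:

/* hypothetical uAPI sketch only -- nothing below exists today */
#include <linux/types.h>
#include <linux/ioctl.h>

struct iommu_dma_map {
    __u32   argsz;          /* size of this structure */
    __u32   flags;
#define IOMMU_DMA_MAP_READ      (1 << 0)
#define IOMMU_DMA_MAP_WRITE     (1 << 1)
    __u64   vaddr;          /* user virtual address to pin and map */
    __u64   iova;           /* IO virtual address seen by the device */
    __u64   size;           /* length of the mapping in bytes */
};

struct iommu_dma_unmap {
    __u32   argsz;
    __u32   flags;
    __u64   iova;
    __u64   size;
};

#define IOMMU_MAGIC             0x49    /* made-up ioctl magic */
#define IOMMU_DMA_MAP           _IOW(IOMMU_MAGIC, 0, struct iommu_dma_map)
#define IOMMU_DMA_UNMAP         _IOW(IOMMU_MAGIC, 1, struct iommu_dma_unmap)

With something like this in place, both VFIO and vDPA could attach their
devices to the same /dev/iommu context and share one MAP/UNMAP path
instead of each carrying its own copy of the type1 logic.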

> 
> > >   sva_register_device_to_pasid(sva, pasid, pci_device, iommu_domain);
> >
> > So this is supposed to be called by VFIO/VDPA to register the info to
> > /dev/sva.
> > right? And in dev/sva, it will also maintain the device/iommu_domain
> > and pasid info? will it be duplicated with VFIO/VDPA?
> 
> Each part needs to have the information it needs?

Yeah, but it's that duplication which I'm not very fond of. Perhaps the
idea from Jason Wang and Kevin would avoid such duplication.
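
To show where I think the duplication creeps in, here is a minimal
sketch of the kernel-internal flow you described, reusing the helper
names from your earlier mail. To be clear, get_sva_from_file(),
sva_register_device_to_pasid() and sva_put() do not exist today; this
is only my assumption of how the glue could look:

/* hypothetical glue in VFIO, built on the helpers named in this thread */
struct sva_context;     /* state owned by the /dev/sva driver */

static int vfio_bind_device_to_sva(struct pci_dev *pdev,
                                   struct iommu_domain *domain,
                                   int sva_fd, u32 pasid)
{
    struct sva_context *sva;
    int ret;

    /* resolve the userspace-provided /dev/sva fd to kernel state */
    sva = get_sva_from_file(sva_fd);
    if (IS_ERR(sva))
        return PTR_ERR(sva);

    /*
     * Register the device/PASID/domain triple with /dev/sva.  The same
     * device and domain are already tracked by the VFIO container,
     * which is exactly the duplication I am not fond of.
     */
    ret = sva_register_device_to_pasid(sva, pasid, pdev, domain);
    if (ret)
        sva_put(sva);   /* hypothetical reference drop */
    return ret;
}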

> > > > > Moreover, mapping page fault to subdevice requires pre-
> > > > > registering subdevice fault data to IOMMU layer when binding
> > > > > guest page table, while such fault data can be only retrieved
> > > > > from parent driver through VFIO/VDPA.
> > >
> > > Not sure what this means, page fault should be tied to the PASID,
> > > any hookup needed for that should be done in-kernel when the device
> > > is connected to the PASID.
> >
> > You may refer to chapter 7.4.1.1 of the VT-d spec. A page request is
> > reported to software together with the requestor id of the device. To
> > inject the page request into the guest, it needs the device info.
> 
> Whoever provides the vIOMMU emulation and relays the page fault to the guest
> has to translate the RID -

That's the point. But the device info (especially the sub-device info)
lives within the passthrough framework (e.g. VFIO), so page fault
reporting needs to go through the passthrough framework.
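
To illustrate what I mean by sub-device fault data (again a pure
assumption on my side, none of these structures or callbacks are real
code), the passthrough framework would have to register something like
the following at bind time, because only it knows which sub-device and
which user fd a given RID+PASID pair belongs to:

/* hypothetical fault-routing sketch */
struct subdev_fault_data {
    struct list_head  node;
    u32               rid;      /* physical requestor id of the parent */
    u32               pasid;    /* PASID bound to the guest page table */
    void              *subdev;  /* sub-device (e.g. mdev) owning the PASID */
    int (*report)(void *subdev, struct iommu_fault *fault);
};

/*
 * Called from the IOMMU layer when a page request arrives.  The IOMMU
 * only sees RID+PASID; mapping that back to the sub-device and to the
 * user fd that relays the fault into the guest needs data only
 * VFIO/VDPA have, which is why the registration has to go through the
 * passthrough framework.
 */
static int route_page_request(struct list_head *faults, u32 rid, u32 pasid,
                              struct iommu_fault *fault)
{
    struct subdev_fault_data *d;

    list_for_each_entry(d, faults, node)
        if (d->rid == rid && d->pasid == pasid)
            return d->report(d->subdev, fault);

    return -ENODEV;  /* no sub-device registered for this RID+PASID */
}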

> what does that have to do with VFIO?
> 
> How will VPDA provide the vIOMMU emulation?

Pardon me here. I believe vIOMMU emulation should be based on the IOMMU
vendor's specification, right? Please correct me if I'm missing
anything.

> Jason

Regards,
Yi Liu