From: "Liu, Yi L" <yi.l.liu@intel.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: "kwankhede@nvidia.com" <kwankhede@nvidia.com>,
	"Tian, Kevin" <kevin.tian@intel.com>,
	"baolu.lu@linux.intel.com" <baolu.lu@linux.intel.com>,
	"Sun, Yi Y" <yi.y.sun@intel.com>,
	"joro@8bytes.org" <joro@8bytes.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Masahiro Yamada <yamada.masahiro@socionext.com>
Subject: RE: [PATCH v1 9/9] samples: add vfio-mdev-pci driver
Date: Mon, 24 Jun 2019 08:20:38 +0000	[thread overview]
Message-ID: <A2975661238FB949B60364EF0F2C257439F05415@SHSMSX104.ccr.corp.intel.com> (raw)
In-Reply-To: <20190621095740.41e6e98e@x1.home>

Hi Alex,

> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Friday, June 21, 2019 11:58 PM
> To: Liu, Yi L <yi.l.liu@intel.com>
> Subject: Re: [PATCH v1 9/9] samples: add vfio-mdev-pci driver
> 
> On Fri, 21 Jun 2019 10:23:10 +0000
> "Liu, Yi L" <yi.l.liu@intel.com> wrote:
> 
> > Hi Alex,
> >
> > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > Sent: Friday, June 21, 2019 5:08 AM
> > > To: Liu, Yi L <yi.l.liu@intel.com>
> > > Subject: Re: [PATCH v1 9/9] samples: add vfio-mdev-pci driver
> > >
> > > On Thu, 20 Jun 2019 13:00:34 +0000
> > > "Liu, Yi L" <yi.l.liu@intel.com> wrote:
> > >
> > > > Hi Alex,
> > > >
> > > > > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > > > > Sent: Thursday, June 20, 2019 12:27 PM
> > > > > To: Liu, Yi L <yi.l.liu@intel.com>
> > > > > Subject: Re: [PATCH v1 9/9] samples: add vfio-mdev-pci driver
> > > > >
> > > > > On Sat,  8 Jun 2019 21:21:11 +0800
> > > > > Liu Yi L <yi.l.liu@intel.com> wrote:
> > > > >
> > > > > > This patch adds a sample driver named vfio-mdev-pci, which wraps
> > > > > > a PCI device as a mediated device. Once a PCI device is bound to
> > > > > > the vfio-mdev-pci driver, user-space access to it goes through the
> > > > > > vfio mdev framework. The usage of the device follows the mdev
> > > > > > management method, e.g. the user should create an mdev before
> > > > > > exposing the device to user space.
> > [...]
> > > >
> > > > > However, the patch below just makes the mdev interface behave
> > > > > correctly; I can't make it work on my system because commit
> > > > > 7bd50f0cd2fd ("vfio/type1: Add domain at(de)taching group helpers")
> > > >
> > > > What error did you encounter? I tested the patch with a device in a
> > > > singleton iommu group. I'm also searching for a proper machine with
> > > > multiple devices in an iommu group to test it.
> > >
> > > In vfio_iommu_type1, iommu backed mdev devices use the
> > > iommu_attach_device() interface, which includes:
> > >
> > >         if (iommu_group_device_count(group) != 1)
> > >                 goto out_unlock;
> > >
> > > So it's impossible to use with non-singleton groups currently.
> >
> > Hmmm, I think it is no longer good to use iommu_attach_device() for
> > iommu-backed mdev devices. In this flow, the purpose is to attach a device
> > to a domain, and there is no need to check whether the device is in a
> > singleton iommu group. I think it would be better to use
> > __iommu_attach_device() instead of iommu_attach_device().
> 
> That's static and unexported; it's intentionally not an exposed
> interface.  We can't attach devices in the same group to separate
> domains allocated through iommu_domain_alloc(), as this would violate
> the iommu group isolation principles.

Got it. :-) Then it's not good to expose such an interface. But to support
devices in a non-singleton iommu group, we need a new interface which doesn't
count the devices but attaches all of them.
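
Something like below is what I have in mind. Just a sketch: the function name
iommu_attach_device_shared() is made up here, while iommu_group_get() and
iommu_attach_group() are the existing group-aware helpers:

#include <linux/iommu.h>

/*
 * Sketch only, not an existing interface: attach @dev's whole iommu
 * group to @domain instead of refusing non-singleton groups the way
 * iommu_attach_device() does today.
 */
int iommu_attach_device_shared(struct iommu_domain *domain, struct device *dev)
{
	struct iommu_group *group;
	int ret;

	group = iommu_group_get(dev);	/* takes a reference on the group */
	if (!group)
		return -ENODEV;

	/* group-wide attach keeps the iommu isolation semantics intact */
	ret = iommu_attach_group(domain, group);

	iommu_group_put(group);
	return ret;
}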

> > Also, I found a potential mutex lock issue when using iommu_attach_device().
> > In vfio_iommu_attach_group(), it uses iommu_group_for_each_dev() to loop
> > over all the devices in the group, and it holds group->mutex while doing so.
> > And then vfio_mdev_attach_domain() calls iommu_attach_device(), which also
> > tries to take group->mutex. This would be a deadlock. If you are fine with
> > it, I may post another patch for it. :-)
> 
> Gack, yes, please send a patch.

Will do, maybe together with the support of vfio-mdev-pci for devices in a
non-singleton iommu group.
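
For the mutex issue, one possible shape (purely a sketch; attach_data and both
function names are made up) is to only record the devices while group->mutex
is held inside iommu_group_for_each_dev(), and do the actual attach after the
walk has released the mutex:

#include <linux/iommu.h>

struct attach_data {
	struct device *devs[16];	/* fixed size just for the sketch */
	int ndev;
};

static int collect_dev(struct device *dev, void *data)
{
	struct attach_data *ad = data;

	if (ad->ndev >= ARRAY_SIZE(ad->devs))
		return -ENOSPC;
	ad->devs[ad->ndev++] = dev;	/* record only, no attach under mutex */
	return 0;
}

static int attach_group_devices(struct iommu_domain *domain,
				struct iommu_group *group)
{
	struct attach_data ad = { .ndev = 0 };
	int i, ret;

	/* group->mutex is held only inside this walk */
	ret = iommu_group_for_each_dev(group, &ad, collect_dev);
	if (ret)
		return ret;

	/*
	 * Mutex released; note iommu_attach_device() still has the
	 * singleton-group check discussed above, so this only addresses
	 * the locking, not the non-singleton support.
	 */
	for (i = 0; i < ad.ndev; i++) {
		ret = iommu_attach_device(domain, ad.devs[i]);
		if (ret)
			break;
	}
	return ret;
}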

> 
> > > > > used iommu_attach_device() rather than iommu_attach_group() for non-aux
> > > > > mdev iommu_device.  Is there a requirement that the mdev parent device
> > > > > is in a singleton iommu group?
> > > >
> > > > I don't think there should be such a limitation. Per my understanding,
> > > > vfio-mdev-pci should also be able to bind to devices which share an
> > > > iommu group with other devices. vfio-pci works well for such devices,
> > > > and since the two drivers share most of the code, I think vfio-mdev-pci
> > > > should naturally support it as well.
> > >
> > > Yes, the difference though is that vfio.c knows when devices are in the
> > > same group; with mdev, vfio.c only knows about the non-iommu-backed
> > > group, not the group that is actually used for the iommu backing.  So
> > > we either need to enlighten vfio.c or further abstract those details in
> > > vfio_iommu_type1.c.
> >
> > Not sure if it is necessary to introduce more changes to vfio.c or
> > vfio_iommu_type1.c. If it's only for the scenario in which two devices
> > share an iommu_group, I guess it could be supported by using
> > __iommu_attach_device(), which does no device counting for the group. But
> > maybe I missed something here. It would be great if you could elaborate a
> > bit on it. :-)
> 
> We need to use the group semantics; there's a reason
> __iommu_attach_device() is not exposed, it's an internal helper.  I
> think there's no way around it: we need to track somewhere the actual
> group we're attaching to and have the smarts to re-use it for other
> devices in the same group.

Hmmm, exposing __iommu_attach_device() is not good, let's forget it. :-)

> > > > > If this is a simplification, then
> > > > > vfio-mdev-pci should not bind to devices where this is violated since
> > > > > there's no way to use the device.  Can we support it though?
> > > >
> > > > yeah, I think we need to support it.
> > > >
> > > > > If I have two devices in the same group and bind them both to
> > > > > vfio-mdev-pci, I end up with three groups, one for each mdev device and
> > > > > the original physical device group.  vfio.c works with the mdev groups
> > > > > and will try to match both groups to the container.  vfio_iommu_type1.c
> > > > > also works with the mdev groups, except for the point where we actually
> > > > > try to attach a group to a domain, which is the only window where we use
> > > > > the iommu_device rather than the provided group, but we don't record
> > > > > that anywhere.  Should struct vfio_group have a pointer to a reference
> > > > > counted object that tracks the actual iommu_group attached, such that
> > > > > we can determine that the group is already attached to the domain and
> > > > > not try to attach again?
> > > >
> > > > Agreed, we need to avoid such a duplicated attach. Instead of adding a
> > > > reference-counted object in vfio_group, I'm also considering the logic
> > > > below:
> >
> > Re-walking the code, I find the duplicated attach will happen on the
> > vfio-mdev-pci device, as vfio_mdev_attach_domain() only attaches the parent
> > devices of iommu-backed mdevs instead of all the devices within the
> > physical iommu_group. A vfio-pci device, in contrast, uses
> > iommu_attach_group(), which attaches all the devices within the
> > iommu-backed group. The same applies to detach: vfio_mdev_detach_domain()
> > detaches selected devices instead of all devices within the iommu-backed
> > group.
> 
> Yep, that's not good, for the non-aux case we need to follow the usual
> group semantics or else we're limited to singleton groups.

yep.

> 
> > > >     /*
> > > >      * Do this check in vfio_iommu_type1_attach_group(), after
> > > >      * mdev_group is initialized.
> > > >      */
> > > >     if (vfio_group->mdev_group) {
> > > >          /*
> > > >           * vfio_group->mdev_group being set means
> > > >           * vfio_group->iommu_group is not the actual iommu_group which
> > > >           * is going to be attached to the domain. To avoid a duplicate
> > > >           * iommu_group attach, check against the actual iommu_group.
> > > >           * vfio_get_parent_iommu_group() is a newly added helper
> > > >           * function which returns the actual iommu_group to be
> > > >           * attached for this mdev group.
> > > >           */
> > > >          p_iommu_group = vfio_get_parent_iommu_group(
> > > >                                  vfio_group->iommu_group);
> > > >          list_for_each_entry(d, &iommu->domain_list, next) {
> > > >                  if (find_iommu_group(d, p_iommu_group)) {
> > > >                          mutex_unlock(&iommu->lock);
> > > >                          /* skip group attach */
> > > >                  }
> > > >          }
> > > >     }
> > >
> > > We don't currently create a struct vfio_group for the parent, only for
> > > the mdev iommu group.  The iommu_attach for an iommu-backed mdev
> > > doesn't leave any traces of where it is actually attached; we just
> > > count on retracing our steps for the detach.  That's why I'm thinking
> > > we need an object somewhere to track it, and it needs to be reference
> > > counted so that if both a vfio-mdev-pci device and a vfio-pci device
> > > are using it, we leave it in place if either one is removed.
> >
> > Hmmm, here we are talking about tracking at the iommu_group level, though
> > I have no good idea on where the object should be placed yet. However, we
> > may need to track at the device level, as I mentioned in the paragraph
> > above. If not, there may be a sequencing issue: e.g. if the vfio-mdev-pci
> > device is attached first, the object will be initialized, and when the
> > vfio-pci device is attached, we will find the attach should be skipped and
> > just increment the ref count. But actually it should not be skipped, since
> > the vfio-mdev-pci attach does not attach all devices within the
> > iommu-backed group.
> 
> We can't do that, though; the entire group needs to be attached.

Agreed. Maybe we need another interface which is similar to
iommu_attach_device() but works for devices in non-singleton groups, so that
the attach for an iommu-backed mdev also results in a proper attach of all
the devices which share an iommu group with the parent device, just like
vfio-pci devices. The object for tracking purposes may be as below:

struct vfio_iommu_object {
	struct iommu_group *group;	/* iommu-backed group actually attached */
	struct kref kref;		/* one reference per attaching user */
};

And I think it should be per-domain and per-iommu-backed-group, since
aux-domain support allows an iommu-backed group to be attached to multiple
domains. I'm considering whether it is OK to have a list in vfio_domain.
Before each domain attach, vfio should check the list to see if the
iommu-backed group has been attached already. For vfio-pci devices, use their
own iommu group to search the list; for vfio-mdev-pci devices, use the parent
device's iommu group. Thus we avoid a duplicate attach. Thoughts?
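
In pseudo-code, the check could look like below. Again just a sketch: it
assumes vfio_iommu_object additionally gains a list_head member (next) and
that vfio_domain keeps an attached_groups list of these objects; both names
are made up here:

#include <linux/iommu.h>
#include <linux/kref.h>
#include <linux/list.h>

/* Return the tracking object if @group is already attached to @domain. */
static struct vfio_iommu_object *
vfio_domain_find_attached(struct vfio_domain *domain,
			  struct iommu_group *group)
{
	struct vfio_iommu_object *obj;

	list_for_each_entry(obj, &domain->attached_groups, next) {
		if (obj->group == group) {
			kref_get(&obj->kref);	/* one more user of this attach */
			return obj;
		}
	}
	return NULL;	/* not attached yet, caller does the real attach */
}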
 
> > What's more, regarding the sIOV case, a parent device may have multiple
> > mdevs and the mdevs may be assigned to the same VM. Thus there will be
> > multiple attaches on this parent device. This also makes me believe
> > tracking at the device level would be better.
> 
> The aux domain support essentially specifies that the device can be
> attached to multiple domains, so I think we're OK for device-level
> group attach there, but not for bare iommu-backed devices.  Thanks,

Got it.

Thanks,
Yi Liu

Thread overview: 26+ messages
2019-06-08 13:21 [PATCH v1 0/9] vfio_pci: wrap pci device as a mediated device Liu Yi L
2019-06-08 13:21 ` [PATCH v1 1/9] vfio_pci: move vfio_pci_is_vga/vfio_vga_disabled to header Liu Yi L
2019-06-08 13:21 ` [PATCH v1 2/9] vfio_pci: refine user config reference in vfio-pci module Liu Yi L
2019-06-08 13:21 ` [PATCH v1 3/9] vfio_pci: refine vfio_pci_driver reference in vfio_pci.c Liu Yi L
2019-06-08 13:21 ` [PATCH v1 4/9] vfio_pci: make common functions be extern Liu Yi L
2019-06-08 13:21 ` [PATCH v1 5/9] vfio_pci: duplicate vfio_pci.c Liu Yi L
2019-06-08 13:21 ` [PATCH v1 6/9] vfio_pci: shrink vfio_pci_common.c Liu Yi L
2019-06-08 13:21 ` [PATCH v1 7/9] vfio_pci: shrink vfio_pci.c Liu Yi L
2019-06-08 13:21 ` [PATCH v1 8/9] vfio/pci: protect cap/ecap_perm bits alloc/free with atomic op Liu Yi L
2019-06-08 13:21 ` [PATCH v1 9/9] samples: add vfio-mdev-pci driver Liu Yi L
2019-06-20  4:26   ` Alex Williamson
2019-06-20 13:00     ` Liu, Yi L
2019-06-20 21:07       ` Alex Williamson
2019-06-21 10:23         ` Liu, Yi L
2019-06-21 15:57           ` Alex Williamson
2019-06-24  8:20             ` Liu, Yi L [this message]
2019-06-28 15:07               ` Alex Williamson
2019-07-03  8:25                 ` Liu, Yi L
2019-07-03 17:22                   ` Alex Williamson
2019-07-04  9:11                     ` Liu, Yi L
2019-07-05 15:55                       ` Alex Williamson
2019-07-11 12:27                         ` Liu, Yi L
2019-07-11 19:08                           ` Alex Williamson
2019-07-12 12:55                             ` Liu, Yi L
2019-07-19 20:57                               ` Alex Williamson
2019-07-26  9:04                                 ` Liu, Yi L
