From: Jacob Pan <jacob.jun.pan@linux.intel.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: Alex Williamson <alex.williamson@redhat.com>,
	"Liu, Yi L" <yi.l.liu@intel.com>,
	"eric.auger@redhat.com" <eric.auger@redhat.com>,
	"joro@8bytes.org" <joro@8bytes.org>,
	"Raj, Ashok" <ashok.raj@intel.com>,
	"Tian, Jun J" <jun.j.tian@intel.com>,
	"Sun, Yi Y" <yi.y.sun@intel.com>,
	"jean-philippe@linaro.org" <jean-philippe@linaro.org>,
	"peterx@redhat.com" <peterx@redhat.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Wu, Hao" <hao.wu@intel.com>,
	jacob.jun.pan@linux.intel.com
Subject: Re: [PATCH v1 7/8] vfio/type1: Add VFIO_IOMMU_CACHE_INVALIDATE
Date: Fri, 3 Apr 2020 08:31:45 -0700	[thread overview]
Message-ID: <20200403083145.5097d346@jacob-builder> (raw)
In-Reply-To: <AADFC41AFE54684AB9EE6CBC0274A5D19D807C4A@SHSMSX104.ccr.corp.intel.com>

On Fri, 3 Apr 2020 06:39:22 +0000
"Tian, Kevin" <kevin.tian@intel.com> wrote:

> > From: Alex Williamson <alex.williamson@redhat.com>
> > Sent: Friday, April 3, 2020 4:24 AM
> > 
> > On Sun, 22 Mar 2020 05:32:04 -0700
> > "Liu, Yi L" <yi.l.liu@intel.com> wrote:
> >   
> > > From: Liu Yi L <yi.l.liu@linux.intel.com>
> > >
> > > For VFIO IOMMUs with the type VFIO_TYPE1_NESTING_IOMMU, guest
> > > "owns" the first-level/stage-1 translation structures, the host
> > > IOMMU driver has no knowledge of first-level/stage-1 structure
> > > cache updates unless the guest invalidation requests are trapped
> > > and propagated to the host.
> > >
> > > This patch adds a new IOCTL VFIO_IOMMU_CACHE_INVALIDATE to
> > > propagate guest first-level/stage-1 IOMMU cache invalidations to
> > > the host to ensure IOMMU cache correctness.
> > >
> > > With this patch, vSVA (Virtual Shared Virtual Addressing) can be
> > > used safely as host IOMMU iotlb correctness is ensured.
> > >
> > > Cc: Kevin Tian <kevin.tian@intel.com>
> > > CC: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > > Cc: Alex Williamson <alex.williamson@redhat.com>
> > > Cc: Eric Auger <eric.auger@redhat.com>
> > > Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > > Signed-off-by: Liu Yi L <yi.l.liu@linux.intel.com>
> > > Signed-off-by: Eric Auger <eric.auger@redhat.com>
> > > Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> > > ---
> > >  drivers/vfio/vfio_iommu_type1.c | 49 +++++++++++++++++++++++++++++++++++++++++
> > >  include/uapi/linux/vfio.h       | 22 ++++++++++++++++++
> > >  2 files changed, 71 insertions(+)
> > >
> > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > index a877747..937ec3f 100644
> > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > @@ -2423,6 +2423,15 @@ static long vfio_iommu_type1_unbind_gpasid(struct vfio_iommu *iommu,
> > >  	return ret;
> > >  }
> > >
> > > +static int vfio_cache_inv_fn(struct device *dev, void *data)
> > > +{
> > > +	struct domain_capsule *dc = (struct domain_capsule *)data;
> > > +	struct iommu_cache_invalidate_info *cache_inv_info =
> > > +		(struct iommu_cache_invalidate_info *) dc->data;
> > > +
> > > +	return iommu_cache_invalidate(dc->domain, dev, cache_inv_info);
> > > +}
> > > +
> > >  static long vfio_iommu_type1_ioctl(void *iommu_data,
> > >  				   unsigned int cmd, unsigned long arg)
> > >  {
> > > @@ -2629,6 +2638,46 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> > >  		}
> > >  		kfree(gbind_data);
> > >  		return ret;
> > > +	} else if (cmd == VFIO_IOMMU_CACHE_INVALIDATE) {
> > > +		struct vfio_iommu_type1_cache_invalidate cache_inv;
> > > +		u32 version;
> > > +		int info_size;
> > > +		void *cache_info;
> > > +		int ret;
> > > +
> > > +		minsz = offsetofend(struct vfio_iommu_type1_cache_invalidate,
> > > +				    flags);
> > 
> > This breaks backward compatibility as soon as struct
> > iommu_cache_invalidate_info changes size by its defined versioning
> > scheme, i.e. a field gets added, the version is bumped, and all existing
> > userspace breaks.  Our minsz is offsetofend to the version field,
> > interpret the version to size, then reevaluate argsz.  
> 
> btw the version scheme was challenged by Christoph Hellwig. After
> some discussion, we need your guidance on how to move forward.
> Jacob summarized the available options below:
> 	https://lkml.org/lkml/2020/4/2/876
> 
For this particular case, I don't quite get the difference between
minsz=flags and minsz=version. Our IOMMU version scheme only changes
size at the end, where the variable union is used for vendor-specific
extensions. A version bump does not change the size (it only re-purposes
padding) from the start of the UAPI structure to the union, i.e. the
version field will __always__ come right after
struct vfio_iommu_type1_cache_invalidate.flags.
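
Either way, with the argsz re-evaluation Alex asked for added, the
parsing I have in mind would look roughly like this (a sketch only, on
top of this patch; it assumes iommu_uapi_get_data_size() and the
version-first layout of struct iommu_cache_invalidate_info stay as in
this series):

	minsz = offsetofend(struct vfio_iommu_type1_cache_invalidate, flags);

	if (copy_from_user(&cache_inv, (void __user *)arg, minsz))
		return -EFAULT;

	if (cache_inv.argsz < minsz || cache_inv.flags)
		return -EINVAL;

	/* version is the first field of struct iommu_cache_invalidate_info */
	if (copy_from_user(&version, (void __user *)(arg + minsz),
			   sizeof(version)))
		return -EFAULT;

	info_size = iommu_uapi_get_data_size(IOMMU_UAPI_CACHE_INVAL, version);

	/* re-evaluate argsz now that the version tells us the real data size */
	if (cache_inv.argsz < minsz + info_size)
		return -EINVAL;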


> >   
> > > +
> > > +		if (copy_from_user(&cache_inv, (void __user *)arg, minsz))
> > > +			return -EFAULT;
> > > +
> > > +		if (cache_inv.argsz < minsz || cache_inv.flags)
> > > +			return -EINVAL;
> > > +
> > > +		/* Get the version of struct iommu_cache_invalidate_info */
> > > +		if (copy_from_user(&version,
> > > +			(void __user *) (arg + minsz), sizeof(version)))
> > > +			return -EFAULT;
> > > +
> > > +		info_size = iommu_uapi_get_data_size(
> > > +					IOMMU_UAPI_CACHE_INVAL, version);
> > > +
> > > +		cache_info = kzalloc(info_size, GFP_KERNEL);
> > > +		if (!cache_info)
> > > +			return -ENOMEM;
> > > +
> > > +		if (copy_from_user(cache_info,
> > > +			(void __user *) (arg + minsz), info_size)) {
> > > +			kfree(cache_info);
> > > +			return -EFAULT;
> > > +		}
> > > +
> > > +		mutex_lock(&iommu->lock);
> > > +		ret = vfio_iommu_for_each_dev(iommu, vfio_cache_inv_fn,
> > > +					    cache_info);
> > 
> > How does a user respond when their cache invalidate fails?  Isn't
> > this also another case where our for_each_dev can fail at an
> > arbitrary point leaving us with no idea whether each device even
> > had the opportunity to perform the invalidation request?  I don't
> > see how we have any chance to maintain coherency after this
> > faults.  
> 
> Then can we make it simple and support only singleton groups?
> 
> >   
> > > +		mutex_unlock(&iommu->lock);
> > > +		kfree(cache_info);
> > > +		return ret;
> > >  	}
> > >
> > >  	return -ENOTTY;
> > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > index 2235bc6..62ca791 100644
> > > --- a/include/uapi/linux/vfio.h
> > > +++ b/include/uapi/linux/vfio.h
> > > @@ -899,6 +899,28 @@ struct vfio_iommu_type1_bind {
> > >   */
> > >  #define VFIO_IOMMU_BIND		_IO(VFIO_TYPE, VFIO_BASE + 23)
> > >
> > > +/**
> > > + * VFIO_IOMMU_CACHE_INVALIDATE - _IOW(VFIO_TYPE, VFIO_BASE + 24,
> > > + *			struct vfio_iommu_type1_cache_invalidate)
> > > + *
> > > + * Propagate guest IOMMU cache invalidation to the host. The cache
> > > + * invalidation information is conveyed by @cache_info, the content
> > > + * format would be structures defined in uapi/linux/iommu.h. User
> > > + * should be aware of that the struct iommu_cache_invalidate_info
> > > + * has a @version field, vfio needs to parse this field before getting
> > > + * data from userspace.
> > > + *
> > > + * Availability of this IOCTL is after VFIO_SET_IOMMU.  
> > 
> > Is this a necessary qualifier?  A user can try to call this ioctl at
> > any point, it only makes sense in certain configurations, but it
> > should always "do the right thing" relative to the container iommu
> > config.
> > 
> > Also, I don't see anything in these last few patches testing the
> > operating IOMMU model, what happens when a user calls them when not
> > using the nesting IOMMU?
> > 
> > Is this ioctl and the previous BIND ioctl only valid when configured
> > for the nesting IOMMU type?  
> 
> I think so. We should add the nesting check in those new ioctls.
> 
I also added a nesting domain attribute check in the IOMMU driver, so
binding a guest PASID will fail if nesting mode is not supported. There
will be no invalidation without a bind.
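
Conceptually the check is along these lines (a sketch only; the real
check lives in the vendor IOMMU driver's bind_gpasid path and may not
use this exact interface):

	int nesting = 0;

	/* Reject the guest PASID bind if the domain is not in nesting mode */
	if (iommu_domain_get_attr(domain, DOMAIN_ATTR_NESTING, &nesting) ||
	    !nesting)
		return -EINVAL;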

> >   
> > > + *
> > > + * returns: 0 on success, -errno on failure.
> > > + */
> > > +struct vfio_iommu_type1_cache_invalidate {
> > > +	__u32   argsz;
> > > +	__u32   flags;
> > > +	struct	iommu_cache_invalidate_info cache_info;
> > > +};
> > > +#define VFIO_IOMMU_CACHE_INVALIDATE      _IO(VFIO_TYPE, VFIO_BASE + 24)
> > 
> > The future extension capabilities of this ioctl worry me, I wonder
> > if we should do another data[] with flag defining that data as
> > CACHE_INFO.  
> 
> Can you elaborate? Does it mean that with this approach we don't rely
> on the iommu driver to provide the version_to_size conversion, and
> instead just pass data[] to the iommu driver for further audit?
> 
I guess Alex meant we do something similar to:
struct vfio_irq_set {
	__u32	argsz;
	__u32	flags;
#define VFIO_IRQ_SET_DATA_NONE		(1 << 0) /* Data not present */
#define VFIO_IRQ_SET_DATA_BOOL		(1 << 1) /* Data is bool (u8) */
#define VFIO_IRQ_SET_DATA_EVENTFD	(1 << 2) /* Data is eventfd (s32) */
#define VFIO_IRQ_SET_ACTION_MASK	(1 << 3) /* Mask interrupt */
#define VFIO_IRQ_SET_ACTION_UNMASK	(1 << 4) /* Unmask interrupt */
#define VFIO_IRQ_SET_ACTION_TRIGGER	(1 << 5) /* Trigger interrupt */
	__u32	index;
	__u32	start;
	__u32	count;
	__u8	data[];
};

So we could do something like:
struct vfio_iommu_type1_cache_invalidate {
	__u32   argsz;
#define VFIO_INVL_DATA_NONE		(1 << 0) /* Data not present */
#define VFIO_INVL_DATA_CACHE_INFO	(1 << 1) /* Data is cache_invalidate_info */
	__u32   flags;
	__u8	data[];
};

We would still need the version_to_size conversion, but only under the
flag check:

if (flags & VFIO_INVL_DATA_CACHE_INFO)
	get_size_from_version();
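
Fleshed out a bit, the VFIO side could then parse it roughly like this
(a sketch only, reusing the helpers from this patch; the VFIO_INVL_*
flag names above are just placeholders):

	minsz = offsetofend(struct vfio_iommu_type1_cache_invalidate, flags);

	if (copy_from_user(&cache_inv, (void __user *)arg, minsz))
		return -EFAULT;

	if (cache_inv.argsz < minsz)
		return -EINVAL;

	if (cache_inv.flags & VFIO_INVL_DATA_CACHE_INFO) {
		/*
		 * data[] carries struct iommu_cache_invalidate_info,
		 * version first, so the size lookup stays the same.
		 */
		if (copy_from_user(&version, (void __user *)(arg + minsz),
				   sizeof(version)))
			return -EFAULT;

		info_size = iommu_uapi_get_data_size(IOMMU_UAPI_CACHE_INVAL,
						     version);
	}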

> >   
> > > +
> > >  /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
> > >
> > >  /*  
> 
> Thanks
> Kevin

[Jacob Pan]

Thread overview: 110+ messages
2020-03-22 12:31 [PATCH v1 0/8] vfio: expose virtual Shared Virtual Addressing to VMs Liu, Yi L
2020-03-22 12:31 ` [PATCH v1 1/8] vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free) Liu, Yi L
2020-03-22 16:21   ` kbuild test robot
2020-03-30  8:32   ` Tian, Kevin
2020-03-30 14:36     ` Liu, Yi L
2020-03-31  5:40       ` Tian, Kevin
2020-03-31 13:22         ` Liu, Yi L
2020-04-01  5:43           ` Tian, Kevin
2020-04-01  5:48             ` Liu, Yi L
2020-03-31  7:53   ` Christoph Hellwig
2020-03-31  8:17     ` Liu, Yi L
2020-03-31  8:32     ` Liu, Yi L
2020-03-31  8:36       ` Liu, Yi L
2020-03-31  9:15         ` Christoph Hellwig
2020-04-02 13:52   ` Jean-Philippe Brucker
2020-04-03 11:56     ` Liu, Yi L
2020-04-03 12:39       ` Jean-Philippe Brucker
2020-04-03 12:44         ` Liu, Yi L
2020-04-02 17:50   ` Alex Williamson
2020-04-03  5:58     ` Tian, Kevin
2020-04-03 15:14       ` Alex Williamson
2020-04-07  4:42         ` Tian, Kevin
2020-04-07 15:14           ` Alex Williamson
2020-04-03 13:12     ` Liu, Yi L
2020-04-03 17:50       ` Alex Williamson
2020-04-07  4:52         ` Tian, Kevin
2020-04-08  0:52         ` Liu, Yi L
2020-03-22 12:31 ` [PATCH v1 2/8] vfio/type1: Add vfio_iommu_type1 parameter for quota tuning Liu, Yi L
2020-03-22 17:20   ` kbuild test robot
2020-03-30  8:40   ` Tian, Kevin
2020-03-30  8:52     ` Liu, Yi L
2020-03-30  9:19       ` Tian, Kevin
2020-03-30  9:26         ` Liu, Yi L
2020-03-30 11:44           ` Tian, Kevin
2020-04-02 17:58             ` Alex Williamson
2020-04-03  8:15               ` Liu, Yi L
2020-03-22 12:32 ` [PATCH v1 3/8] vfio/type1: Report PASID alloc/free support to userspace Liu, Yi L
2020-03-30  9:43   ` Tian, Kevin
2020-04-01  7:46     ` Liu, Yi L
2020-04-01  9:41   ` Auger Eric
2020-04-01 13:13     ` Liu, Yi L
2020-04-02 18:01   ` Alex Williamson
2020-04-03  8:17     ` Liu, Yi L
2020-04-03 17:28       ` Alex Williamson
2020-04-04 11:36         ` Liu, Yi L
2020-03-22 12:32 ` [PATCH v1 4/8] vfio: Check nesting iommu uAPI version Liu, Yi L
2020-03-22 18:30   ` kbuild test robot
2020-03-22 12:32 ` [PATCH v1 5/8] vfio/type1: Report 1st-level/stage-1 format to userspace Liu, Yi L
2020-03-22 16:44   ` kbuild test robot
2020-03-30 11:48   ` Tian, Kevin
2020-04-01  7:38     ` Liu, Yi L
2020-04-01  7:56       ` Tian, Kevin
2020-04-01  8:06         ` Liu, Yi L
2020-04-01  8:08           ` Tian, Kevin
2020-04-01  8:09             ` Liu, Yi L
2020-04-01  8:51   ` Auger Eric
2020-04-01 12:51     ` Liu, Yi L
2020-04-01 13:01       ` Auger Eric
2020-04-03  8:23         ` Jean-Philippe Brucker
2020-04-07  9:43           ` Liu, Yi L
2020-04-08  1:02             ` Liu, Yi L
2020-04-08 10:27             ` Auger Eric
2020-04-09  8:14               ` Jean-Philippe Brucker
2020-04-09  9:01                 ` Auger Eric
2020-04-09 12:47                 ` Liu, Yi L
2020-04-10  3:28                   ` Auger Eric
2020-04-10  3:48                     ` Liu, Yi L
2020-04-10 12:30                   ` Liu, Yi L
2020-04-02 19:20   ` Alex Williamson
2020-04-03 11:59     ` Liu, Yi L
2020-03-22 12:32 ` [PATCH v1 6/8] vfio/type1: Bind guest page tables to host Liu, Yi L
2020-03-22 18:10   ` kbuild test robot
2020-03-30 12:46   ` Tian, Kevin
2020-04-01  9:13     ` Liu, Yi L
2020-04-02  2:12       ` Tian, Kevin
2020-04-02  8:05         ` Liu, Yi L
2020-04-03  8:34           ` Jean-Philippe Brucker
2020-04-07 10:33             ` Liu, Yi L
2020-04-09  8:28               ` Jean-Philippe Brucker
2020-04-09  9:15                 ` Liu, Yi L
2020-04-09  9:38                   ` Jean-Philippe Brucker
2020-04-02 19:57   ` Alex Williamson
2020-04-03 13:30     ` Liu, Yi L
2020-04-03 18:11       ` Alex Williamson
2020-04-04 10:28         ` Liu, Yi L
2020-04-11  5:52     ` Liu, Yi L
2020-03-22 12:32 ` [PATCH v1 7/8] vfio/type1: Add VFIO_IOMMU_CACHE_INVALIDATE Liu, Yi L
2020-03-30 12:58   ` Tian, Kevin
2020-04-01  7:49     ` Liu, Yi L
2020-03-31  7:56   ` Christoph Hellwig
2020-03-31 10:48     ` Liu, Yi L
2020-04-02 20:24   ` Alex Williamson
2020-04-03  6:39     ` Tian, Kevin
2020-04-03 15:31       ` Jacob Pan [this message]
2020-04-03 15:34       ` Alex Williamson
2020-04-08  2:28         ` Liu, Yi L
2020-04-16 10:40         ` Liu, Yi L
2020-04-16 12:09           ` Tian, Kevin
2020-04-16 12:42             ` Auger Eric
2020-04-16 13:28               ` Tian, Kevin
2020-04-16 15:12                 ` Auger Eric
2020-04-16 14:40           ` Alex Williamson
2020-04-16 14:48             ` Alex Williamson
2020-04-17  6:03             ` Liu, Yi L
2020-03-22 12:32 ` [PATCH v1 8/8] vfio/type1: Add vSVA support for IOMMU-backed mdevs Liu, Yi L
2020-03-30 13:18   ` Tian, Kevin
2020-04-01  7:51     ` Liu, Yi L
2020-04-02 20:33   ` Alex Williamson
2020-04-03 13:39     ` Liu, Yi L
2020-03-26 12:56 ` [PATCH v1 0/8] vfio: expose virtual Shared Virtual Addressing to VMs Liu, Yi L
