From: Auger Eric <eric.auger@redhat.com>
To: "Liu, Yi L" <yi.l.liu@intel.com>,
	"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"baolu.lu@linux.intel.com" <baolu.lu@linux.intel.com>,
	"joro@8bytes.org" <joro@8bytes.org>
Cc: "jean-philippe@linaro.org" <jean-philippe@linaro.org>,
	"Tian, Kevin" <kevin.tian@intel.com>,
	"Raj, Ashok" <ashok.raj@intel.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"stefanha@gmail.com" <stefanha@gmail.com>,
	"Tian, Jun J" <jun.j.tian@intel.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Sun, Yi Y" <yi.y.sun@intel.com>, "Wu, Hao" <hao.wu@intel.com>
Subject: Re: [PATCH v4 04/15] vfio/type1: Report iommu nesting info to userspace
Date: Mon, 6 Jul 2020 15:45:29 +0200
Message-ID: <94b4e5d3-8d24-9a55-6bee-ed86f3846996@redhat.com>
In-Reply-To: <DM5PR11MB1435290B6CD561EC61027892C3690@DM5PR11MB1435.namprd11.prod.outlook.com>

Hi Yi,

On 7/6/20 3:10 PM, Liu, Yi L wrote:
> Hi Eric,
> 
>> From: Auger Eric <eric.auger@redhat.com>
>> Sent: Monday, July 6, 2020 6:37 PM
>>
>> Yi,
>>
>> On 7/4/20 1:26 PM, Liu Yi L wrote:
>>> This patch exports iommu nesting capability info to user space through
>>> VFIO. User space is expected to check this info for supported uAPIs (e.g.
>>> PASID alloc/free, bind page table, and cache invalidation) and the vendor
>>> specific format information for first level/stage page table that will be
>>> bound to.
>>>
>>> The nesting info is available only after the nesting iommu type is set
>>> for a container. Current implementation imposes one limitation - one
>>> nesting container should include at most one group. The philosophy of
>>> vfio container is having all groups/devices within the container share
>>> the same IOMMU context. When vSVA is enabled, one IOMMU context could
>>> include one 2nd-level address space and multiple 1st-level address spaces.
>>> While the 2nd-leve address space is reasonably sharable by multiple groups
>> level
> 
> oh, yes.
> 
>>> , blindly sharing 1st-level address spaces across all groups within the
>>> container might instead break the guest expectation. In the future sub/
>>> super container concept might be introduced to allow partial address space
>>> sharing within an IOMMU context. But for now let's go with this restriction
>>> by requiring singleton container for using nesting iommu features. Below
>>> link has the related discussion about this decision.
>>>
>>> https://lkml.org/lkml/2020/5/15/1028
>>>
>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>> CC: Jacob Pan <jacob.jun.pan@linux.intel.com>
>>> Cc: Alex Williamson <alex.williamson@redhat.com>
>>> Cc: Eric Auger <eric.auger@redhat.com>
>>> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
>>> Cc: Joerg Roedel <joro@8bytes.org>
>>> Cc: Lu Baolu <baolu.lu@linux.intel.com>
>>> Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
>>> ---
>>> v3 -> v4:
>>> *) address comments against v3.
>>>
>>> v1 -> v2:
>>> *) added in v2
>>> ---
>>>
>>>  drivers/vfio/vfio_iommu_type1.c | 105 +++++++++++++++++++++++++++++++++++-----
>>>  include/uapi/linux/vfio.h       |  16 ++++++
>>>  2 files changed, 109 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>>> index 7accb59..80623b8 100644
>>> --- a/drivers/vfio/vfio_iommu_type1.c
>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>> @@ -62,18 +62,20 @@ MODULE_PARM_DESC(dma_entry_limit,
>>>  		 "Maximum number of user DMA mappings per container (65535).");
>>>
>>>  struct vfio_iommu {
>>> -	struct list_head	domain_list;
>>> -	struct list_head	iova_list;
>>> -	struct vfio_domain	*external_domain; /* domain for external user */
>>> -	struct mutex		lock;
>>> -	struct rb_root		dma_list;
>>> -	struct blocking_notifier_head notifier;
>>> -	unsigned int		dma_avail;
>>> -	uint64_t		pgsize_bitmap;
>>> -	bool			v2;
>>> -	bool			nesting;
>>> -	bool			dirty_page_tracking;
>>> -	bool			pinned_page_dirty_scope;
>>> +	struct list_head		domain_list;
>>> +	struct list_head		iova_list;
>>> +	struct vfio_domain		*external_domain; /* domain for
>>> +							     external user */
>> nit: put the comment before the field?
> 
> do you mean below?
> 
> +	/* domain for external user */
> +	struct vfio_domain		*external_domain;
yes that's what I meant
> 
>>> +	struct mutex			lock;
>>> +	struct rb_root			dma_list;
>>> +	struct blocking_notifier_head	notifier;
>>> +	unsigned int			dma_avail;
>>> +	uint64_t			pgsize_bitmap;
>>> +	bool				v2;
>>> +	bool				nesting;
>>> +	bool				dirty_page_tracking;
>>> +	bool				pinned_page_dirty_scope;
>>> +	struct iommu_nesting_info	*nesting_info;
>>>  };
>>>
>>>  struct vfio_domain {
>>> @@ -130,6 +132,9 @@ struct vfio_regions {
>>>  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
>>>  					(!list_empty(&iommu->domain_list))
>>>
>>> +#define IS_DOMAIN_IN_CONTAINER(iommu)	((iommu->external_domain) || \
>>> +					 (!list_empty(&iommu->domain_list)))
>> rename into something like CONTAINER_HAS_DOMAIN()?
> 
> got it.
> 
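
For reference, the agreed rename would look something like this (a sketch only; the macro body is unchanged from the patch, only the name follows the suggestion above):

	/* Suggested rename of IS_DOMAIN_IN_CONTAINER(); same semantics. */
	#define CONTAINER_HAS_DOMAIN(iommu)	((iommu)->external_domain || \
						 !list_empty(&(iommu)->domain_list))
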
>>> +
>>>  #define DIRTY_BITMAP_BYTES(n)	(ALIGN(n, BITS_PER_TYPE(u64)) / BITS_PER_BYTE)
>>>
>>>  /*
>>> @@ -1929,6 +1934,13 @@ static void vfio_iommu_iova_insert_copy(struct vfio_iommu *iommu,
>>>
>>>  	list_splice_tail(iova_copy, iova);
>>>  }
>>> +
>>> +static void vfio_iommu_release_nesting_info(struct vfio_iommu *iommu)
>>> +{
>>> +	kfree(iommu->nesting_info);
>>> +	iommu->nesting_info = NULL;
>>> +}
>>> +
>>>  static int vfio_iommu_type1_attach_group(void *iommu_data,
>>>  					 struct iommu_group *iommu_group)
>>>  {
>>> @@ -1959,6 +1971,12 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>>>  		}
>>>  	}
>>>
>>> +	/* Nesting type container can include only one group */
>>> +	if (iommu->nesting && IS_DOMAIN_IN_CONTAINER(iommu)) {
>>> +		mutex_unlock(&iommu->lock);
>>> +		return -EINVAL;
>>> +	}
>>> +
>>>  	group = kzalloc(sizeof(*group), GFP_KERNEL);
>>>  	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
>>>  	if (!group || !domain) {
>>> @@ -2029,6 +2047,36 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>>>  	if (ret)
>>>  		goto out_domain;
>>>
>>> +	/* Nesting cap info is available only after attaching */
>>> +	if (iommu->nesting) {
>>> +		struct iommu_nesting_info tmp;
>>> +		struct iommu_nesting_info *info;
>>> +
>>> +		/* First get the size of vendor specific nesting info */
>>> +		ret = iommu_domain_get_attr(domain->domain,
>>> +					    DOMAIN_ATTR_NESTING,
>>> +					    &tmp);
>>> +		if (ret)
>>> +			goto out_detach;
>>> +
>>> +		info = kzalloc(tmp.size, GFP_KERNEL);
>> nit: you may directly use iommu->nesting_info
> 
> got you.
> 
>>> +		if (!info) {
>>> +			ret = -ENOMEM;
>>> +			goto out_detach;
>>> +		}
>>> +
>>> +		/* Now get the nesting info */
>>> +		info->size = tmp.size;
>>> +		ret = iommu_domain_get_attr(domain->domain,
>>> +					    DOMAIN_ATTR_NESTING,
>>> +					    info);
>>> +		if (ret) {
>>> +			kfree(info);
>> ... and set it back to NULL here if it fails
> 
> and maybe no need to free it here as vfio_iommu_release_nesting_info()
> will free the nesting_info.
> 
>>> +			goto out_detach;
>>> +		}
>>> +		iommu->nesting_info = info;
>>> +	}
>>> +
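
Folding both comments above into the quoted hunk (allocate straight into iommu->nesting_info, and let the release helper free and NULL it on the error path), the result could look roughly like the sketch below; names are taken from the patch, and this is not the final v5 code:

	if (iommu->nesting) {
		struct iommu_nesting_info tmp;

		/* First call: the driver writes the required size into tmp. */
		ret = iommu_domain_get_attr(domain->domain,
					    DOMAIN_ATTR_NESTING, &tmp);
		if (ret)
			goto out_detach;

		iommu->nesting_info = kzalloc(tmp.size, GFP_KERNEL);
		if (!iommu->nesting_info) {
			ret = -ENOMEM;
			goto out_detach;
		}

		/* Second call: fill the buffer now that it is big enough. */
		iommu->nesting_info->size = tmp.size;
		ret = iommu_domain_get_attr(domain->domain,
					    DOMAIN_ATTR_NESTING,
					    iommu->nesting_info);
		if (ret)
			goto out_detach; /* release helper frees and NULLs */
	}
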
>>>  	/* Get aperture info */
>>>  	iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY, &geo);
>>>
>>> @@ -2138,6 +2186,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>>>  	return 0;
>>>
>>>  out_detach:
>>> +	vfio_iommu_release_nesting_info(iommu);
>>>  	vfio_iommu_detach_group(domain, group);
>>>  out_domain:
>>>  	iommu_domain_free(domain->domain);
>>> @@ -2338,6 +2387,8 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
>>>  					vfio_iommu_unmap_unpin_all(iommu);
>>>  				else
>>>  					vfio_iommu_unmap_unpin_reaccount(iommu);
>>> +
>>> +				vfio_iommu_release_nesting_info(iommu);
>>>  			}
>>>  			iommu_domain_free(domain->domain);
>>>  			list_del(&domain->next);
>>> @@ -2546,6 +2597,30 @@ static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu,
>>>  	return vfio_info_add_capability(caps, &cap_mig.header, sizeof(cap_mig));
>>>  }
>>>
>>> +static int vfio_iommu_info_add_nesting_cap(struct vfio_iommu *iommu,
>>> +					   struct vfio_info_cap *caps)
>>> +{
>>> +	struct vfio_info_cap_header *header;
>>> +	struct vfio_iommu_type1_info_cap_nesting *nesting_cap;
>>> +	size_t size;
>>> +
>>> +	size = sizeof(*nesting_cap) + iommu->nesting_info->size;
>>> +
>>> +	header = vfio_info_cap_add(caps, size,
>>> +				   VFIO_IOMMU_TYPE1_INFO_CAP_NESTING, 1);
>>> +	if (IS_ERR(header))
>>> +		return PTR_ERR(header);
>>> +
>>> +	nesting_cap = container_of(header,
>>> +				   struct vfio_iommu_type1_info_cap_nesting,
>>> +				   header);
>>> +
>>> +	memcpy(&nesting_cap->info, iommu->nesting_info,
>>> +	       iommu->nesting_info->size);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>>  static int vfio_iommu_type1_get_info(struct vfio_iommu *iommu,
>>>  				     unsigned long arg)
>>>  {
>>> @@ -2586,6 +2661,12 @@ static int vfio_iommu_type1_get_info(struct vfio_iommu *iommu,
>>>  	if (ret)
>>>  		return ret;
>>>
>>> +	if (iommu->nesting_info) {
>>> +		ret = vfio_iommu_info_add_nesting_cap(iommu, &caps);
>>> +		if (ret)
>>> +			return ret;
>>> +	}
>>> +
>>>  	if (caps.size) {
>>>  		info.flags |= VFIO_IOMMU_INFO_CAPS;
>>>
>>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>>> index 9204705..3e3de9c 100644
>>> --- a/include/uapi/linux/vfio.h
>>> +++ b/include/uapi/linux/vfio.h
>>> @@ -1039,6 +1039,22 @@ struct vfio_iommu_type1_info_cap_migration {
>>>  	__u64	max_dirty_bitmap_size;		/* in bytes */
>>>  };
>>>
>>> +#define VFIO_IOMMU_TYPE1_INFO_CAP_NESTING  3
>>
>> You may improve the documentation by taking examples from the above caps.
> 
> yes, it is. I somehow broke the style. how about below?
> 
> 
> 
> /*
>  * The nesting capability allows to report the related capability
>  * and info for nesting iommu type.
>  *
>  * The structures below define version 1 of this capability.
>  *
>  * User space should check this cap for setup nesting iommu type.
before setting up stage 1 information? The wording above sounds a bit
confusing to me as it can be interpreted as before choosing
VFIO_TYPE1_NESTING_IOMMU.

You also need to document that the capability is only returned after a
group is attached - which looks strange, by the way.

Thanks

Eric
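
Something along these lines would cover both points (a wording sketch, not final text):

	/*
	 * The nesting capability reports the nesting-related info for a
	 * container opened with the VFIO_TYPE1_NESTING_IOMMU type. User
	 * space should check it before setting up the first-level/stage
	 * page table information.
	 *
	 * Note: the capability is only present after at least one group
	 * has been attached, since the info is queried from the IOMMU
	 * backing the attached device.
	 */
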
>  *
>  * @info:	the nesting info provided by IOMMU driver. Today
>  *		it is expected to be a struct iommu_nesting_info
>  *		data.
> #define VFIO_IOMMU_TYPE1_INFO_CAP_NESTING  3
> 
> struct vfio_iommu_type1_info_cap_nesting {
> 	...
> };
> 
>>> +
>>> +/*
>>> + * Reporting nesting info to user space.
>>> + *
>>> + * @info:	the nesting info provided by IOMMU driver. Today
>>> + *		it is expected to be a struct iommu_nesting_info
>>> + *		data.
>> Is it expected to change?
> 
> honestly, I'm not quite sure about it. I did consider embedding
> struct iommu_nesting_info here instead of using info[], but I
> hesitated as info[] may leave more flexibility for this struct.
> what's your opinion? perhaps it's fine to embed struct
> iommu_nesting_info here as long as VFIO sets up nesting based on
> the IOMMU UAPI.
> 
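
For comparison, the two layouts being weighed would look roughly as below; the embedded variant is hypothetical and assumes struct iommu_nesting_info is stable uAPI:

	/* As posted: opaque payload, currently carrying a
	 * struct iommu_nesting_info.
	 */
	struct vfio_iommu_type1_info_cap_nesting {
		struct	vfio_info_cap_header header;
		__u32	flags;		/* reserved for future use */
		__u32	padding;
		__u8	info[];
	};

	/* Alternative raised above: embed the IOMMU uAPI struct
	 * directly, trading flexibility for a self-describing layout.
	 */
	struct vfio_iommu_type1_info_cap_nesting {
		struct	vfio_info_cap_header header;
		__u32	flags;
		__u32	padding;
		struct	iommu_nesting_info info;
	};
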
>>> + */
>>> +struct vfio_iommu_type1_info_cap_nesting {
>>> +	struct	vfio_info_cap_header header;
>>> +	__u32	flags;
>> You may document flags.
> 
> sure. it's reserved for future.
> 
> Regards,
> Yi Liu
> 
>>> +	__u32	padding;
>>> +	__u8	info[];
>>> +};
>>> +
>>>  #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
>>>
>>>  /**
>>>
>> Thanks
>>
>> Eric
> 
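
For completeness, user space would consume this capability by walking the standard VFIO_IOMMU_GET_INFO capability chain. A minimal sketch, assuming the uapi additions from this patch; the helper name is illustrative and most error handling is omitted:

	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	/*
	 * Find VFIO_IOMMU_TYPE1_INFO_CAP_NESTING in the capability chain.
	 * Returns a pointer into *info_out, which the caller must keep
	 * alive (and eventually free).
	 */
	static struct vfio_iommu_type1_info_cap_nesting *
	get_nesting_cap(int container_fd, struct vfio_iommu_type1_info **info_out)
	{
		struct vfio_iommu_type1_info *info;
		__u32 off;

		info = calloc(1, sizeof(*info));
		info->argsz = sizeof(*info);
		/* First call: the kernel updates argsz to cover the caps. */
		ioctl(container_fd, VFIO_IOMMU_GET_INFO, info);

		info = realloc(info, info->argsz);	/* argsz preserved */
		ioctl(container_fd, VFIO_IOMMU_GET_INFO, info);
		*info_out = info;

		if (!(info->flags & VFIO_IOMMU_INFO_CAPS))
			return NULL;

		/* cap_offset/next are byte offsets from the start of info;
		 * 0 terminates the chain.
		 */
		for (off = info->cap_offset; off; ) {
			struct vfio_info_cap_header *hdr = (void *)info + off;

			if (hdr->id == VFIO_IOMMU_TYPE1_INFO_CAP_NESTING)
				return (struct vfio_iommu_type1_info_cap_nesting *)hdr;
			off = hdr->next;
		}
		return NULL;
	}
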


