From: Auger Eric <eric.auger@redhat.com>
To: "Liu, Yi L" <yi.l.liu@intel.com>,
Alex Williamson <alex.williamson@redhat.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
"jacob.jun.pan@linux.intel.com" <jacob.jun.pan@linux.intel.com>,
"joro@8bytes.org" <joro@8bytes.org>,
"Raj, Ashok" <ashok.raj@intel.com>,
"Tian, Jun J" <jun.j.tian@intel.com>,
"Sun, Yi Y" <yi.y.sun@intel.com>,
"jean-philippe.brucker@arm.com" <jean-philippe.brucker@arm.com>,
"peterx@redhat.com" <peterx@redhat.com>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: [RFC v2 1/3] vfio: VFIO_IOMMU_CACHE_INVALIDATE
Date: Wed, 13 Nov 2019 08:50:03 +0100 [thread overview]
Message-ID: <e3bc03ba-8537-dd1f-135a-520a695dc2b9@redhat.com> (raw)
In-Reply-To: <A2975661238FB949B60364EF0F2C25743A0EED2E@SHSMSX104.ccr.corp.intel.com>
Hi Yi,
On 11/6/19 2:31 AM, Liu, Yi L wrote:
>> From: Alex Williamson [mailto:alex.williamson@redhat.com]
>> Sent: Wednesday, November 6, 2019 6:42 AM
>> To: Liu, Yi L <yi.l.liu@intel.com>
>> Subject: Re: [RFC v2 1/3] vfio: VFIO_IOMMU_CACHE_INVALIDATE
>>
>> On Fri, 25 Oct 2019 11:20:40 +0000
>> "Liu, Yi L" <yi.l.liu@intel.com> wrote:
>>
>>> Hi Kevin,
>>>
>>>> From: Tian, Kevin
>>>> Sent: Friday, October 25, 2019 5:14 PM
>>>> To: Liu, Yi L <yi.l.liu@intel.com>; alex.williamson@redhat.com;
>>>> Subject: RE: [RFC v2 1/3] vfio: VFIO_IOMMU_CACHE_INVALIDATE
>>>>
>>>>> From: Liu, Yi L
>>>>> Sent: Thursday, October 24, 2019 8:26 PM
>>>>>
>>>>> From: Liu Yi L <yi.l.liu@linux.intel.com>
>>>>>
>>>>> When the guest "owns" the stage 1 translation structures, the
>>>>> host IOMMU driver has no knowledge of caching structure updates
>>>>> unless the guest invalidation requests are trapped and passed down to the host.
>>>>>
>>>>> This patch adds the VFIO_IOMMU_CACHE_INVALIDATE ioctl, aimed at
>>>>> propagating guest stage 1 IOMMU cache invalidations to the host.
>>>>>
>>>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>>>> Signed-off-by: Liu Yi L <yi.l.liu@linux.intel.com>
>>>>> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>>>>> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
>>>>> ---
>>>>>  drivers/vfio/vfio_iommu_type1.c | 55 +++++++++++++++++++++++++++++++++++++++++
>>>>>  include/uapi/linux/vfio.h       | 13 ++++++++++
>>>>>  2 files changed, 68 insertions(+)
>>>>>
>>>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>>>>> index 96fddc1d..cd8d3a5 100644
>>>>> --- a/drivers/vfio/vfio_iommu_type1.c
>>>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>>>> @@ -124,6 +124,34 @@ struct vfio_regions {
>>>>> #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) \
>>>>> (!list_empty(&iommu->domain_list))
>>>>>
>>>>> +struct domain_capsule {
>>>>> + struct iommu_domain *domain;
>>>>> + void *data;
>>>>> +};
>>>>> +
>>>>> +/* iommu->lock must be held */
>>>>> +static int
>>>>> +vfio_iommu_lookup_dev(struct vfio_iommu *iommu,
>>>>> + int (*fn)(struct device *dev, void *data),
>>>>> + void *data)
>>>>
>>>> 'lookup' usually means find a device and then return. But the real
>>>> purpose here is to loop all the devices within this container and
>>>> then do something. Does it make more sense to be vfio_iommu_for_each_dev?
>>
>> +1
>>
>>> yep, I can replace it.
>>>
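[For illustration, the suggested vfio_iommu_for_each_dev rename could behave as sketched below. This is a simplified, self-contained userspace model of the iterate-and-apply pattern, not the actual kernel code: the device/domain layout, the NULL-terminated device lists, and the function names are placeholders, and only the callback convention (stop on first non-zero return) mirrors the kernel idiom.]

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the kernel's device/domain objects. */
struct device { int id; };

struct domain_capsule {
    struct device **devices;  /* NULL-terminated device list (model only) */
    void *data;
};

/*
 * Apply fn to every device in every domain, stopping at the first
 * non-zero return -- the convention the rename discussion assumes.
 */
static int
vfio_iommu_for_each_dev(struct domain_capsule *domains, size_t ndomains,
                        int (*fn)(struct device *dev, void *data))
{
    size_t i;
    struct device **d;

    for (i = 0; i < ndomains; i++)
        for (d = domains[i].devices; *d; d++) {
            int ret = fn(*d, domains[i].data);
            if (ret)
                return ret;
        }
    return 0;
}
```

[The point of the name is exactly this shape: the helper visits all devices in the container rather than looking one up, which is why "for_each_dev" reads better than "lookup_dev".]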
>>>>
>>>>> +{
>>>>> + struct domain_capsule dc = {.data = data};
>>>>> + struct vfio_domain *d;
>>> [...]
>>>>> @@ -2315,6 +2352,24 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>>>>>
>>>>> return copy_to_user((void __user *)arg, &unmap, minsz) ?
>>>>> -EFAULT : 0;
>>>>> + } else if (cmd == VFIO_IOMMU_CACHE_INVALIDATE) {
>>>>> + struct vfio_iommu_type1_cache_invalidate ustruct;
>>>>
>>>> it's weird to name a variable 'ustruct'.
>>>
>>> Will fix it.
>>>
>>>>> + int ret;
>>>>> +
>>>>> + minsz = offsetofend(struct vfio_iommu_type1_cache_invalidate,
>>>>> + info);
>>>>> +
>>>>> + if (copy_from_user(&ustruct, (void __user *)arg, minsz))
>>>>> + return -EFAULT;
>>>>> +
>>>>> + if (ustruct.argsz < minsz || ustruct.flags)
>>>>> + return -EINVAL;
>>>>> +
>>>>> + mutex_lock(&iommu->lock);
>>>>> + ret = vfio_iommu_lookup_dev(iommu, vfio_cache_inv_fn,
>>>>> + &ustruct);
>>>>> + mutex_unlock(&iommu->lock);
>>>>> + return ret;
>>>>> }
>>>>>
>>>>> return -ENOTTY;
>>>>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>>>>> index 9e843a1..ccf60a2 100644
>>>>> --- a/include/uapi/linux/vfio.h
>>>>> +++ b/include/uapi/linux/vfio.h
>>>>> @@ -794,6 +794,19 @@ struct vfio_iommu_type1_dma_unmap {
>>>>> #define VFIO_IOMMU_ENABLE _IO(VFIO_TYPE, VFIO_BASE + 15)
>>>>> #define VFIO_IOMMU_DISABLE _IO(VFIO_TYPE, VFIO_BASE + 16)
>>>>>
>>>>> +/**
>>>>> + * VFIO_IOMMU_CACHE_INVALIDATE - _IOWR(VFIO_TYPE, VFIO_BASE + 24,
>>
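[The argsz/flags checks in the handler quoted above follow the standard VFIO argument-validation pattern. A minimal userspace re-statement of just those checks, purely as a sketch (the header struct and function name here are illustrative, not the uapi definitions):]

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Model of the header every vfio_iommu_type1_* ioctl argument starts with. */
struct vfio_arg_header {
    uint32_t argsz;  /* caller-declared size of the full struct */
    uint32_t flags;  /* no flags defined yet for this ioctl */
};

/*
 * Mirror of the VFIO_IOMMU_CACHE_INVALIDATE branch: the caller must
 * declare at least minsz bytes, and any set flag is rejected because
 * none are defined.
 */
static int check_cache_inv_args(const struct vfio_arg_header *hdr,
                                uint32_t minsz)
{
    if (hdr->argsz < minsz || hdr->flags)
        return -EINVAL;
    return 0;
}
```

[In the kernel path, copy_from_user of minsz bytes happens first and can additionally fail with -EFAULT; only the post-copy validation is modeled here.]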
>> What's going on with these ioctl numbers? AFAICT[1] we've used up through
>> VFIO_BASE + 21; this jumps to 24, the next patch skips to 27, and the last patch
>> fills in 28 & 29. Thanks,
>
> Hi Alex,
>
> I rebased my patches on Eric's nested stage translation series, which also
> introduces new IOCTLs. I should have handled this better; I'll sync with Eric
> to serialize the IOCTL numbers.
>
> [PATCH v6 00/22] SMMUv3 Nested Stage Setup by Eric Auger
> https://lkml.org/lkml/2019/3/17/124
Feel free to choose your IOCTL numbers without worrying about my series.
I will adapt to yours if my work gets unblocked at some point.
Thanks
Eric
>
> Thanks,
> Yi Liu
>
>> Alex
>>
>> [1] git grep -h VFIO_BASE | grep "VFIO_BASE +" | grep -e ^#define | \
>> awk '{print $NF}' | tr -d ')' | sort -u -n
>
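[For context on the numbering being discussed: VFIO ioctl numbers are sequential offsets from VFIO_BASE, encoded by the standard _IO/_IOWR macros. A quick standalone check, with the VFIO_TYPE and VFIO_BASE constants copied from include/uapi/linux/vfio.h (the two example ioctls shown are existing ones from that header, not the new ones proposed here):]

```c
#include <assert.h>
#include <sys/ioctl.h>

/* Constants as defined in include/uapi/linux/vfio.h */
#define VFIO_TYPE (';')
#define VFIO_BASE 100

/* Two existing ioctls, defined as consecutive offsets from VFIO_BASE. */
#define VFIO_IOMMU_ENABLE  _IO(VFIO_TYPE, VFIO_BASE + 15)
#define VFIO_IOMMU_DISABLE _IO(VFIO_TYPE, VFIO_BASE + 16)
```

[Because the number field is just VFIO_BASE plus an offset, the grep pipeline in [1] recovers the highest offset already in use, which is why a patch series introducing several new ioctls needs its offsets serialized against other in-flight series.]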