From: Shenming Lu <lushenming@huawei.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>,
"Alex Williamson (alex.williamson@redhat.com)"
<alex.williamson@redhat.com>,
"Jean-Philippe Brucker" <jean-philippe@linaro.org>,
David Gibson <david@gibson.dropbear.id.au>,
Jason Wang <jasowang@redhat.com>,
"parav@mellanox.com" <parav@mellanox.com>,
"Enrico Weigelt, metux IT consult" <lkml@metux.net>,
Paolo Bonzini <pbonzini@redhat.com>,
Joerg Roedel <joro@8bytes.org>,
Eric Auger <eric.auger@redhat.com>,
Jonathan Corbet <corbet@lwn.net>,
"Raj, Ashok" <ashok.raj@intel.com>,
"Liu, Yi L" <yi.l.liu@intel.com>, "Wu, Hao" <hao.wu@intel.com>,
"Jiang, Dave" <dave.jiang@intel.com>,
Jacob Pan <jacob.jun.pan@linux.intel.com>,
"Kirti Wankhede" <kwankhede@nvidia.com>,
Robin Murphy <robin.murphy@arm.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>,
"David Woodhouse" <dwmw2@infradead.org>,
LKML <linux-kernel@vger.kernel.org>,
"Lu Baolu" <baolu.lu@linux.intel.com>,
"wanghaibin.wang@huawei.com" <wanghaibin.wang@huawei.com>
Subject: Re: [RFC v2] /dev/iommu uAPI proposal
Date: Thu, 15 Jul 2021 16:14:02 +0800 [thread overview]
Message-ID: <5962d403-80c4-0ac4-4f37-96b055a2b4d0@huawei.com> (raw)
In-Reply-To: <BN9PR11MB54336D6A8CAE31F951770A428C129@BN9PR11MB5433.namprd11.prod.outlook.com>
On 2021/7/15 14:49, Tian, Kevin wrote:
>> From: Shenming Lu <lushenming@huawei.com>
>> Sent: Thursday, July 15, 2021 2:29 PM
>>
>> On 2021/7/15 11:55, Tian, Kevin wrote:
>>>> From: Shenming Lu <lushenming@huawei.com>
>>>> Sent: Thursday, July 15, 2021 11:21 AM
>>>>
>>>> On 2021/7/9 15:48, Tian, Kevin wrote:
>>>>> 4.6. I/O page fault
>>>>> +++++++++++++++++++
>>>>>
>>>>> uAPI is TBD. This section only covers the high-level flow from the
>>>>> host IOMMU driver to the guest IOMMU driver and back. The flow assumes
>>>>> that I/O page faults are reported via IOMMU interrupts; some devices
>>>>> instead report faults in a device-specific way without going through
>>>>> the IOMMU, and that usage is not covered here:
>>>>>
>>>>> - Host IOMMU driver receives an I/O page fault with raw fault_data {rid,
>>>>> pasid, addr};
>>>>>
>>>>> - Host IOMMU driver identifies the faulting I/O page table according to
>>>>> {rid, pasid} and calls the corresponding fault handler with an opaque
>>>>> object (registered by the handler) and the raw fault_data {rid, pasid, addr};
>>>>>
>>>>> - IOASID fault handler identifies the corresponding ioasid and device
>>>>> cookie according to the opaque object, generates a user fault_data
>>>>> (ioasid, cookie, addr) in the fault region, and triggers eventfd to
>>>>> userspace;
>>>>>
>>>>
>>>> Hi, I have some doubts here:
>>>>
>>>> For mdev, it seems that the rid in the raw fault_data is the parent
>>>> device's; then in the vSVA scenario, how can we get to know the mdev
>>>> (cookie) from the rid and pasid?
>>>>
>>>> And from this point of view, would it be better to register the mdev
>>>> (iommu_register_device()) with the parent device info?
>>>>
>>>
>>> This is what is proposed in this RFC. A successful binding generates a new
>>> iommu_dev object for each vfio device. For mdev this object includes
>>> its parent device, the defPASID marking this mdev, and the cookie
>>> representing it in userspace. Later, this iommu_dev is recorded in
>>> the attaching_data when the mdev is attached to an IOASID:
>>>
>>> struct iommu_attach_data *__iommu_device_attach(
>>>         struct iommu_dev *dev, u32 ioasid, u32 pasid, int flags);
>>>
>>> Then when a fault is reported, the fault handler just needs to figure out
>>> the iommu_dev according to {rid, pasid} in the raw fault data.
>>>
>>
>> Yeah, we have the defPASID that marks the mdev and refers to the default
>> I/O address space, but what about the non-default I/O address spaces?
>> Is there a case where two different mdevs (on the same parent device)
>> are used by the same process in the guest, and thus have the same PASID
>> route in the physical IOMMU? It seems that we can't figure out the mdev
>> from the rid and pasid in this case...
>>
>> Did I misunderstand something?... :-)
>>
>
> No. You are right on this case. I don't think there is a way to
> differentiate one mdev from the other if they come from the same
> parent and are attached by the same guest process. In this case
> the fault could be reported on either mdev (e.g. the first
> matching one) to get it fixed in the guest.
>
OK. Thanks,
Shenming