From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>,
	"Alex Williamson \(alex.williamson@redhat.com\)"
	<alex.williamson@redhat.com>, "Raj, Ashok" <ashok.raj@intel.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Robin Murphy <robin.murphy@arm.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Kirti Wankhede <kwankhede@nvidia.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	"Jiang, Dave" <dave.jiang@intel.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Jason Wang <jasowang@redhat.com>
Subject: RE: [RFC] /dev/ioasid uAPI proposal
Date: Tue, 1 Jun 2021 08:38:00 +0000
Message-ID: <MWHPR11MB1886A17F36CF744857C531148C3E9@MWHPR11MB1886.namprd11.prod.outlook.com>
In-Reply-To: <20210528195839.GO1002214@nvidia.com>

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Saturday, May 29, 2021 3:59 AM
> 
> On Thu, May 27, 2021 at 07:58:12AM +0000, Tian, Kevin wrote:
> >
> > 5. Use Cases and Flows
> >
> > Here we assume VFIO will support a new model where every bound device
> > is explicitly listed under /dev/vfio, so a device fd can be acquired
> > without going through the legacy container/group interface. For
> > illustration purposes those devices are simply called dev[1...N]:
> >
> > 	device_fd[1...N] = open("/dev/vfio/devices/dev[1...N]", mode);
> >
> > As explained earlier, one IOASID fd is sufficient for all intended use cases:
> >
> > 	ioasid_fd = open("/dev/ioasid", mode);
> >
> > For simplicity the examples below are all written for the virtualization
> > story. They are representative and can easily be adapted to a
> > non-virtualization scenario.
> 
> For others, I don't think this is *strictly* necessary, we can
> probably still get to the device_fd using the group_fd and fit in
> /dev/ioasid. It does make the rest of this more readable though.

Jason, I want to confirm something here. Per earlier discussion we were
under the impression that you want VFIO to be a pure device driver, with
container/group kept only for legacy applications. From this comment, are
you suggesting that VFIO can still keep the container/group concepts, and
the user simply deprecates the vfio iommu uAPI (e.g. VFIO_SET_IOMMU) by
using /dev/ioasid (which has a simple policy that an IOASID will reject
commands if a partially-attached group exists)?
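
In other words, would the flow then look roughly like below? (just a
sketch of my understanding; it assumes the group fd could hand out device
fds without a container/VFIO_SET_IOMMU step, which is not the case today)

	/* group kept, but only as the path to acquire the device fd */
	group_fd = open("/dev/vfio/<group_number>", mode);
	device_fd = ioctl(group_fd, VFIO_GROUP_GET_DEVICE_FD, "DDDD:BB:DD.F");

	/* no VFIO_SET_IOMMU; /dev/ioasid takes over the IOMMU uAPI */
	ioasid_fd = open("/dev/ioasid", mode);
	ioctl(device_fd, VFIO_BIND_IOASID_FD, ioasid_fd);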

> 
> 
> > Three types of IOASIDs are considered:
> >
> > 	gpa_ioasid[1...N]: 	for GPA address space
> > 	giova_ioasid[1...N]:	for guest IOVA address space
> > 	gva_ioasid[1...N]:	for guest CPU VA address space
> >
> > At least one gpa_ioasid must always be created per guest, while the other
> > two are relevant only when a vIOMMU is involved.
> >
> > Examples here apply to both pdev and mdev unless explicitly marked
> > otherwise (e.g. in section 5.5). The VFIO device driver in the kernel
> > figures out the associated routing information in the attach operation.
> >
> > For illustration simplicity, IOASID_CHECK_EXTENSION and IOASID_GET_INFO
> > are skipped in these examples.
> >
> > 5.1. A simple example
> > ++++++++++++++++++
> >
> > Dev1 is assigned to the guest. One gpa_ioasid is created. The GPA address
> > space is managed through the DMA mapping protocol:
> >
> > 	/* Bind device to IOASID fd */
> > 	device_fd = open("/dev/vfio/devices/dev1", mode);
> > 	ioasid_fd = open("/dev/ioasid", mode);
> > 	ioctl(device_fd, VFIO_BIND_IOASID_FD, ioasid_fd);
> >
> > 	/* Attach device to IOASID */
> > 	gpa_ioasid = ioctl(ioasid_fd, IOASID_ALLOC);
> > 	at_data = { .ioasid = gpa_ioasid};
> > 	ioctl(device_fd, VFIO_ATTACH_IOASID, &at_data);
> >
> > 	/* Setup GPA mapping */
> > 	dma_map = {
> > 		.ioasid	= gpa_ioasid;
> > 		.iova	= 0;		// GPA
> > 		.vaddr	= 0x40000000;	// HVA
> > 		.size	= 1GB;
> > 	};
> > 	ioctl(ioasid_fd, IOASID_DMA_MAP, &dma_map);
> >
> > If the guest is assigned more devices than just dev1, the user follows the
> > above sequence to attach the other devices to the same gpa_ioasid, i.e.
> > sharing the GPA address space across all assigned devices.
> 
> eg
> 
>  	device2_fd = open("/dev/vfio/devices/dev1", mode);
>  	ioctl(device2_fd, VFIO_BIND_IOASID_FD, ioasid_fd);
>  	ioctl(device2_fd, VFIO_ATTACH_IOASID, &at_data);
> 
> Right?

Exactly, except a small typo ('dev1' -> 'dev2'). :)

> 
> >
> > 5.2. Multiple IOASIDs (no nesting)
> > ++++++++++++++++++++++++++++
> >
> > Dev1 and dev2 are assigned to the guest. vIOMMU is enabled. Initially
> > both devices are attached to gpa_ioasid. After boot the guest creates
> > a GIOVA address space (giova_ioasid) for dev2, leaving dev1 in
> > pass-through mode (gpa_ioasid).
> >
> > Suppose IOASID nesting is not supported in this case. Qemu needs to
> > generate shadow mappings in userspace for giova_ioasid (like how
> > VFIO works today).
> >
> > To avoid duplicated locked page accounting, it's recommended to pre-
> > register the virtual address range that will be used for DMA:
> >
> > 	device_fd1 = open("/dev/vfio/devices/dev1", mode);
> > 	device_fd2 = open("/dev/vfio/devices/dev2", mode);
> > 	ioasid_fd = open("/dev/ioasid", mode);
> > 	ioctl(device_fd1, VFIO_BIND_IOASID_FD, ioasid_fd);
> > 	ioctl(device_fd2, VFIO_BIND_IOASID_FD, ioasid_fd);
> >
> > 	/* pre-register the virtual address range for accounting */
> > 	mem_info = { .vaddr = 0x40000000; .size = 1GB };
> > 	ioctl(ioasid_fd, IOASID_REGISTER_MEMORY, &mem_info);
> >
> > 	/* Attach dev1 and dev2 to gpa_ioasid */
> > 	gpa_ioasid = ioctl(ioasid_fd, IOASID_ALLOC);
> > 	at_data = { .ioasid = gpa_ioasid};
> > 	ioctl(device_fd1, VFIO_ATTACH_IOASID, &at_data);
> > 	ioctl(device_fd2, VFIO_ATTACH_IOASID, &at_data);
> >
> > 	/* Setup GPA mapping */
> > 	dma_map = {
> > 		.ioasid	= gpa_ioasid;
> > 		.iova	= 0; 		// GPA
> > 		.vaddr	= 0x40000000;	// HVA
> > 		.size	= 1GB;
> > 	};
> > 	ioctl(ioasid_fd, IOASID_DMA_MAP, &dma_map);
> >
> > 	/* After boot, the guest enables a GIOVA space for dev2 */
> > 	giova_ioasid = ioctl(ioasid_fd, IOASID_ALLOC);
> >
> > 	/* First detach dev2 from previous address space */
> > 	at_data = { .ioasid = gpa_ioasid};
> > 	ioctl(device_fd2, VFIO_DETACH_IOASID, &at_data);
> >
> > 	/* Then attach dev2 to the new address space */
> > 	at_data = { .ioasid = giova_ioasid};
> > 	ioctl(device_fd2, VFIO_ATTACH_IOASID, &at_data);
> >
> > 	/* Setup a shadow DMA mapping according to vIOMMU
> > 	  * GIOVA (0x2000) -> GPA (0x1000) -> HVA (0x40001000)
> > 	  */
> 
> Here "shadow DMA" means relay the guest's vIOMMU page tables to the HW
> IOMMU?

'shadow' means the merged mapping: GIOVA(0x2000) -> HVA (0x40001000)
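
i.e. Qemu walks the guest's vIOMMU page table to get GIOVA->GPA, converts
GPA->HVA through its own memory map, and then issues IOASID_DMA_MAP with
the merged result, as in the snippet quoted below. A rough sketch (the two
helper names are made up for illustration):

	gpa = viommu_translate(giova);	// GIOVA 0x2000 -> GPA 0x1000
	hva = gpa_to_hva(gpa);		// GPA 0x1000 -> HVA 0x40001000
	dma_map = { .ioasid = giova_ioasid; .iova = giova; .vaddr = hva; .size = 4KB };
	ioctl(ioasid_fd, IOASID_DMA_MAP, &dma_map);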

> 
> > 	dma_map = {
> > 		.ioasid	= giova_ioasid;
> > 		.iova	= 0x2000; 	// GIOVA
> > 		.vaddr	= 0x40001000;	// HVA
> 
> eg HVA came from reading the guest's page tables and finding it wanted
> GPA 0x1000 mapped to IOVA 0x2000?

yes

> 
> 
> > 5.3. IOASID nesting (software)
> > +++++++++++++++++++++++++
> >
> > Same usage scenario as 5.2, but with software-based IOASID nesting
> > available. In this mode it is the kernel instead of the user that
> > creates the shadow mapping.
> >
> > The flow before the guest boots is the same as in 5.2, except for one
> > point. Because giova_ioasid is nested on gpa_ioasid, locked page
> > accounting is only conducted for gpa_ioasid, so it's not necessary to
> > pre-register the virtual memory.
> >
> > To save space we only list the steps after boot (i.e. both dev1/dev2
> > have been attached to gpa_ioasid before the guest boots):
> >
> > 	/* After boot */
> > 	/* Make GIOVA space nested on GPA space */
> > 	giova_ioasid = ioctl(ioasid_fd, IOASID_CREATE_NESTING,
> > 				gpa_ioasid);
> >
> > 	/* Attach dev2 to the new address space (child)
> > 	  * Note dev2 is still attached to gpa_ioasid (parent)
> > 	  */
> > 	at_data = { .ioasid = giova_ioasid};
> > 	ioctl(device_fd2, VFIO_ATTACH_IOASID, &at_data);
> >
> > 	/* Setup a GIOVA->GPA mapping for giova_ioasid, which will be
> > 	  * merged by the kernel with GPA->HVA mapping of gpa_ioasid
> > 	  * to form a shadow mapping.
> > 	  */
> > 	dma_map = {
> > 		.ioasid	= giova_ioasid;
> > 		.iova	= 0x2000;	// GIOVA
> > 		.vaddr	= 0x1000;	// GPA
> > 		.size	= 4KB;
> > 	};
> > 	ioctl(ioasid_fd, IOASID_DMA_MAP, &dma_map);
> 
> And in this version the kernel reaches into the parent IOASID's page
> tables to translate 0x1000 to 0x40001000 to a physical page? So we
> basically remove the qemu process address space entirely from this
> translation. It does seem convenient.

yes.
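
Roughly, for a software-nested child the kernel-side handling of
IOASID_DMA_MAP could look like below (pseudo code only; the helper names
are made up to illustrate the idea):

	/* the child's vaddr is a GPA; resolve it via the parent's GPA->HVA map */
	hva = parent_lookup(gpa_ioasid, map->vaddr);
	/* pin once; locked-page accounting stays with the parent */
	pfn = pin_pages(hva, map->size);
	/* install the merged GIOVA->PFN mapping into the child's I/O page table */
	io_pgtable_map(giova_ioasid, map->iova, pfn, map->size);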

> 
> > 5.4. IOASID nesting (hardware)
> > +++++++++++++++++++++++++
> >
> > Same usage scenario as 5.2, with hardware-based IOASID nesting
> > available. In this mode the pgtable binding protocol is used to
> > bind the guest IOVA page table with the IOMMU:
> >
> > 	/* After boot */
> > 	/* Make GIOVA space nested on GPA space */
> > 	giova_ioasid = ioctl(ioasid_fd, IOASID_CREATE_NESTING,
> > 				gpa_ioasid);
> >
> > 	/* Attach dev2 to the new address space (child)
> > 	  * Note dev2 is still attached to gpa_ioasid (parent)
> > 	  */
> > 	at_data = { .ioasid = giova_ioasid};
> > 	ioctl(device_fd2, VFIO_ATTACH_IOASID, &at_data);
> >
> > 	/* Bind guest I/O page table  */
> > 	bind_data = {
> > 		.ioasid	= giova_ioasid;
> > 		.addr	= giova_pgtable;
> > 		// and format information
> > 	};
> > 	ioctl(ioasid_fd, IOASID_BIND_PGTABLE, &bind_data);
> 
> I really think you need to use consistent language. Things that
> allocate a new IOASID should be called IOASID_ALLOC_IOASID. If multiple
> IOCTLs are needed then it is IOASID_ALLOC_IOASID_PGTABLE, etc.
> Alloc/create/bind is too confusing.
> 
> > 5.5. Guest SVA (vSVA)
> > ++++++++++++++++++
> >
> > After boot the guest further creates a GVA address space (gpasid1) on
> > dev1. Dev2 is not affected (still attached to giova_ioasid).
> >
> > As explained in section 4, the user should avoid exposing ENQCMD on both
> > pdev and mdev.
> >
> > The sequence applies to all device types (pdev or mdev), except for one
> > additional step to call KVM for an ENQCMD-capable mdev:
> >
> > 	/* After boot */
> > 	/* Make GVA space nested on GPA space */
> > 	gva_ioasid = ioctl(ioasid_fd, IOASID_CREATE_NESTING,
> > 				gpa_ioasid);
> >
> > 	/* Attach dev1 to the new address space and specify vPASID */
> > 	at_data = {
> > 		.ioasid		= gva_ioasid;
> > 		.flag 		= IOASID_ATTACH_USER_PASID;
> > 		.user_pasid	= gpasid1;
> > 	};
> > 	ioctl(device_fd1, VFIO_ATTACH_IOASID, &at_data);
> 
> Still a little unsure why the vPASID is here not on the gva_ioasid. Is
> there any scenario where we want different vpasid's for the same
> IOASID? I guess it is OK like this. Hum.

Yes, it's completely sane that the guest links an I/O page table to
different vPASIDs on dev1 and dev2. The IOMMU doesn't mandate that when
multiple devices share an I/O page table they must use the same PASID#.
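
e.g. the following would be a valid sequence where dev1 and dev2 share
gva_ioasid but with different vPASIDs (a sketch following this proposal;
gpasid2 is just an illustrative second guest PASID):

	at_data = {
		.ioasid		= gva_ioasid;
		.flag		= IOASID_ATTACH_USER_PASID;
		.user_pasid	= gpasid1;
	};
	ioctl(device_fd1, VFIO_ATTACH_IOASID, &at_data);

	at_data.user_pasid = gpasid2;	// a different vPASID chosen by the guest
	ioctl(device_fd2, VFIO_ATTACH_IOASID, &at_data);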

> 
> > 	/* if dev1 is ENQCMD-capable mdev, update CPU PASID
> > 	  * translation structure through KVM
> > 	  */
> > 	pa_data = {
> > 		.ioasid_fd	= ioasid_fd;
> > 		.ioasid		= gva_ioasid;
> > 		.guest_pasid	= gpasid1;
> > 	};
> > 	ioctl(kvm_fd, KVM_MAP_PASID, &pa_data);
> 
> Make sense
> 
> > 	/* Bind guest I/O page table  */
> > 	bind_data = {
> > 		.ioasid	= gva_ioasid;
> > 		.addr	= gva_pgtable1;
> > 		// and format information
> > 	};
> > 	ioctl(ioasid_fd, IOASID_BIND_PGTABLE, &bind_data);
> 
> Again I do wonder if this should just be part of alloc_ioasid. Is
> there any reason to split these things? The only advantage to the
> split is the device is known, but the device shouldn't impact
> anything..

I summarized this as open#4 in another mail for focused discussion.

> 
> > 5.6. I/O page fault
> > +++++++++++++++
> >
> > (The uAPI is TBD. Here is just the high-level flow from the host IOMMU
> > driver to the guest IOMMU driver and back.)
> >
> > -   Host IOMMU driver receives a page request with raw fault_data {rid,
> >     pasid, addr};
> >
> > -   Host IOMMU driver identifies the faulting I/O page table according to
> >     information registered by IOASID fault handler;
> >
> > -   IOASID fault handler is called with the raw fault_data (rid, pasid,
> >     addr), which is saved in ioasid_data->fault_data (used for the
> >     response);
> >
> > -   IOASID fault handler generates a user fault_data (ioasid, addr), links
> >     it to the shared ring buffer and triggers the eventfd to userspace;
> 
> Here the rid should be translated to a labeled device, returning the
> device label assigned at VFIO_BIND_IOASID_FD. Depending on how the device
> was bound, the label might match a rid or a (rid, pasid).

Yes, I acknowledged this input from you and Jean about page fault and 
bind_pasid_table. I summarized it as open#3 in another mail.
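
e.g. the user-visible fault record could then carry the device label
instead of the raw rid, something like below (purely illustrative; the
actual layout is part of the TBD uAPI):

	struct ioasid_fault_data {
		__u32	ioasid;
		__u32	dev_label;	// label assigned at VFIO_BIND_IOASID_FD
		__u32	pasid;		// if the label maps to a (rid, pasid) binding
		__u64	addr;
		__u32	flags;		// read/write, response required, etc.
	};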

thus the following is skipped...

Thanks
Kevin

> 
> > -   Upon receiving the event, Qemu needs to find the virtual routing
> >     information (v_rid + v_pasid) of the device attached to the faulting
> >     ioasid. If there are multiple devices, pick any one. This should be
> >     fine since the purpose is to fix the I/O page table in the guest;
> 
> The device label should fix this
> 
> > -   Qemu finds the pending fault event, converts virtual completion data
> >     into (ioasid, response_code), and then calls a /dev/ioasid ioctl to
> >     complete the pending fault;
> >
> > -   /dev/ioasid finds the pending fault data {rid, pasid, addr} saved in
> >     ioasid_data->fault_data, and then calls the IOMMU API to complete it
> >     with {rid, pasid, response_code};
> 
> So resuming a fault on an ioasid will resume all devices pending on
> the fault?
> 
> > 5.7. BIND_PASID_TABLE
> > ++++++++++++++++++++
> >
> > PASID table is put in the GPA space on some platforms, thus it must be
> > updated by the guest. It is treated as another user page table to be
> > bound with the IOMMU.
> >
> > As explained earlier, the user still needs to explicitly bind every user
> > I/O page table to the kernel so the same pgtable binding protocol (bind,
> > cache invalidate and fault handling) is unified across platforms.
> >
> > vIOMMUs may include a caching mode (or paravirtualized way) which, once
> > enabled, requires the guest to invalidate PASID cache for any change on
> > the PASID table. This allows Qemu to track the lifespan of guest I/O
> > page tables.
> >
> > If such a capability is missing, Qemu could enable write-protection on
> > the guest PASID table to achieve the same effect.
> >
> > 	/* After boot */
> > 	/* Make vPASID space nested on GPA space */
> > 	pasidtbl_ioasid = ioctl(ioasid_fd, IOASID_CREATE_NESTING,
> > 				gpa_ioasid);
> >
> > 	/* Attach dev1 to pasidtbl_ioasid */
> > 	at_data = { .ioasid = pasidtbl_ioasid};
> > 	ioctl(device_fd1, VFIO_ATTACH_IOASID, &at_data);
> >
> > 	/* Bind PASID table */
> > 	bind_data = {
> > 		.ioasid	= pasidtbl_ioasid;
> > 		.addr	= gpa_pasid_table;
> > 		// and format information
> > 	};
> > 	ioctl(ioasid_fd, IOASID_BIND_PASID_TABLE, &bind_data);
> >
> > 	/* vIOMMU detects a new GVA I/O space created */
> > 	gva_ioasid = ioctl(ioasid_fd, IOASID_CREATE_NESTING,
> > 				gpa_ioasid);
> >
> > 	/* Attach dev1 to the new address space, with gpasid1 */
> > 	at_data = {
> > 		.ioasid		= gva_ioasid;
> > 		.flag 		= IOASID_ATTACH_USER_PASID;
> > 		.user_pasid	= gpasid1;
> > 	};
> > 	ioctl(device_fd1, VFIO_ATTACH_IOASID, &at_data);
> >
> > 	/* Bind guest I/O page table. Because BIND_PASID_TABLE has been
> > 	  * used, the kernel will not update the PASID table. Instead, it just
> > 	  * tracks the bound I/O page table for handling invalidation and
> > 	  * I/O page faults.
> > 	  */
> > 	bind_data = {
> > 		.ioasid	= gva_ioasid;
> > 		.addr	= gva_pgtable1;
> > 		// and format information
> > 	};
> > 	ioctl(ioasid_fd, IOASID_BIND_PGTABLE, &bind_data);
> 
> I still don't quite get the benefit from doing this.
> 
> The idea to create an all-PASID IOASID seems to work better with less
> fuss on HW that is directly parsing the guest's PASID table.
> 
> Cache invalidate seems easy enough to support
> 
> Fault handling needs to return the (ioasid, device_label, pasid) when
> working with this kind of ioasid.
> 
> It is true that it does create an additional flow qemu has to
> implement, but it does directly mirror the HW.
> 
> Jason
