linux-kernel.vger.kernel.org archive mirror
From: Liu Yi L <yi.l.liu@linux.intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: yi.l.liu@intel.com, "Tian, Kevin" <kevin.tian@intel.com>,
	Jean-Philippe Brucker <jean-philippe@linaro.org>,
	"Alex Williamson \(alex.williamson@redhat.com\)" 
	<alex.williamson@redhat.com>, "Raj, Ashok" <ashok.raj@intel.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Robin Murphy <robin.murphy@arm.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Kirti Wankhede <kwankhede@nvidia.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	"Jiang, Dave" <dave.jiang@intel.com>,
	"Wu, Hao" <hao.wu@intel.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Jason Wang <jasowang@redhat.com>
Subject: Re: [RFC] /dev/ioasid uAPI proposal
Date: Mon, 31 May 2021 19:31:57 +0800	[thread overview]
Message-ID: <20210531193157.5494e6c6@yiliu-dev> (raw)
In-Reply-To: <20210528233649.GB3816344@nvidia.com>

On Fri, 28 May 2021 20:36:49 -0300, Jason Gunthorpe wrote:

> On Thu, May 27, 2021 at 07:58:12AM +0000, Tian, Kevin wrote:
> 
> > 2.1. /dev/ioasid uAPI
> > +++++++++++++++++
> > 
> > /*
> >   * Check whether a uAPI extension is supported.
> >   *
> >   * This is for FD-level capabilities, such as locked page pre-registration. 
> >   * IOASID-level capabilities are reported through IOASID_GET_INFO.
> >   *
> >   * Return: 0 if not supported, 1 if supported.
> >   */
> > #define IOASID_CHECK_EXTENSION	_IO(IOASID_TYPE, IOASID_BASE + 0)  
> 
>  
> > /*
> >   * Register user space memory where DMA is allowed.
> >   *
> >   * It pins user pages and does the locked memory accounting so
> >   * subsequent IOASID_MAP/UNMAP_DMA calls get faster.
> >   *
> >   * When this ioctl is not used, one user page might be accounted
> >   * multiple times when it is mapped by multiple IOASIDs which are
> >   * not nested together.
> >   *
> >   * Input parameters:
> >   *	- vaddr;
> >   *	- size;
> >   *
> >   * Return: 0 on success, -errno on failure.
> >   */
> > #define IOASID_REGISTER_MEMORY	_IO(IOASID_TYPE, IOASID_BASE + 1)
> > #define IOASID_UNREGISTER_MEMORY	_IO(IOASID_TYPE, IOASID_BASE + 2)  
> 
> So VA ranges are pinned and stored in a tree and later references to
> those VA ranges by any other IOASID use the pin cached in the tree?
> 
> It seems reasonable and is similar to the ioasid parent/child I
> suggested for PPC.
> 
> IMHO this should be merged with the all SW IOASID that is required for
> today's mdev drivers. If this can be done while keeping this uAPI then
> great, otherwise I don't think it is so bad to weakly nest a physical
> IOASID under a SW one just to optimize page pinning.
> 
> Either way this seems like a smart direction
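The accounting benefit of pre-registration described in the quoted proposal can be put as a toy formula (a hypothetical helper, not part of the proposal): without pre-registration, every non-nested IOASID that maps the same user pages pins and accounts them again; a pre-registered range is accounted once.

```c
#include <assert.h>

/*
 * Toy model of the locked-memory accounting above: n_ioasids is the
 * number of non-nested IOASIDs mapping the same range. With
 * pre-registration the pages are pinned and accounted only once.
 */
static long accounted_pages(long pages, int n_ioasids, int preregistered)
{
	return preregistered ? pages : pages * (long)n_ioasids;
}
```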
> 
> > /*
> >   * Allocate an IOASID. 
> >   *
> >   * IOASID is the FD-local software handle representing an I/O address 
> >   * space. Each IOASID is associated with a single I/O page table. User 
> >   * must call this ioctl to get an IOASID for every I/O address space that is
> >   * intended to be enabled in the IOMMU.
> >   *
> >   * A newly-created IOASID doesn't accept any command before it is 
> >   * attached to a device. Once attached, an empty I/O page table is 
> >   * bound with the IOMMU then the user could use either DMA mapping 
> >   * or pgtable binding commands to manage this I/O page table.  
> 
> Can the IOASID can be populated before being attached?

perhaps via a MAP/UNMAP operation on a gpa_ioasid?
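The attach-before-use rule in the quoted ALLOC description can be modeled with a toy sketch (all names hypothetical, not part of the proposal): a freshly allocated IOASID rejects commands until a device is attached, after which the empty I/O page table can be populated.

```c
#include <assert.h>
#include <errno.h>

/* Toy model of the proposed IOASID lifecycle. */
enum toy_state { TOY_ALLOCATED, TOY_ATTACHED };

struct toy_ioasid {
	enum toy_state state;
	int nr_mappings;
};

static int toy_map(struct toy_ioasid *a)
{
	if (a->state != TOY_ATTACHED)
		return -EINVAL;	/* no commands accepted before device attach */
	a->nr_mappings++;
	return 0;
}

static void toy_attach(struct toy_ioasid *a)
{
	a->state = TOY_ATTACHED;	/* an empty I/O page table now exists */
}
```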

> 
> >   * Device attachment is initiated through device driver uAPI (e.g. VFIO)
> >   *
> >   * Return: allocated ioasid on success, -errno on failure.
> >   */
> > #define IOASID_ALLOC	_IO(IOASID_TYPE, IOASID_BASE + 3)
> > #define IOASID_FREE	_IO(IOASID_TYPE, IOASID_BASE + 4)  
> 
> I assume alloc will include quite a big structure to satisfy the
> various vendor needs?
>
> 
> > /*
> >   * Get information about an I/O address space
> >   *
> >   * Supported capabilities:
> >   *	- VFIO type1 map/unmap;
> >   *	- pgtable/pasid_table binding
> >   *	- hardware nesting vs. software nesting;
> >   *	- ...
> >   *
> >   * Related attributes:
> >   * 	- supported page sizes, reserved IOVA ranges (DMA mapping);
> >   *	- vendor pgtable formats (pgtable binding);
> >   *	- number of child IOASIDs (nesting);
> >   *	- ...
> >   *
> >   * Above information is available only after one or more devices are
> >   * attached to the specified IOASID. Otherwise the IOASID is just a
> >   * number w/o any capability or attribute.  
> 
> This feels wrong to learn most of these attributes of the IOASID after
> attaching to a device.

but an IOASID is just a software handle before it is attached to a
specific device. e.g. before attaching to a device, we have no idea
about the supported page sizes in the underlying IOMMU, coherency, etc.

> The user should have some idea how it intends to use the IOASID when
> it creates it and the rest of the system should match the intention.
> 
> For instance if the user is creating a IOASID to cover the guest GPA
> with the intention of making children it should indicate this during
> alloc.
> 
> If the user is intending to point a child IOASID to a guest page table
> in a certain descriptor format then it should indicate it during
> alloc.

Actually, we have only two kinds of IOASIDs so far: one is used as a
parent and the other as a child. For the child, this proposal defines
IOASID_CREATE_NESTING. But yeah, I think it is doable to indicate the
type in ALLOC. For a child IOASID, though, one more step is required to
configure its parent IOASID, or such info could be included in the
ioctl input as well.
 
> device bind should fail if the device somehow isn't compatible with
> the scheme the user is trying to use.

yeah, I guess you mean failing the device attach when the IOASID is a
nesting IOASID but the device is behind an IOMMU without nesting
support, right?

> 
> > /*
> >   * Map/unmap process virtual addresses to I/O virtual addresses.
> >   *
> >   * Provide VFIO type1 equivalent semantics. Start with the same 
> >   * restriction e.g. the unmap size should match those used in the 
> >   * original mapping call. 
> >   *
> >   * If IOASID_REGISTER_MEMORY has been called, the mapped vaddr
> >   * must be already in the preregistered list.
> >   *
> >   * Input parameters:
> >   *	- u32 ioasid;
> >   *	- refer to vfio_iommu_type1_dma_{un}map
> >   *
> >   * Return: 0 on success, -errno on failure.
> >   */
> > #define IOASID_MAP_DMA	_IO(IOASID_TYPE, IOASID_BASE + 6)
> > #define IOASID_UNMAP_DMA	_IO(IOASID_TYPE, IOASID_BASE + 7)  
> 
> What about nested IOASIDs?

at first glance, it looks like we should prevent MAP/UNMAP usage on
nested IOASIDs. At least hardware nested translation only allows
MAP/UNMAP on the parent IOASID and page table bind on the nested
IOASID. But considering software nesting, it still seems useful to
allow MAP/UNMAP on nested IOASIDs. That is how I understand it; what is
your opinion? Do you think it's better to allow MAP/UNMAP only on
parent IOASIDs as a start?

> 
> > /*
> >   * Create a nesting IOASID (child) on an existing IOASID (parent)
> >   *
> >   * IOASIDs can be nested together, implying that the output address 
> >   * from one I/O page table (child) must be further translated by 
> >   * another I/O page table (parent).
> >   *
> >   * As the child adds essentially another reference to the I/O page table 
> >   * represented by the parent, any device attached to the child ioasid 
> >   * must be already attached to the parent.
> >   *
> >   * In concept there is no limit on the number of the nesting levels. 
> >   * However for the majority case one nesting level is sufficient. The
> >   * user should check whether an IOASID supports nesting through 
> >   * IOASID_GET_INFO. For example, if only one nesting level is allowed,
> >   * the nesting capability is reported only on the parent instead of the
> >   * child.
> >   *
> >   * User also needs to check (via IOASID_GET_INFO) whether the nesting 
> >   * is implemented in hardware or software. If software-based, DMA 
> >   * mapping protocol should be used on the child IOASID. Otherwise, 
> >   * the child should be operated with pgtable binding protocol.
> >   *
> >   * Input parameters:
> >   *	- u32 parent_ioasid;
> >   *
> >   * Return: child_ioasid on success, -errno on failure;
> >   */
> > #define IOASID_CREATE_NESTING	_IO(IOASID_TYPE, IOASID_BASE + 8)  
> 
> Do you think another ioctl is best? Should this just be another
> parameter to alloc?

either is fine. This ioctl follows one of your previous comments:

https://lore.kernel.org/linux-iommu/20210422121020.GT1370958@nvidia.com/
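The nesting semantics quoted above, where the child table's output address is further translated by the parent table, can be sketched as a toy composition (fixed offsets stand in for real multi-level page tables; everything here is hypothetical):

```c
#include <assert.h>

/* Child stage: IOVA -> GPA (toy: a fixed offset). */
static unsigned long child_xlate(unsigned long iova)
{
	return iova + 0x1000;
}

/* Parent stage: GPA -> HPA (toy: another fixed offset). */
static unsigned long parent_xlate(unsigned long gpa)
{
	return gpa + 0x100000;
}

/* Nested translation is the composition of the two stages. */
static unsigned long nested_xlate(unsigned long iova)
{
	return parent_xlate(child_xlate(iova));
}
```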

> 
> > /*
> >   * Bind a user-managed I/O page table with the IOMMU
> >   *
> >   * Because user page table is untrusted, IOASID nesting must be enabled 
> >   * for this ioasid so the kernel can enforce its DMA isolation policy 
> >   * through the parent ioasid.
> >   *
> >   * Pgtable binding protocol is different from DMA mapping. The latter 
> >   * has the I/O page table constructed by the kernel and updated 
> >   * according to user MAP/UNMAP commands. With pgtable binding the 
> >   * whole page table is created and updated by userspace, thus different 
> >   * set of commands are required (bind, iotlb invalidation, page fault, etc.).
> >   *
> >   * Because the page table is directly walked by the IOMMU, the user 
> >   * must use a format compatible with the underlying hardware. It can 
> >   * check the format information through IOASID_GET_INFO.
> >   *
> >   * The page table is bound to the IOMMU according to the routing 
> >   * information of each attached device under the specified IOASID. The
> >   * routing information (RID and optional PASID) is registered when a 
> >   * device is attached to this IOASID through VFIO uAPI. 
> >   *
> >   * Input parameters:
> >   *	- child_ioasid;
> >   *	- address of the user page table;
> >   *	- formats (vendor, address_width, etc.);
> >   * 
> >   * Return: 0 on success, -errno on failure.
> >   */
> > #define IOASID_BIND_PGTABLE		_IO(IOASID_TYPE, IOASID_BASE + 9)
> > #define IOASID_UNBIND_PGTABLE	_IO(IOASID_TYPE, IOASID_BASE + 10)  
> 
> Also feels backwards, why wouldn't we specify this, and the required
> page table format, during alloc time?

here the model is that user space gets the page table format from the
kernel and decides whether it can proceed. So what you are suggesting
is that user space should tell the kernel the page table format it has
at ALLOC time, and the kernel should fail the ALLOC if that format is
not compatible with the underlying IOMMU?
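Either way, the kernel ends up performing a compatibility check along the lines of the toy sketch below (field names are invented; a real format description would carry vendor, address width, and more):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical format descriptor, as might be reported by
 * IOASID_GET_INFO or supplied by user space. */
struct toy_fmt {
	int vendor;
	int address_width;
};

/* Reject a user page table the underlying IOMMU cannot walk. */
static int toy_check_fmt(const struct toy_fmt *hw, const struct toy_fmt *user)
{
	if (user->vendor != hw->vendor)
		return -EINVAL;
	if (user->address_width > hw->address_width)
		return -EINVAL;
	return 0;
}
```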

> 
> > /*
> >   * Bind a user-managed PASID table to the IOMMU
> >   *
> >   * This is required for platforms which place PASID table in the GPA space.
> >   * In this case the specified IOASID represents the per-RID PASID space.
> >   *
> >   * Alternatively this may be replaced by IOASID_BIND_PGTABLE plus a
> >   * special flag to indicate the difference from normal I/O address spaces.
> >   *
> >   * The format info of the PASID table is reported in IOASID_GET_INFO.
> >   *
> >   * As explained in the design section, user-managed I/O page tables must
> >   * be explicitly bound to the kernel even on these platforms. It allows
> >   * the kernel to uniformly manage I/O address spaces cross all platforms.
> >   * Otherwise, the iotlb invalidation and page faulting uAPI must be hacked
> >   * to carry device routing information to indirectly mark the hidden I/O
> >   * address spaces.
> >   *
> >   * Input parameters:
> >   *	- child_ioasid;
> >   *	- address of PASID table;
> >   *	- formats (vendor, size, etc.);
> >   *
> >   * Return: 0 on success, -errno on failure.
> >   */
> > #define IOASID_BIND_PASID_TABLE	_IO(IOASID_TYPE, IOASID_BASE + 11)
> > #define IOASID_UNBIND_PASID_TABLE	_IO(IOASID_TYPE, IOASID_BASE + 12)  
> 
> Ditto
> 
> > 
> > /*
> >   * Invalidate IOTLB for a user-managed I/O page table
> >   *
> >   * Unlike what's defined in include/uapi/linux/iommu.h, this command 
> >   * doesn't allow the user to specify cache type and likely supports only
> >   * two granularities (all, or a specified range) in the I/O address space.
> >   *
> >   * Physical IOMMUs have three cache types (iotlb, dev_iotlb and pasid
> >   * cache). If the IOASID represents an I/O address space, the invalidation
> >   * always applies to the iotlb (and dev_iotlb if enabled). If the IOASID
> >   * represents a vPASID space, then this command applies to the PASID
> >   * cache.
> >   *
> >   * Similarly this command doesn't provide IOMMU-like granularity
> >   * info (domain-wide, pasid-wide, range-based), since it's all about the
> >   * I/O address space itself. The ioasid driver walks the attached
> >   * routing information to match the IOMMU semantics under the
> >   * hood. 
> >   *
> >   * Input parameters:
> >   *	- child_ioasid;
> >   *	- granularity
> >   * 
> >   * Return: 0 on success, -errno on failure
> >   */
> > #define IOASID_INVALIDATE_CACHE	_IO(IOASID_TYPE, IOASID_BASE + 13)  
> 
> This should have an IOVA range too?
> 
> > /*
> >   * Page fault report and response
> >   *
> >   * This is TBD. Can be added after other parts are cleared up. Likely it 
> >   * will be a ring buffer shared between user/kernel, an eventfd to notify 
> >   * the user and an ioctl to complete the fault.
> >   *
> >   * The fault data is per I/O address space, i.e.: IOASID + faulting_addr
> >   */  
> 
> Any reason not to just use read()?

a ring buffer may be mmap'ed to user space, so reading fault data from
the kernel would be faster. This is also how Eric's fault reporting
series works today:

https://lore.kernel.org/linux-iommu/20210411114659.15051-5-eric.auger@redhat.com/
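A rough sketch of such a shared ring (single producer/single consumer; all names are hypothetical and the real layout would be part of the uAPI):

```c
#include <assert.h>

/*
 * Minimal SPSC ring sketching how fault records could be shared between
 * the kernel (producer) and user space (consumer) over mmap. Size is a
 * power of two; head/tail are free-running counters, masked on access.
 */
#define RING_SIZE 8

struct fault_ring {
	unsigned int head;	/* next slot the producer writes */
	unsigned int tail;	/* next slot the consumer reads */
	unsigned long faulting_addr[RING_SIZE];
};

static int ring_push(struct fault_ring *r, unsigned long addr)
{
	if (r->head - r->tail == RING_SIZE)
		return -1;	/* full: fault must be reported another way */
	r->faulting_addr[r->head & (RING_SIZE - 1)] = addr;
	r->head++;
	return 0;
}

static int ring_pop(struct fault_ring *r, unsigned long *addr)
{
	if (r->head == r->tail)
		return -1;	/* empty */
	*addr = r->faulting_addr[r->tail & (RING_SIZE - 1)];
	r->tail++;
	return 0;
}
```

In a real design the kernel would also kick an eventfd when the ring goes from empty to non-empty, and an ioctl would complete the fault.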

> >
> > 2.2. /dev/vfio uAPI
> > ++++++++++++++++  
> 
> To be clear you mean the 'struct vfio_device' API, these are not
> IOCTLs on the container or group?
> 
> > /*
> >    * Bind a vfio_device to the specified IOASID fd
> >    *
> >    * Multiple vfio devices can be bound to a single ioasid_fd, but a single
> >    * vfio device should not be bound to multiple ioasid_fd's.
> >    *
> >    * Input parameters:
> >    *  - ioasid_fd;
> >    *
> >    * Return: 0 on success, -errno on failure.
> >    */
> > #define VFIO_BIND_IOASID_FD           _IO(VFIO_TYPE, VFIO_BASE + 22)
> > #define VFIO_UNBIND_IOASID_FD _IO(VFIO_TYPE, VFIO_BASE + 23)  
> 
> This is where it would make sense to have an output "device id" that
> allows /dev/ioasid to refer to this "device" by number in events and
> other related things.

perhaps this is the device info Jean-Philippe wants in the page fault
reporting path?

-- 
Regards,
Yi Liu
