From: "Tian, Kevin" <kevin.tian@intel.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: LKML <linux-kernel@vger.kernel.org>, Joerg Roedel <joro@8bytes.org>, Jason Gunthorpe <jgg@nvidia.com>, Lu Baolu <baolu.lu@linux.intel.com>, David Woodhouse <dwmw2@infradead.org>, "iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>, "kvm@vger.kernel.org" <kvm@vger.kernel.org>, "Alex Williamson (alex.williamson@redhat.com)" <alex.williamson@redhat.com>, Jason Wang <jasowang@redhat.com>, Eric Auger <eric.auger@redhat.com>, Jonathan Corbet <corbet@lwn.net>, "Raj, Ashok" <ashok.raj@intel.com>, "Liu, Yi L" <yi.l.liu@intel.com>, "Wu, Hao" <hao.wu@intel.com>, "Jiang, Dave" <dave.jiang@intel.com>, Jacob Pan <jacob.jun.pan@linux.intel.com>, Jean-Philippe Brucker <jean-philippe@linaro.org>, Kirti Wankhede <kwankhede@nvidia.com>, "Robin Murphy" <robin.murphy@arm.com>
Subject: RE: [RFC] /dev/ioasid uAPI proposal
Date: Thu, 3 Jun 2021 08:12:27 +0000
Message-ID: <MWHPR11MB188668D220E1BF7360F2A6BE8C3C9@MWHPR11MB1886.namprd11.prod.outlook.com>
In-Reply-To: <YLch6zbbYqV4PyVf@yekko>

> From: David Gibson <david@gibson.dropbear.id.au>
> Sent: Wednesday, June 2, 2021 2:15 PM
> [...]
> >
> > /*
> >  * Get information about an I/O address space
> >  *
> >  * Supported capabilities:
> >  *   - VFIO type1 map/unmap;
> >  *   - pgtable/pasid_table binding;
> >  *   - hardware nesting vs. software nesting;
> >  *   - ...
> >  *
> >  * Related attributes:
> >  *   - supported page sizes, reserved IOVA ranges (DMA mapping);
>
> Can I request we represent this in terms of permitted IOVA ranges,
> rather than reserved IOVA ranges. This works better with the "window"
> model I have in mind for unifying the restrictions of the POWER IOMMU
> with Type1 like mapping.

Can you elaborate on how permitted ranges work better here?
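One way to read David's request: GET_INFO would report the set of IOVA windows a mapping is allowed to land in, and each map request is validated against them rather than checked against reserved holes. A minimal sketch of that check, assuming a hypothetical window layout (the struct and function names below are illustrative, not part of the proposed uAPI):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* One permitted IOVA window, as GET_INFO might report it. */
struct iova_window {
	uint64_t start;	/* first valid IOVA in the window */
	uint64_t last;	/* last valid IOVA (inclusive) */
};

/* A map request is legal iff it fits entirely inside one window. */
static bool iova_request_ok(const struct iova_window *win, int nr_win,
			    uint64_t iova, uint64_t size)
{
	int i;

	/* reject empty and wrapping requests */
	if (size == 0 || iova + size - 1 < iova)
		return false;

	for (i = 0; i < nr_win; i++)
		if (iova >= win[i].start && iova + size - 1 <= win[i].last)
			return true;
	return false;
}
```

With this representation a POWER-style IOMMU simply reports its 32-bit and 64-bit DMA windows, while a Type1-style IOMMU reports one large window minus nothing, so both fit the same check.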
> > #define IOASID_MAP_DMA		_IO(IOASID_TYPE, IOASID_BASE + 6)
> > #define IOASID_UNMAP_DMA	_IO(IOASID_TYPE, IOASID_BASE + 7)
>
> I'm assuming these would be expected to fail if a user managed
> pagetable has been bound?

Yes. Following Jason's suggestion, the format will be specified when
creating an IOASID, so incompatible commands will simply be rejected.

> > #define IOASID_BIND_PGTABLE	_IO(IOASID_TYPE, IOASID_BASE + 9)
> > #define IOASID_UNBIND_PGTABLE	_IO(IOASID_TYPE, IOASID_BASE + 10)
>
> I'm assuming that UNBIND would return the IOASID to a kernel-managed
> pagetable?

There will be no UNBIND call in the next version. Unbind will be
handled automatically when destroying the IOASID.

> For debugging and certain hypervisor edge cases it might be useful to
> have a call to allow userspace to look up a specific IOVA in a guest
> managed pgtable.

Since all the mapping metadata comes from userspace, why would one rely
on the kernel to provide such a service? Or are you simply asking for a
debugfs node to dump the I/O page table for a given IOASID?

> > /*
> >  * Bind a user-managed PASID table to the IOMMU
> >  *
> >  * This is required for platforms which place the PASID table in the GPA space.
> >  * In this case the specified IOASID represents the per-RID PASID space.
> >  *
> >  * Alternatively this may be replaced by IOASID_BIND_PGTABLE plus a
> >  * special flag to indicate the difference from normal I/O address spaces.
> >  *
> >  * The format info of the PASID table is reported in IOASID_GET_INFO.
> >  *
> >  * As explained in the design section, user-managed I/O page tables must
> >  * be explicitly bound to the kernel even on these platforms. It allows
> >  * the kernel to uniformly manage I/O address spaces across all platforms.
> >  * Otherwise, the iotlb invalidation and page faulting uAPI must be hacked
> >  * to carry device routing information to indirectly mark the hidden I/O
> >  * address spaces.
> >  *
> >  * Input parameters:
> >  *   - child_ioasid;
>
> Wouldn't this be the parent ioasid, rather than one of the potentially
> many child ioasids?

There is just one child IOASID (per device) for this PASID table. The
parent ioasid in this case carries the GPA mapping.

> > /*
> >  * Invalidate IOTLB for a user-managed I/O page table
> >  *
> >  * Unlike what's defined in include/uapi/linux/iommu.h, this command
> >  * doesn't allow the user to specify a cache type and likely supports only
> >  * two granularities (all, or a specified range) in the I/O address space.
> >  *
> >  * Physical IOMMUs have three cache types (iotlb, dev_iotlb and pasid
> >  * cache). If the IOASID represents an I/O address space, the invalidation
> >  * always applies to the iotlb (and dev_iotlb if enabled). If the IOASID
> >  * represents a vPASID space, then this command applies to the PASID
> >  * cache.
> >  *
> >  * Similarly this command doesn't provide IOMMU-like granularity
> >  * info (domain-wide, pasid-wide, range-based), since it's all about the
> >  * I/O address space itself. The ioasid driver walks the attached
> >  * routing information to match the IOMMU semantics under the
> >  * hood.
> >  *
> >  * Input parameters:
> >  *   - child_ioasid;
>
> And couldn't this be any ioasid, not just a child one, depending on
> whether you want PASID scope or RID scope invalidation?

Yes, any ioasid could accept an invalidation command. This was based on
the old assumption that bind+invalidate only applies to a child, which
will be fixed in the next version.

> > /*
> >  * Attach a vfio device to the specified IOASID
> >  *
> >  * Multiple vfio devices can be attached to the same IOASID, and vice
> >  * versa.
> >  *
> >  * User may optionally provide a "virtual PASID" to mark an I/O page
> >  * table on this vfio device. Whether the virtual PASID is physically used
> >  * or converted to another kernel-allocated PASID is a policy in the vfio
> >  * device driver.
> >  *
> >  * There is no need to specify ioasid_fd in this call due to the assumption
> >  * of a 1:1 connection between the vfio device and the bound fd.
> >  *
> >  * Input parameters:
> >  *   - ioasid;
> >  *   - flag;
> >  *   - user_pasid (if specified);
>
> Wouldn't the PASID be communicated by whether you give a parent or
> child ioasid, rather than needing an extra value?

No. The ioasid is just a software handle.

> > struct ioasid_data {
> > 	// link to ioasid_ctx->ioasid_list
> > 	struct list_head	next;
> >
> > 	// the IOASID number
> > 	u32			ioasid;
> >
> > 	// the handle to convey iommu operations
> > 	// hold the pgd (TBD until discussing iommu api)
> > 	struct iommu_domain	*domain;
> >
> > 	// map metadata (vfio type1 semantics)
> > 	struct rb_node		dma_list;
>
> Why do you need this? Can't you just store the kernel managed
> mappings in the host IO pgtable?

A simple reason is that to implement vfio type1 semantics we need to
make sure an unmap uses the same size as the corresponding map. The
metadata allows verifying this assumption.

Another reason is that when doing software nesting, the page table
linked into the iommu domain is the shadow one. It's better to keep the
original metadata so it can be used to update the shadow when another
level (parent or child) changes its mapping.

> > 5.3. IOASID nesting (software)
> > +++++++++++++++++++++++++
> >
> > Same usage scenario as 5.2, with software-based IOASID nesting
> > available. In this mode it is the kernel instead of the user that
> > creates the shadow mapping.
>
> In this case, I feel like the preregistration is redundant with the
> GPA level mapping. As long as the gIOVA mappings (which might be
> frequent) can piggyback on the accounting done for the GPA mapping we
> accomplish what we need from preregistration.

Yes, preregistration makes more sense when multiple IOASIDs are used
but are not nested together.

> > 5.5. Guest SVA (vSVA)
> > ++++++++++++++++++
> >
> > After boot the guest further creates a GVA address space (gpasid1) on
> > dev1.
> > Dev2 is not affected (still attached to giova_ioasid).
> >
> > As explained in section 4, the user should avoid exposing ENQCMD on
> > both pdev and mdev.
> >
> > The sequence applies to all device types (be it pdev or mdev), except
> > for one additional step to call KVM for an ENQCMD-capable mdev:
> >
> > 	/* After boot */
> > 	/* Make GVA space nested on GPA space */
> > 	gva_ioasid = ioctl(ioasid_fd, IOASID_CREATE_NESTING,
> > 				gpa_ioasid);
>
> I'm not clear what gva_ioasid is representing. Is it representing a
> single vPASID's address space, or a whole bunch of vPASIDs' address
> spaces?

A single vPASID's address space.

Thanks
Kevin
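The parent/child relationship behind IOASID_CREATE_NESTING can be pictured as a two-stage translation: the child gva_ioasid translates GVA to GPA, and its parent gpa_ioasid translates GPA to HPA. A deliberately toy model (not kernel code; real page tables are replaced by a flat offset per stage) of how a nested lookup composes the stages:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for an IOASID: a real one would hold a page table. */
struct toy_ioasid {
	struct toy_ioasid *parent;	/* NULL for a root (GPA) IOASID */
	uint64_t offset;		/* stand-in for the stage's translation */
};

/* Compose every stage from the child up to the root, the way a nested
 * walk composes GVA->GPA with GPA->HPA.  Hardware nesting does this in
 * the IOMMU; software nesting emulates it with a shadow table. */
static uint64_t toy_translate(const struct toy_ioasid *ioasid, uint64_t addr)
{
	while (ioasid) {
		addr += ioasid->offset;
		ioasid = ioasid->parent;
	}
	return addr;
}
```

In this picture, invalidating the child only affects the GVA stage, while a change in the parent's GPA mapping changes the result of every child lookup, which is exactly why software nesting must re-shadow children when the parent changes.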