From: Jason Gunthorpe <jgg@nvidia.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: Alex Williamson <alex.williamson@redhat.com>, Lu Baolu <baolu.lu@linux.intel.com>, Chaitanya Kulkarni <chaitanyak@nvidia.com>, Cornelia Huck <cohuck@redhat.com>, Daniel Jordan <daniel.m.jordan@oracle.com>, Eric Auger <eric.auger@redhat.com>, iommu@lists.linux-foundation.org, Jason Wang <jasowang@redhat.com>, Jean-Philippe Brucker <jean-philippe@linaro.org>, Joao Martins <joao.m.martins@oracle.com>, Kevin Tian <kevin.tian@intel.com>, kvm@vger.kernel.org, Matthew Rosato <mjrosato@linux.ibm.com>, "Michael S. Tsirkin" <mst@redhat.com>, Nicolin Chen <nicolinc@nvidia.com>, Niklas Schnelle <schnelle@linux.ibm.com>, Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>, Yi Liu <yi.l.liu@intel.com>, Keqian Zhu <zhukeqian1@huawei.com>
Subject: Re: [PATCH RFC 11/12] iommufd: vfio container FD ioctl compatibility
Date: Mon, 9 May 2022 11:00:41 -0300
Message-ID: <20220509140041.GK49344@nvidia.com>
In-Reply-To: <YniuUMCBjy0BaJC6@yekko>

On Mon, May 09, 2022 at 04:01:52PM +1000, David Gibson wrote:

> > The default iommu_domain that the iommu driver creates will be used
> > here, it is up to the iommu driver to choose something reasonable for
> > use by applications like DPDK. ie PPC should probably pick its biggest
> > x86-like aperture.
>
> So, using the big aperture means a very high base IOVA
> (1<<59)... which means that it won't work at all if you want to attach
> any devices that aren't capable of 64-bit DMA.

I'd expect to include the 32 bit window too..

> Using the maximum possible window size would mean we either
> potentially waste a lot of kernel memory on pagetables, or we use
> unnecessarily large number of levels to the pagetable.

All drivers have this issue to one degree or another. We seem to be
ignoring it - in any case this is a micro optimization, not a
functional need?
> More generally, the problem with the interface advertising limitations
> and it being up to userspace to work out if those are ok or not is
> that it's fragile. It's pretty plausible that some future IOMMU model
> will have some new kind of limitation that can't be expressed in the
> query structure we invented now.

The basic API is very simple - the driver needs to provide ranges of
IOVA and map/unmap - I don't think we have a future problem here we
need to try and guess and solve today.

Even PPC fits this just fine; the open question for DPDK is more
around optimization, not functionality.

> But if userspace requests the capabilities it wants, and the kernel
> acks or nacks that, we can support the new host IOMMU with existing
> software just fine.

No, this just makes it fragile in the other direction, because now
userspace has to know what platform specific things to ask for *or it
doesn't work at all*. This is not an improvement for the DPDK cases.

Kernel decides, using all the kernel knowledge it has, and tells the
application what it can do - this is the basic simplified interface.

> > The iommu-driver-specific struct is the "advanced" interface and
> > allows a user-space IOMMU driver to tightly control the HW with full
> > HW specific knowledge. This is where all the weird stuff that is not
> > general should go.
>
> Right, but forcing anything more complicated than "give me some IOVA
> region" to go through the advanced interface means that qemu (or any
> hypervisor where the guest platform need not identically match the
> host) has to have n^2 complexity to match each guest IOMMU model to
> each host IOMMU model.

I wouldn't say n^2, but yes, qemu needs to have a userspace driver for
the platform IOMMU, and yes it needs this to reach optimal behavior.
We already know this is a hard requirement for using nesting as
acceleration; I don't see why apertures are so different.

> Errr.. how do you figure? On ppc the ranges and pagesizes are
> definitely negotiable. I'm not really familiar with other models, but
> anything which allows *any* variations in the pagetable structure will
> effectively have at least some negotiable properties.

As above, if you ask for the wrong thing then you don't get anything.
If DPDK asks for something that works on ARM, like 0 -> 4G, then PPC
and x86 will always fail. How does it improve anything to require
applications to carefully ask for exactly the right platform specific
ranges? It isn't like there is some hard coded value we can put into
DPDK that will work on every platform.

So the kernel must pick for DPDK, IMHO. I don't see any feasible
alternative.

> Which is why I'm suggesting that the base address be an optional
> request. DPDK *will* care about the size of the range, so it just
> requests that and gets told a base address.

We've talked about a size of IOVA address space before, strictly as a
hint to possibly optimize page table layout or something, and I'm fine
with that idea. But we have no driver implementation today, so I'm not
sure what we can really do with this right now..

Kevin, could Intel consume a hint on IOVA space and optimize the
number of IO page table levels?

> > and IMHO, qemu
> > is fine to have a PPC specific userspace driver to tweak this PPC
> > unique thing if the default windows are not acceptable.
>
> Not really, because it's the ppc *target* (guest) side which requires
> the specific properties, but selecting the "advanced" interface
> requires special knowledge on the *host* side.

The ppc specific driver would be on the generic side of qemu in its
viommu support framework. There is lots of host driver optimization
possible here with knowledge of the underlying host iommu HW. It
should not be connected to the qemu target.

It is not so different from today, where qemu has to know about ppc's
special vfio interface generically even to emulate x86.

It is part of the generic qemu iommu interface layer.
For nesting, qemu would copy the guest page table format to the host
page table format in userspace and trigger invalidation - no pinning,
no kernel map/unmap calls. It can only be done with detailed knowledge
of the host iommu, since the host iommu IO page table format is
exposed directly to userspace.

> > IMHO it is no different from imagining an Intel specific userspace
> > driver that is using userspace IO pagetables to optimize
> > cross-platform qemu vIOMMU emulation.
>
> I'm not quite sure what you have in mind here. How is it both Intel
> specific and cross-platform?

> Note however, that having multiple apertures isn't really ppc
> specific. Anything with an IO hole effectively has separate apertures
> above and below the hole. They're much closer together in address
> than POWER's two apertures, but I don't see that makes any
> fundamental difference to the handling of them.

In the iommu core the IO holes and such are handled through the
group's reserved IOVA list - there isn't actually a limit in the
iommu_domain, it has a flat pagetable format - and in cases like
PASID/SVA the group reserved list doesn't apply at all.

> Another approach would be to give the required apertures / pagesizes
> in the initial creation of the domain/IOAS. In that case they would
> be static for the IOAS, as well as the underlying iommu_domains: any
> ATTACH which would be incompatible would fail.

This is the device-specific iommu_domain creation path. The domain can
have information defining its aperture.

> That makes life hard for the DPDK case, though. Obviously we can
> still make the base address optional, but for it to be static the
> kernel would have to pick it immediately, before we know what devices
> will be attached, which will be a problem on any system where there
> are multiple IOMMUs with different constraints.
Which is why the current scheme is fully automatic, and we rely on the
iommu driver to automatically select something sane for DPDK/etc
today.

> > In general I have no issue with limiting the IOVA allocator in the
> > kernel, I just don't have a use case of an application that could
> > use the IOVA allocator (like DPDK) and also needs a limitation..
>
> Well, I imagine DPDK has at least the minimal limitation that it
> needs the aperture to be a certain minimum size (I'm guessing at
> least the size of its pinned hugepage working memory region). That's
> a limitation that's unlikely to fail on modern hardware, but it's
> there.

Yes, DPDK does assume there is some fairly large available aperture;
that should be the driver default behavior, IMHO.

> > That breaks what I'm
> > trying to do to make DPDK/etc portable and dead simple.
>
> It doesn't break portability at all. As for simplicity, yes it adds
> an extra required step, but the payoff is that it's now impossible to
> subtly screw up by failing to recheck your apertures after an ATTACH.
> That is, it's taking a step which was implicitly required and
> replacing it with one that's explicitly required.

Again, as above, it breaks portability because apps have no hope of
knowing what window range to ask for to succeed. It cannot just be a
hard coded range.

Jason