From: Logan Gunthorpe <logang@deltatee.com>
To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: "Stephen Bates" <sbates@raithlin.com>,
	"Christoph Hellwig" <hch@lst.de>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Christian König" <christian.koenig@amd.com>,
	"John Hubbard" <jhubbard@nvidia.com>,
	"Don Dutile" <ddutile@redhat.com>,
	"Matthew Wilcox" <willy@infradead.org>,
	"Daniel Vetter" <daniel.vetter@ffwll.ch>,
	"Minturn Dave B" <dave.b.minturn@intel.com>,
	"Jason Ekstrand" <jason@jlekstrand.net>,
	"Dave Hansen" <dave.hansen@linux.intel.com>,
	"Xiong Jianxin" <jianxin.xiong@intel.com>,
	"Bjorn Helgaas" <helgaas@kernel.org>,
	"Ira Weiny" <ira.weiny@intel.com>,
	"Robin Murphy" <robin.murphy@arm.com>,
	"Martin Oliveira" <martin.oliveira@eideticom.com>,
	"Chaitanya Kulkarni" <ckulkarnilinux@gmail.com>,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	"Logan Gunthorpe" <logang@deltatee.com>,
	"Jason Gunthorpe" <jgg@nvidia.com>
Subject: [PATCH v7 08/21] iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg
Date: Wed, 15 Jun 2022 10:12:20 -0600
Message-ID: <20220615161233.17527-9-logang@deltatee.com>
In-Reply-To: <20220615161233.17527-1-logang@deltatee.com>

When a PCI P2PDMA page is seen, set the IOVA length of the segment
to zero so that it is not mapped into the IOVA. Then, in
__finalise_sg(), apply the appropriate bus address to the segment.
The IOVA is not created if the scatterlist only consists of P2PDMA
pages.

A P2PDMA page may have three possible outcomes when being mapped:

  1) If the data path between the two devices doesn't go through the
     root port, it is mapped with a PCI bus address.
  2) If the data path goes through the host bridge, it is mapped
     normally with an IOMMU IOVA.
  3) The two devices cannot communicate at all, so the mapping
     operation fails (and returns -EREMOTEIO).

Similar to dma-direct, bus address segments are indicated with the
sg_dma_mark_bus_address() flag. On unmap, P2PDMA segments are skipped
over when determining the start and end IOVA addresses.

With this change, the flags variable in the dma_map_ops is set to
DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for P2PDMA pages.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
---
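A rough sketch of the consumer side, for review purposes only and not
part of the patch: dma_pci_p2pdma_supported() and
DMA_F_PCI_P2PDMA_SUPPORTED come from the previous patch in this
series, and the -EREMOTEIO handling corresponds to outcome 3) above.
example_map_p2p_sgl() is a hypothetical helper name.

/*
 * Illustrative sketch only, not part of this patch: a driver first
 * checks that the device's dma_map_ops advertise P2PDMA support,
 * then maps as usual and treats -EREMOTEIO as "no usable data path
 * between the two devices".
 */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_p2p_sgl(struct device *dev, struct sg_table *sgt)
{
	int ret;

	/* Only hand P2PDMA pages to ops that set DMA_F_PCI_P2PDMA_SUPPORTED */
	if (!dma_pci_p2pdma_supported(dev))
		return -EOPNOTSUPP;

	ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	if (ret)
		/*
		 * -EREMOTEIO here is outcome 3) above: there is no
		 * usable data path between the two devices.
		 */
		return ret;

	return 0;
}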
 drivers/iommu/dma-iommu.c | 68 +++++++++++++++++++++++++++++++++++----
 1 file changed, 61 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index f90251572a5d..b01ca0c6a7ab 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -21,6 +21,7 @@
 #include <linux/iova.h>
 #include <linux/irq.h>
 #include <linux/list_sort.h>
+#include <linux/memremap.h>
 #include <linux/mm.h>
 #include <linux/mutex.h>
 #include <linux/pci.h>
@@ -1062,6 +1063,16 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
 		sg_dma_address(s) = DMA_MAPPING_ERROR;
 		sg_dma_len(s) = 0;
 
+		if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
+			if (i > 0)
+				cur = sg_next(cur);
+
+			pci_p2pdma_map_bus_segment(s, cur);
+			count++;
+			cur_len = 0;
+			continue;
+		}
+
 		/*
 		 * Now fill in the real DMA data. If...
 		 * - there is a valid output segment to append to
@@ -1158,6 +1169,8 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	struct iova_domain *iovad = &cookie->iovad;
 	struct scatterlist *s, *prev = NULL;
 	int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
+	struct dev_pagemap *pgmap = NULL;
+	enum pci_p2pdma_map_type map_type;
 	dma_addr_t iova;
 	size_t iova_len = 0;
 	unsigned long mask = dma_get_seg_boundary(dev);
@@ -1193,6 +1206,35 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 			s_length = iova_align(iovad, s_length + s_iova_off);
 		s->length = s_length;
 
+		if (is_pci_p2pdma_page(sg_page(s))) {
+			if (sg_page(s)->pgmap != pgmap) {
+				pgmap = sg_page(s)->pgmap;
+				map_type = pci_p2pdma_map_type(pgmap, dev);
+			}
+
+			switch (map_type) {
+			case PCI_P2PDMA_MAP_BUS_ADDR:
+				/*
+				 * A zero length will be ignored by
+				 * iommu_map_sg() and then can be detected
+				 * in __finalise_sg() to actually map the
+				 * bus address.
+				 */
+				s->length = 0;
+				continue;
+			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+				/*
+				 * Mapping through host bridge should be
+				 * mapped with regular IOVAs, thus we
+				 * do nothing here and continue below.
+				 */
+				break;
+			default:
+				ret = -EREMOTEIO;
+				goto out_restore_sg;
+			}
+		}
+
 		/*
 		 * Due to the alignment of our single IOVA allocation, we can
 		 * depend on these assumptions about the segment boundary mask:
@@ -1215,6 +1257,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 		prev = s;
 	}
 
+	if (!iova_len)
+		return __finalise_sg(dev, sg, nents, 0);
+
 	iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
 	if (!iova) {
 		ret = -ENOMEM;
@@ -1236,7 +1281,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 out_restore_sg:
 	__invalidate_sg(sg, nents);
 out:
-	if (ret != -ENOMEM)
+	if (ret != -ENOMEM && ret != -EREMOTEIO)
 		return -EINVAL;
 	return ret;
 }
@@ -1244,7 +1289,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, unsigned long attrs)
 {
-	dma_addr_t start, end;
+	dma_addr_t end, start = DMA_MAPPING_ERROR;
 	struct scatterlist *tmp;
 	int i;
 
@@ -1260,14 +1305,22 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 	 * The scatterlist segments are mapped into a single
 	 * contiguous IOVA allocation, so this is incredibly easy.
 	 */
-	start = sg_dma_address(sg);
-	for_each_sg(sg_next(sg), tmp, nents - 1, i) {
+	for_each_sg(sg, tmp, nents, i) {
+		if (sg_is_dma_bus_address(tmp)) {
+			sg_dma_unmark_bus_address(tmp);
+			continue;
+		}
 		if (sg_dma_len(tmp) == 0)
 			break;
-		sg = tmp;
+
+		if (start == DMA_MAPPING_ERROR)
+			start = sg_dma_address(tmp);
+
+		end = sg_dma_address(tmp) + sg_dma_len(tmp);
 	}
-	end = sg_dma_address(sg) + sg_dma_len(sg);
-	__iommu_dma_unmap(dev, start, end - start);
+
+	if (start != DMA_MAPPING_ERROR)
+		__iommu_dma_unmap(dev, start, end - start);
 }
 
 static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
@@ -1460,6 +1513,7 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
 }
 
 static const struct dma_map_ops iommu_dma_ops = {
+	.flags = DMA_F_PCI_P2PDMA_SUPPORTED,
 	.alloc = iommu_dma_alloc,
 	.free = iommu_dma_free,
 	.alloc_pages = dma_common_alloc_pages,
-- 
2.30.2
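One more sketch for review purposes, again not part of the patch: once
dma_map_sgtable() succeeds, the bus address segments filled in by
__finalise_sg() look like ordinary DMA segments to a consumer, and
sg_is_dma_bus_address(), which the unmap path above relies on, is only
needed when a caller must tell them apart. program_hw_segment() is a
hypothetical stand-in for a driver's descriptor setup.

/*
 * Illustrative sketch only, not part of this patch.
 */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical driver hook, stubbed out for illustration */
static void program_hw_segment(dma_addr_t addr, u32 len)
{
}

static void example_build_descriptors(struct sg_table *sgt)
{
	struct scatterlist *sg;
	int i;

	for_each_sgtable_dma_sg(sgt, sg, i) {
		/*
		 * IOVA segments (outcome 2) and PCI bus address
		 * segments (outcome 1) are programmed identically;
		 * sg_is_dma_bus_address(sg) reports which is which,
		 * as the unmap loop in this patch relies on.
		 */
		program_hw_segment(sg_dma_address(sg), sg_dma_len(sg));
	}
}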