linux-pci.vger.kernel.org archive mirror
* [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices
@ 2021-11-17 21:53 Logan Gunthorpe
  2021-11-17 21:53 ` [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions Logan Gunthorpe
                   ` (22 more replies)
  0 siblings, 23 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

Hi,

This patchset continues my work to add userspace P2PDMA access using
O_DIRECT NVMe devices. This posting fixes a number of issues that were
raised against the last posting, which is here[1].

The patchset enables userspace P2PDMA by allowing userspace to mmap()
allocated chunks of the CMB. The resulting VMA can only be passed
to O_DIRECT IO on NVMe-backed files or block devices. A flag is added
to GUP() in Patch <>, and Patches <> through <> wire this flag up based
on whether the block queue indicates P2PDMA support. Patches <>
through <> enable the CMB to be mapped into userspace by mmap()ing
the nvme char device.
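
For reference, the intended userspace flow is roughly the sketch below.
The char device path, mmap() length/offset and target file here are
illustrative assumptions only; the exact ABI is defined by the nvme
patches at the end of the series:

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          /* Assumed paths and sizes, for illustration only */
          int cmb_fd = open("/dev/nvme0", O_RDWR);
          void *buf = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
                           MAP_SHARED, cmb_fd, 0);   /* CMB-backed VMA */
          int file_fd = open("/mnt/nvme1/data", O_RDWR | O_DIRECT);

          if (cmb_fd < 0 || buf == MAP_FAILED || file_fd < 0)
                  return 1;

          /* O_DIRECT read lands directly in the CMB via P2PDMA */
          if (pread(file_fd, buf, 2UL << 20, 0) < 0)
                  perror("pread");
          return 0;
  }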

This is relatively straightforward; however, the one significant
problem is that, presently, pci_p2pdma_map_sg() requires a homogeneous
SGL with all P2PDMA pages or all regular pages. Enhancing GUP to
enforce this rule would require a huge hack that I don't expect
would be all that palatable. So the first 13 patches add support for
P2PDMA pages to dma_map_sg[table]() in the dma-direct and dma-iommu
implementations. This covers systems without an IOMMU as well as those
with Intel and AMD IOMMUs. (Other IOMMU implementations, notably ARM
and PowerPC, remain unsupported for now; support would be added when
they convert to dma-iommu.)

dma_map_sgtable() is preferred when dealing with P2PDMA memory as it
will return -EREMOTEIO when the DMA device cannot map specific P2PDMA
pages based on the existing rules in calc_map_type_and_dist().
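
For a driver, handling that looks roughly like the sketch below (the
function and the sg_table setup are illustrative; only the error codes
and the dma_map_sgtable()/dma_unmap_sgtable() calls come from this
series):

  #include <linux/dma-mapping.h>

  static int example_map(struct device *dev, struct sg_table *sgt)
  {
          int rc = dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);

          if (rc == -EREMOTEIO)   /* device cannot reach the P2PDMA memory */
                  return rc;      /* permanent failure: do not retry */
          if (rc)
                  return rc;      /* -EINVAL/-ENOMEM/-EIO: usual handling */

          /* ... program the DMA engine from sgt->sgl ... */

          dma_unmap_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
          return 0;
  }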

The other issue is that dma_unmap_sg() needs a flag to determine whether
a given dma_addr_t was mapped regularly or as a PCI bus address. To allow
this, a third flag is added to the page_link field in struct
scatterlist. This effectively means support for P2PDMA will now depend
on CONFIG_64BIT.
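
With the flag in place, the dma-direct unmap path in this series ends
up looking roughly like this (see the dma-direct patch for the real
code):

  	for_each_sg(sgl, sg, nents, i) {
  		if (sg_is_dma_bus_address(sg))
  			sg_dma_unmark_bus_address(sg);  /* nothing to unmap */
  		else
  			dma_direct_unmap_page(dev, sg->dma_address,
  					      sg_dma_len(sg), dir, attrs);
  	}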

Feedback welcome.

This series is based on v5.16-rc1. A git branch is available here:

  https://github.com/sbates130272/linux-p2pmem/  p2pdma_user_cmb_v4

Thanks,

Logan

[1] https://lore.kernel.org/all/20210916234100.122368-1-logang@deltatee.com

--

Changes since v3:
  - Add some comment and commit message cleanups I had missed for v3;
    also moved the prototypes for some of the p2pdma helpers to
    dma-map-ops.h (which I missed in v3 and was suggested in v2).
  - Add separate cleanup patch for scatterlist.h and change the macros
    to functions. (Suggested by Chaitanya and Jason, respectively)
  - Rename sg_dma_mark_pci_p2pdma() and sg_is_dma_pci_p2pdma() to
    sg_dma_mark_bus_address() and sg_is_dma_bus_address() which
    is a more generic name (As requested by Jason)
  - Fixes to some comments and commit messages as suggested by Bjorn
    and Jason.
  - Ensure swiotlb is not used with P2PDMA pages. (Per Jason)
  - The sgtable conversion in RDMA was split out and sent upstream
    separately; the new patch is only the removal. (Per Jason)
  - Moved the FOLL_PCI_P2PDMA check outside of get_dev_pagemap() as
    Jason suggested this will be removed in the near term.
  - Add two patches to ensure that zone device pages with different
    pgmaps are never merged in the block layer or
    sg_alloc_append_table_from_pages() (Per Jason)
  - Ensure synchronize_rcu() or call_rcu() is used before returning
    pages to the genalloc. (Jason pointed out that pages are not
    guaranteed to be unused on all architectures until at least
    after an RCU grace period, and that synchronize_rcu() was likely
    too slow to use in the vma close operation.)
  - Collected Acks and Reviews by Bjorn, Jason and Max.

Logan Gunthorpe (23):
  lib/scatterlist: cleanup macros into static inline functions
  lib/scatterlist: add flag for indicating P2PDMA segments in an SGL
  PCI/P2PDMA: Attempt to set map_type if it has not been set
  PCI/P2PDMA: Expose pci_p2pdma_map_type()
  PCI/P2PDMA: Introduce helpers for dma_map_sg implementations
  dma-mapping: allow EREMOTEIO return code for P2PDMA transfers
  dma-direct: support PCI P2PDMA pages in dma-direct map_sg
  dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support
  iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg
  nvme-pci: check DMA ops when indicating support for PCI P2PDMA
  nvme-pci: convert to using dma_map_sgtable()
  RDMA/core: introduce ib_dma_pci_p2p_dma_supported()
  RDMA/rw: drop pci_p2pdma_[un]map_sg()
  PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg()
  mm: introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages
  iov_iter: introduce iov_iter_get_pages_[alloc_]flags()
  block: add check when merging zone device pages
  lib/scatterlist: add check when merging zone device pages
  block: set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages()
  block: set FOLL_PCI_P2PDMA in bio_map_user_iov()
  mm: use custom page_free for P2PDMA pages
  PCI/P2PDMA: Introduce pci_mmap_p2pmem()
  nvme-pci: allow mmaping the CMB in userspace

 block/bio.c                  |  10 +-
 block/blk-map.c              |   7 +-
 drivers/infiniband/core/rw.c |  45 +---
 drivers/iommu/dma-iommu.c    |  67 +++++-
 drivers/nvme/host/core.c     |  18 +-
 drivers/nvme/host/nvme.h     |   4 +-
 drivers/nvme/host/pci.c      |  98 ++++----
 drivers/nvme/target/rdma.c   |   2 +-
 drivers/pci/Kconfig          |   6 +
 drivers/pci/p2pdma.c         | 441 ++++++++++++++++++++++++++++++-----
 include/linux/dma-map-ops.h  |  76 ++++++
 include/linux/dma-mapping.h  |   5 +
 include/linux/mm.h           |  25 ++
 include/linux/pci-p2pdma.h   |  38 +--
 include/linux/scatterlist.h  |  71 +++++-
 include/linux/uio.h          |  21 +-
 include/rdma/ib_verbs.h      |  11 +
 include/uapi/linux/magic.h   |   1 +
 kernel/dma/Kconfig           |  10 +
 kernel/dma/direct.c          |  43 +++-
 kernel/dma/direct.h          |   7 +-
 kernel/dma/mapping.c         |  22 +-
 lib/iov_iter.c               |  15 +-
 lib/scatterlist.c            |  25 +-
 mm/gup.c                     |  22 +-
 mm/memremap.c                |  12 +-
 26 files changed, 882 insertions(+), 220 deletions(-)


base-commit: fa55b7dcdc43c1aa1ba12bca9d2dd4318c2a0dbf
--
2.30.2

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-12-13 21:51   ` Chaitanya Kulkarni
  2021-12-21  9:00   ` Christoph Hellwig
  2021-11-17 21:53 ` [PATCH v4 02/23] lib/scatterlist: add flag for indicating P2PDMA segments in an SGL Logan Gunthorpe
                   ` (21 subsequent siblings)
  22 siblings, 2 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Jason Gunthorpe

Convert the sg_is_chain(), sg_is_last() and sg_chain_ptr() macros
into static inline functions. There's no reason for these to be macros
and static inline functions are generally preferred these days.

Also introduce the SG_PAGE_LINK_MASK define so that the P2PDMA work,
which adds another bit to this mask, can do so more easily.

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 include/linux/scatterlist.h | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 266754a55327..7ff9d6386c12 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -69,10 +69,27 @@ struct sg_append_table {
  * a valid sg entry, or whether it points to the start of a new scatterlist.
  * Those low bits are there for everyone! (thanks mason :-)
  */
-#define sg_is_chain(sg)		((sg)->page_link & SG_CHAIN)
-#define sg_is_last(sg)		((sg)->page_link & SG_END)
-#define sg_chain_ptr(sg)	\
-	((struct scatterlist *) ((sg)->page_link & ~(SG_CHAIN | SG_END)))
+#define SG_PAGE_LINK_MASK (SG_CHAIN | SG_END)
+
+static inline unsigned int __sg_flags(struct scatterlist *sg)
+{
+	return sg->page_link & SG_PAGE_LINK_MASK;
+}
+
+static inline struct scatterlist *sg_chain_ptr(struct scatterlist *sg)
+{
+	return (struct scatterlist *)(sg->page_link & ~SG_PAGE_LINK_MASK);
+}
+
+static inline bool sg_is_chain(struct scatterlist *sg)
+{
+	return __sg_flags(sg) & SG_CHAIN;
+}
+
+static inline bool sg_is_last(struct scatterlist *sg)
+{
+	return __sg_flags(sg) & SG_END;
+}
 
 /**
  * sg_assign_page - Assign a given page to an SG entry
@@ -92,7 +109,7 @@ static inline void sg_assign_page(struct scatterlist *sg, struct page *page)
 	 * In order for the low bit stealing approach to work, pages
 	 * must be aligned at a 32-bit boundary as a minimum.
 	 */
-	BUG_ON((unsigned long) page & (SG_CHAIN | SG_END));
+	BUG_ON((unsigned long)page & SG_PAGE_LINK_MASK);
 #ifdef CONFIG_DEBUG_SG
 	BUG_ON(sg_is_chain(sg));
 #endif
@@ -126,7 +143,7 @@ static inline struct page *sg_page(struct scatterlist *sg)
 #ifdef CONFIG_DEBUG_SG
 	BUG_ON(sg_is_chain(sg));
 #endif
-	return (struct page *)((sg)->page_link & ~(SG_CHAIN | SG_END));
+	return (struct page *)((sg)->page_link & ~SG_PAGE_LINK_MASK);
 }
 
 /**
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 02/23] lib/scatterlist: add flag for indicating P2PDMA segments in an SGL
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
  2021-11-17 21:53 ` [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-12-13 21:55   ` Chaitanya Kulkarni
  2021-12-21  9:02   ` Christoph Hellwig
  2021-11-17 21:53 ` [PATCH v4 03/23] PCI/P2PDMA: Attempt to set map_type if it has not been set Logan Gunthorpe
                   ` (20 subsequent siblings)
  22 siblings, 2 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

Make use of the third free LSB in scatterlist's page_link on 64-bit systems.

The extra bit will be used by the dma_map_sg() implementations to determine
when a given SGL segment's dma_address points to a PCI bus address.
dma_unmap_sg() will then need to perform different cleanup when a
segment is marked as a bus address.

Create a CONFIG_NEED_SG_DMA_BUS_ADDR_FLAG bool which depends on
CONFIG_64BIT (so there is space in the page_link for the new flag).
CONFIG_PCI_P2PDMA will then select this, which means PCI P2PDMA will
require CONFIG_64BIT. This should be acceptable as the majority of P2PDMA
use cases are restricted to newer root complexes and typically require the
extra address space for memory BARs used in the transactions.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/pci/Kconfig         |  5 +++++
 include/linux/scatterlist.h | 44 ++++++++++++++++++++++++++++++++++++-
 kernel/dma/Kconfig          | 10 +++++++++
 3 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index 43e615aa12ff..95f29601a4df 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -164,6 +164,11 @@ config PCI_PASID
 config PCI_P2PDMA
 	bool "PCI peer-to-peer transfer support"
 	depends on ZONE_DEVICE
+	#
+	# The need for the scatterlist DMA bus address flag means PCI P2PDMA
+	# requires 64bit
+	#
+	select NEED_SG_DMA_BUS_ADDR_FLAG
 	select GENERIC_ALLOCATOR
 	help
 	  Enables drivers to do PCI peer-to-peer transactions to and from
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 7ff9d6386c12..917c09dcc566 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -64,12 +64,24 @@ struct sg_append_table {
 #define SG_CHAIN	0x01UL
 #define SG_END		0x02UL
 
+/*
+ * bit 2 is the third free bit in the page_link on 64bit systems which
+ * is used by dma_unmap_sg() to determine if the dma_address is a
+ * bus address when doing P2PDMA.
+ */
+#ifdef CONFIG_NEED_SG_DMA_BUS_ADDR_FLAG
+#define SG_DMA_BUS_ADDRESS	0x04UL
+static_assert(__alignof__(struct page) >= 8);
+#else
+#define SG_DMA_BUS_ADDRESS	0x00UL
+#endif
+
 /*
  * We overload the LSB of the page pointer to indicate whether it's
  * a valid sg entry, or whether it points to the start of a new scatterlist.
  * Those low bits are there for everyone! (thanks mason :-)
  */
-#define SG_PAGE_LINK_MASK (SG_CHAIN | SG_END)
+#define SG_PAGE_LINK_MASK (SG_CHAIN | SG_END | SG_DMA_BUS_ADDRESS)
 
 static inline unsigned int __sg_flags(struct scatterlist *sg)
 {
@@ -91,6 +103,11 @@ static inline bool sg_is_last(struct scatterlist *sg)
 	return __sg_flags(sg) & SG_END;
 }
 
+static inline bool sg_is_dma_bus_address(struct scatterlist *sg)
+{
+	return __sg_flags(sg) & SG_DMA_BUS_ADDRESS;
+}
+
 /**
  * sg_assign_page - Assign a given page to an SG entry
  * @sg:		    SG entry
@@ -245,6 +262,31 @@ static inline void sg_unmark_end(struct scatterlist *sg)
 	sg->page_link &= ~SG_END;
 }
 
+/**
+ * sg_dma_mark_bus_address - Mark the scatterlist entry as a bus address
+ * @sg:		 SG entry
+ *
+ * Description:
+ *   Marks the passed in sg entry to indicate that the dma_address is
+ *   a bus address and doesn't need to be unmapped.
+ **/
+static inline void sg_dma_mark_bus_address(struct scatterlist *sg)
+{
+	sg->page_link |= SG_DMA_BUS_ADDRESS;
+}
+
+/**
+ * sg_dma_unmark_bus_address - Unmark the scatterlist entry as a bus address
+ * @sg:		 SG entry
+ *
+ * Description:
+ *   Clears the bus address mark.
+ **/
+static inline void sg_dma_unmark_bus_address(struct scatterlist *sg)
+{
+	sg->page_link &= ~SG_DMA_BUS_ADDRESS;
+}
+
 /**
  * sg_phys - Return physical address of an sg entry
  * @sg:	     SG entry
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 1b02179758cb..6e5e1d8e1329 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -27,6 +27,16 @@ config ARCH_HAS_DMA_MAP_DIRECT
 config NEED_SG_DMA_LENGTH
 	bool
 
+#
+# PCI P2PDMA needs to store bus addresses in the SGL's dma_address so that the
+# dma_unmap_sg() implementations can know not to unmap those segments.
+# The flag is stored in the 3rd bit in the page_link field in the SGL
+# which means this can only be done on 64bit systems.
+#
+config NEED_SG_DMA_BUS_ADDR_FLAG
+	depends on 64BIT
+	bool
+
 config NEED_DMA_MAP_STATE
 	bool
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 03/23] PCI/P2PDMA: Attempt to set map_type if it has not been set
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
  2021-11-17 21:53 ` [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions Logan Gunthorpe
  2021-11-17 21:53 ` [PATCH v4 02/23] lib/scatterlist: add flag for indicating P2PDMA segments in an SGL Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-12-13 22:00   ` Chaitanya Kulkarni
  2021-11-17 21:53 ` [PATCH v4 04/23] PCI/P2PDMA: Expose pci_p2pdma_map_type() Logan Gunthorpe
                   ` (19 subsequent siblings)
  22 siblings, 1 reply; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Bjorn Helgaas

Attempt to find the mapping type for P2PDMA pages on the first
DMA map attempt if it has not been done ahead of time.

Previously, the mapping type was expected to be calculated ahead of
time, but if pages are to come from userspace then there's no
way to ensure the path was checked ahead of time.

This change calculates the mapping type if it hasn't been pre-calculated,
so it is no longer invalid to call pci_p2pdma_map_sg() before the mapping
type is known. Drop the WARN_ON for that case.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
---
 drivers/pci/p2pdma.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 8d47cb7218d1..9a39c2c307ab 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -848,6 +848,7 @@ static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
 	struct pci_dev *provider = to_p2p_pgmap(pgmap)->provider;
 	struct pci_dev *client;
 	struct pci_p2pdma *p2pdma;
+	int dist;
 
 	if (!provider->p2pdma)
 		return PCI_P2PDMA_MAP_NOT_SUPPORTED;
@@ -864,6 +865,10 @@ static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
 		type = xa_to_value(xa_load(&p2pdma->map_types,
 					   map_types_idx(client)));
 	rcu_read_unlock();
+
+	if (type == PCI_P2PDMA_MAP_UNKNOWN)
+		return calc_map_type_and_dist(provider, client, &dist, true);
+
 	return type;
 }
 
@@ -906,7 +911,6 @@ int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 	case PCI_P2PDMA_MAP_BUS_ADDR:
 		return __pci_p2pdma_map_sg(p2p_pgmap, dev, sg, nents);
 	default:
-		WARN_ON_ONCE(1);
 		return 0;
 	}
 }
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 04/23] PCI/P2PDMA: Expose pci_p2pdma_map_type()
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (2 preceding siblings ...)
  2021-11-17 21:53 ` [PATCH v4 03/23] PCI/P2PDMA: Attempt to set map_type if it has not been set Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-12-13 22:05   ` Chaitanya Kulkarni
  2021-11-17 21:53 ` [PATCH v4 05/23] PCI/P2PDMA: Introduce helpers for dma_map_sg implementations Logan Gunthorpe
                   ` (18 subsequent siblings)
  22 siblings, 1 reply; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Bjorn Helgaas, Jason Gunthorpe

pci_p2pdma_map_type() will be needed by the dma-iommu map_sg
implementation, which must determine the mapping type ahead of
actually creating the IOMMU mapping.

The prototype for this helper is added to dma-map-ops.h as it is only
useful to dma map implementations and doesn't need to pollute the public
pci-p2pdma header.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/pci/p2pdma.c        | 25 +++++++++++++--------
 include/linux/dma-map-ops.h | 45 +++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+), 9 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 9a39c2c307ab..02a13a5ac680 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -10,6 +10,7 @@
 
 #define pr_fmt(fmt) "pci-p2pdma: " fmt
 #include <linux/ctype.h>
+#include <linux/dma-map-ops.h>
 #include <linux/pci-p2pdma.h>
 #include <linux/module.h>
 #include <linux/slab.h>
@@ -20,13 +21,6 @@
 #include <linux/seq_buf.h>
 #include <linux/xarray.h>
 
-enum pci_p2pdma_map_type {
-	PCI_P2PDMA_MAP_UNKNOWN = 0,
-	PCI_P2PDMA_MAP_NOT_SUPPORTED,
-	PCI_P2PDMA_MAP_BUS_ADDR,
-	PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
-};
-
 struct pci_p2pdma {
 	struct gen_pool *pool;
 	bool p2pmem_published;
@@ -841,8 +835,21 @@ void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
 }
 EXPORT_SYMBOL_GPL(pci_p2pmem_publish);
 
-static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
-						    struct device *dev)
+/**
+ * pci_p2pdma_map_type - return the type of mapping that should be used for
+ *	a given device and pgmap
+ * @pgmap: the pagemap of a page to determine the mapping type for
+ * @dev: device that is mapping the page
+ *
+ * Returns one of:
+ *	PCI_P2PDMA_MAP_NOT_SUPPORTED - The mapping should not be done
+ *	PCI_P2PDMA_MAP_BUS_ADDR - The mapping should use the PCI bus address
+ *	PCI_P2PDMA_MAP_THRU_HOST_BRIDGE - The mapping should be done normally
+ *		using the CPU physical address (in dma-direct) or an IOVA
+ *		mapping for the IOMMU.
+ */
+enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
+					     struct device *dev)
 {
 	enum pci_p2pdma_map_type type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
 	struct pci_dev *provider = to_p2p_pgmap(pgmap)->provider;
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 0d5b06b3a4a6..d693a0e33bac 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -379,4 +379,49 @@ static inline void debug_dma_dump_mappings(struct device *dev)
 
 extern const struct dma_map_ops dma_dummy_ops;
 
+enum pci_p2pdma_map_type {
+	/*
+	 * PCI_P2PDMA_MAP_UNKNOWN: Used internally for indicating the mapping
+	 * type hasn't been calculated yet. Functions that return this enum
+	 * never return this value.
+	 */
+	PCI_P2PDMA_MAP_UNKNOWN = 0,
+
+	/*
+	 * PCI_P2PDMA_MAP_NOT_SUPPORTED: Indicates the transaction will
+	 * traverse the host bridge and the host bridge is not in the
+	 * allowlist. DMA Mapping routines should return an error when
+	 * this is returned.
+	 */
+	PCI_P2PDMA_MAP_NOT_SUPPORTED,
+
+	/*
+	 * PCI_P2PDMA_MAP_BUS_ADDR: Indicates that two devices can talk to
+	 * each other directly through a PCI switch and the transaction will
+	 * not traverse the host bridge. Such a mapping should program
+	 * the DMA engine with PCI bus addresses.
+	 */
+	PCI_P2PDMA_MAP_BUS_ADDR,
+
+	/*
+	 * PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: Indicates two devices can talk
+	 * to each other, but the transaction traverses a host bridge on the
+	 * allowlist. In this case, a normal mapping either with CPU physical
+	 * addresses (in the case of dma-direct) or IOVA addresses (in the
+	 * case of IOMMUs) should be used to program the DMA engine.
+	 */
+	PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
+};
+
+#ifdef CONFIG_PCI_P2PDMA
+enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
+					     struct device *dev);
+#else /* CONFIG_PCI_P2PDMA */
+static inline enum pci_p2pdma_map_type
+pci_p2pdma_map_type(struct dev_pagemap *pgmap, struct device *dev)
+{
+	return PCI_P2PDMA_MAP_NOT_SUPPORTED;
+}
+#endif /* CONFIG_PCI_P2PDMA */
+
 #endif /* _LINUX_DMA_MAP_OPS_H */
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 05/23] PCI/P2PDMA: Introduce helpers for dma_map_sg implementations
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (3 preceding siblings ...)
  2021-11-17 21:53 ` [PATCH v4 04/23] PCI/P2PDMA: Expose pci_p2pdma_map_type() Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-11-17 21:53 ` [PATCH v4 06/23] dma-mapping: allow EREMOTEIO return code for P2PDMA transfers Logan Gunthorpe
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Bjorn Helgaas

Add pci_p2pdma_map_segment() as a helper for simple dma_map_sg()
implementations. It takes a scatterlist segment that must point to a
pci_p2pdma struct page and will map it if the mapping requires a bus
address.

The return value indicates whether the mapping required a bus address
or whether the caller still needs to map the segment normally. If the
segment should not be mapped, -EREMOTEIO is returned.

This helper uses a state structure to track the changes to the
pgmap across calls and avoid needing to look up the xarray for
every page.

Also add pci_p2pdma_map_bus_segment() which is useful for IOMMU
dma_map_sg() implementations where the sg segment containing the page
differs from the sg segment containing the DMA address.

Prototypes for these helpers are added to dma-map-ops.h as they are only
useful to dma map implementations and don't need to pollute the public
pci-p2pdma header.
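
A minimal sketch of how a dma_map_sg() implementation is expected to use
the state helper (this mirrors the dma-direct conversion later in the
series; the surrounding mapping code is elided):

  	struct pci_p2pdma_map_state p2pdma_state = {};
  	struct scatterlist *sg;
  	int i;

  	for_each_sg(sgl, sg, nents, i) {
  		if (is_pci_p2pdma_page(sg_page(sg))) {
  			switch (pci_p2pdma_map_segment(&p2pdma_state, dev, sg)) {
  			case PCI_P2PDMA_MAP_BUS_ADDR:
  				continue;	/* dma_address already set */
  			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
  				break;		/* map normally below */
  			default:
  				return -EREMOTEIO;
  			}
  		}
  		/* ... map the segment with the regular path ... */
  	}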

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
---
 drivers/pci/p2pdma.c        | 59 +++++++++++++++++++++++++++++++++++++
 include/linux/dma-map-ops.h | 21 +++++++++++++
 2 files changed, 80 insertions(+)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 02a13a5ac680..6ad3a8816677 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -944,6 +944,65 @@ void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
 }
 EXPORT_SYMBOL_GPL(pci_p2pdma_unmap_sg_attrs);
 
+/**
+ * pci_p2pdma_map_segment - map an sg segment determining the mapping type
+ * @state: State structure that should be declared outside of the for_each_sg()
+ *	loop and initialized to zero.
+ * @dev: DMA device that's doing the mapping operation
+ * @sg: scatterlist segment to map
+ *
+ * This is a helper to be used by non-IOMMU dma_map_sg() implementations where
+ * the sg segment is the same for the page_link and the dma_address.
+ *
+ * Attempt to map a single segment in an SGL with the PCI bus address.
+ * The segment must point to a PCI P2PDMA page and thus must be
+ * wrapped in an is_pci_p2pdma_page(sg_page(sg)) check.
+ *
+ * Returns the type of mapping used and maps the page if the type is
+ * PCI_P2PDMA_MAP_BUS_ADDR.
+ */
+enum pci_p2pdma_map_type
+pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
+		       struct scatterlist *sg)
+{
+	if (state->pgmap != sg_page(sg)->pgmap) {
+		state->pgmap = sg_page(sg)->pgmap;
+		state->map = pci_p2pdma_map_type(state->pgmap, dev);
+		state->bus_off = to_p2p_pgmap(state->pgmap)->bus_offset;
+	}
+
+	if (state->map == PCI_P2PDMA_MAP_BUS_ADDR) {
+		sg->dma_address = sg_phys(sg) + state->bus_off;
+		sg_dma_len(sg) = sg->length;
+		sg_dma_mark_bus_address(sg);
+	}
+
+	return state->map;
+}
+
+/**
+ * pci_p2pdma_map_bus_segment - map an sg segment predetermined to
+ *	be mapped with PCI_P2PDMA_MAP_BUS_ADDR
+ * @pg_sg: scatterlist segment with the page to map
+ * @dma_sg: scatterlist segment to assign a DMA address to
+ *
+ * This is a helper for iommu dma_map_sg() implementations when the
+ * segment for the DMA address differs from the segment containing the
+ * source page.
+ *
+ * pci_p2pdma_map_type() must have already been called on the pg_sg and
+ * returned PCI_P2PDMA_MAP_BUS_ADDR.
+ */
+void pci_p2pdma_map_bus_segment(struct scatterlist *pg_sg,
+				struct scatterlist *dma_sg)
+{
+	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(sg_page(pg_sg)->pgmap);
+
+	dma_sg->dma_address = sg_phys(pg_sg) + pgmap->bus_offset;
+	sg_dma_len(dma_sg) = pg_sg->length;
+	sg_dma_mark_bus_address(dma_sg);
+}
+
 /**
  * pci_p2pdma_enable_store - parse a configfs/sysfs attribute store
  *		to enable p2pdma
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index d693a0e33bac..752f91e5eb5d 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -413,15 +413,36 @@ enum pci_p2pdma_map_type {
 	PCI_P2PDMA_MAP_THRU_HOST_BRIDGE,
 };
 
+struct pci_p2pdma_map_state {
+	struct dev_pagemap *pgmap;
+	int map;
+	u64 bus_off;
+};
+
 #ifdef CONFIG_PCI_P2PDMA
 enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
 					     struct device *dev);
+enum pci_p2pdma_map_type
+pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
+		       struct scatterlist *sg);
+void pci_p2pdma_map_bus_segment(struct scatterlist *pg_sg,
+				struct scatterlist *dma_sg);
 #else /* CONFIG_PCI_P2PDMA */
 static inline enum pci_p2pdma_map_type
 pci_p2pdma_map_type(struct dev_pagemap *pgmap, struct device *dev)
 {
 	return PCI_P2PDMA_MAP_NOT_SUPPORTED;
 }
+static inline enum pci_p2pdma_map_type
+pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
+		       struct scatterlist *sg)
+{
+	return PCI_P2PDMA_MAP_NOT_SUPPORTED;
+}
+static inline void pci_p2pdma_map_bus_segment(struct scatterlist *pg_sg,
+					      struct scatterlist *dma_sg)
+{
+}
 #endif /* CONFIG_PCI_P2PDMA */
 
 #endif /* _LINUX_DMA_MAP_OPS_H */
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 06/23] dma-mapping: allow EREMOTEIO return code for P2PDMA transfers
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (4 preceding siblings ...)
  2021-11-17 21:53 ` [PATCH v4 05/23] PCI/P2PDMA: Introduce helpers for dma_map_sg implementations Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-11-17 21:53 ` [PATCH v4 07/23] dma-direct: support PCI P2PDMA pages in dma-direct map_sg Logan Gunthorpe
                   ` (16 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Jason Gunthorpe

Add EREMOTEIO error return to dma_map_sgtable() which will be used
by .map_sg() implementations that detect P2PDMA pages that the
underlying DMA device cannot access.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
---
 kernel/dma/mapping.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 9478eccd1c8e..c056a1468189 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -197,7 +197,7 @@ static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 	if (ents > 0)
 		debug_dma_map_sg(dev, sg, nents, ents, dir, attrs);
 	else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM &&
-			      ents != -EIO))
+			      ents != -EIO && ents != -EREMOTEIO))
 		return -EIO;
 
 	return ents;
@@ -255,6 +255,8 @@ EXPORT_SYMBOL(dma_map_sg_attrs);
  *		complete the mapping. Should succeed if retried later.
  *   -EIO	Legacy error code with an unknown meaning. eg. this is
  *		returned if a lower level call returned DMA_MAPPING_ERROR.
+ *   -EREMOTEIO	The DMA device cannot access P2PDMA memory specified in
+ *		the sg_table. This will not succeed if retried.
  */
 int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
 		    enum dma_data_direction dir, unsigned long attrs)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 07/23] dma-direct: support PCI P2PDMA pages in dma-direct map_sg
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (5 preceding siblings ...)
  2021-11-17 21:53 ` [PATCH v4 06/23] dma-mapping: allow EREMOTEIO return code for P2PDMA transfers Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-11-17 21:53 ` [PATCH v4 08/23] dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support Logan Gunthorpe
                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

Add PCI P2PDMA support for dma_direct_map_sg() so that it can map
PCI P2PDMA pages directly without a hack in the callers. This allows
for heterogeneous SGLs that contain both P2PDMA and regular pages.

A P2PDMA page may have three possible outcomes when being mapped:
  1) If the data path between the two devices doesn't go through the
     root port, then it should be mapped with a PCI bus address
  2) If the data path goes through the host bridge, it should be mapped
     normally, as though it were a CPU physical address
  3) It is not possible for the two devices to communicate and thus
     the mapping operation should fail (and it will return -EREMOTEIO).

SGL segments that contain PCI bus addresses are marked with
sg_dma_mark_bus_address() and are ignored when unmapped.

P2PDMA mappings are also failed if swiotlb needs to be used on the
mapping.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 kernel/dma/direct.c | 43 +++++++++++++++++++++++++++++++++++++------
 kernel/dma/direct.h |  7 ++++++-
 2 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 4c6c5e0635e3..f2368263f847 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -421,29 +421,60 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 }
 
+/*
+ * Unmaps segments, except for ones marked as pci_p2pdma which do not
+ * require any further action as they contain a bus address.
+ */
 void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
 		int nents, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct scatterlist *sg;
 	int i;
 
-	for_each_sg(sgl, sg, nents, i)
-		dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg), dir,
-			     attrs);
+	for_each_sg(sgl, sg, nents, i) {
+		if (sg_is_dma_bus_address(sg))
+			sg_dma_unmark_bus_address(sg);
+		else
+			dma_direct_unmap_page(dev, sg->dma_address,
+					      sg_dma_len(sg), dir, attrs);
+	}
 }
 #endif
 
 int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	int i;
+	struct pci_p2pdma_map_state p2pdma_state = {};
+	enum pci_p2pdma_map_type map;
 	struct scatterlist *sg;
+	int i, ret;
 
 	for_each_sg(sgl, sg, nents, i) {
+		if (is_pci_p2pdma_page(sg_page(sg))) {
+			map = pci_p2pdma_map_segment(&p2pdma_state, dev, sg);
+			switch (map) {
+			case PCI_P2PDMA_MAP_BUS_ADDR:
+				continue;
+			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+				/*
+				 * Any P2P mapping that traverses the PCI
+				 * host bridge must be mapped with CPU physical
+				 * address and not PCI bus addresses. This is
+				 * done with dma_direct_map_page() below.
+				 */
+				break;
+			default:
+				ret = -EREMOTEIO;
+				goto out_unmap;
+			}
+		}
+
 		sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
 				sg->offset, sg->length, dir, attrs);
-		if (sg->dma_address == DMA_MAPPING_ERROR)
+		if (sg->dma_address == DMA_MAPPING_ERROR) {
+			ret = -EIO;
 			goto out_unmap;
+		}
 		sg_dma_len(sg) = sg->length;
 	}
 
@@ -451,7 +482,7 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 
 out_unmap:
 	dma_direct_unmap_sg(dev, sgl, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
-	return -EIO;
+	return ret;
 }
 
 dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 4632b0f4f72e..a33152d79069 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,10 +87,15 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (is_swiotlb_force_bounce(dev))
+	if (is_swiotlb_force_bounce(dev)) {
+		if (is_pci_p2pdma_page(page))
+			return DMA_MAPPING_ERROR;
 		return swiotlb_map(dev, phys, size, dir, attrs);
+	}
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
+		if (is_pci_p2pdma_page(page))
+			return DMA_MAPPING_ERROR;
 		if (swiotlb_force != SWIOTLB_NO_FORCE)
 			return swiotlb_map(dev, phys, size, dir, attrs);
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 08/23] dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (6 preceding siblings ...)
  2021-11-17 21:53 ` [PATCH v4 07/23] dma-direct: support PCI P2PDMA pages in dma-direct map_sg Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-11-17 21:53 ` [PATCH v4 09/23] iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg Logan Gunthorpe
                   ` (14 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Jason Gunthorpe

Add a flags member to the dma_map_ops structure with one flag to
indicate support for PCI P2PDMA.

Also, add a helper to check if a device supports PCI P2PDMA.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
---
 include/linux/dma-map-ops.h | 10 ++++++++++
 include/linux/dma-mapping.h |  5 +++++
 kernel/dma/mapping.c        | 18 ++++++++++++++++++
 3 files changed, 33 insertions(+)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 752f91e5eb5d..4d4161d58ce0 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -11,7 +11,17 @@
 
 struct cma;
 
+/*
+ * Values for struct dma_map_ops.flags:
+ *
+ * DMA_F_PCI_P2PDMA_SUPPORTED: Indicates the dma_map_ops implementation can
+ * handle PCI P2PDMA pages in the map_sg/unmap_sg operation.
+ */
+#define DMA_F_PCI_P2PDMA_SUPPORTED     (1 << 0)
+
 struct dma_map_ops {
+	unsigned int flags;
+
 	void *(*alloc)(struct device *dev, size_t size,
 			dma_addr_t *dma_handle, gfp_t gfp,
 			unsigned long attrs);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index dca2b1355bb1..f7c61b2b4b5e 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -140,6 +140,7 @@ int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
 		unsigned long attrs);
 bool dma_can_mmap(struct device *dev);
 int dma_supported(struct device *dev, u64 mask);
+bool dma_pci_p2pdma_supported(struct device *dev);
 int dma_set_mask(struct device *dev, u64 mask);
 int dma_set_coherent_mask(struct device *dev, u64 mask);
 u64 dma_get_required_mask(struct device *dev);
@@ -250,6 +251,10 @@ static inline int dma_supported(struct device *dev, u64 mask)
 {
 	return 0;
 }
+static inline bool dma_pci_p2pdma_supported(struct device *dev)
+{
+	return false;
+}
 static inline int dma_set_mask(struct device *dev, u64 mask)
 {
 	return -EIO;
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index c056a1468189..74858326ef94 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -724,6 +724,24 @@ int dma_supported(struct device *dev, u64 mask)
 }
 EXPORT_SYMBOL(dma_supported);
 
+bool dma_pci_p2pdma_supported(struct device *dev)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	/* if ops is not set, dma direct will be used which supports P2PDMA */
+	if (!ops)
+		return true;
+
+	/*
+	 * Note: dma_ops_bypass is not checked here because P2PDMA should
+	 * not be used with dma mapping ops that do not have support even
+	 * if the specific device is bypassing them.
+	 */
+
+	return ops->flags & DMA_F_PCI_P2PDMA_SUPPORTED;
+}
+EXPORT_SYMBOL_GPL(dma_pci_p2pdma_supported);
+
 #ifdef CONFIG_ARCH_HAS_DMA_SET_MASK
 void arch_dma_set_mask(struct device *dev, u64 mask);
 #else
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 09/23] iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (7 preceding siblings ...)
  2021-11-17 21:53 ` [PATCH v4 08/23] dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-11-17 21:53 ` [PATCH v4 10/23] nvme-pci: check DMA ops when indicating support for PCI P2PDMA Logan Gunthorpe
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Jason Gunthorpe

When a PCI P2PDMA page is seen, set the IOVA length of the segment
to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
apply the appropriate bus address to the segment. The IOVA is not
created if the scatterlist only consists of P2PDMA pages.

A P2PDMA page may have three possible outcomes when being mapped:
  1) If the data path between the two devices doesn't go through
     the root port, then it should be mapped with a PCI bus address
  2) If the data path goes through the host bridge, it should be mapped
     normally with an IOMMU IOVA.
  3) It is not possible for the two devices to communicate and thus
     the mapping operation should fail (and it will return -EREMOTEIO).

Similar to dma-direct, bus address segments are marked with
sg_dma_mark_bus_address(). On unmap, P2PDMA segments are skipped
over when determining the start and end IOVA addresses.

With this change, the flags variable in the dma_map_ops is set to
DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for P2PDMA pages.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/dma-iommu.c | 67 +++++++++++++++++++++++++++++++++++----
 1 file changed, 60 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b42e38a0dbe2..c70c661d8f98 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -883,6 +883,16 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
 		sg_dma_address(s) = DMA_MAPPING_ERROR;
 		sg_dma_len(s) = 0;
 
+		if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
+			if (i > 0)
+				cur = sg_next(cur);
+
+			pci_p2pdma_map_bus_segment(s, cur);
+			count++;
+			cur_len = 0;
+			continue;
+		}
+
 		/*
 		 * Now fill in the real DMA data. If...
 		 * - there is a valid output segment to append to
@@ -979,6 +989,8 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	struct iova_domain *iovad = &cookie->iovad;
 	struct scatterlist *s, *prev = NULL;
 	int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
+	struct dev_pagemap *pgmap = NULL;
+	enum pci_p2pdma_map_type map_type;
 	dma_addr_t iova;
 	size_t iova_len = 0;
 	unsigned long mask = dma_get_seg_boundary(dev);
@@ -1014,6 +1026,35 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 		s_length = iova_align(iovad, s_length + s_iova_off);
 		s->length = s_length;
 
+		if (is_pci_p2pdma_page(sg_page(s))) {
+			if (sg_page(s)->pgmap != pgmap) {
+				pgmap = sg_page(s)->pgmap;
+				map_type = pci_p2pdma_map_type(pgmap, dev);
+			}
+
+			switch (map_type) {
+			case PCI_P2PDMA_MAP_BUS_ADDR:
+				/*
+				 * A zero length will be ignored by
+				 * iommu_map_sg() and then can be detected
+				 * in __finalise_sg() to actually map the
+				 * bus address.
+				 */
+				s->length = 0;
+				continue;
+			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+				/*
+				 * Mapping through host bridge should be
+				 * mapped with regular IOVAs, thus we
+				 * do nothing here and continue below.
+				 */
+				break;
+			default:
+				ret = -EREMOTEIO;
+				goto out_restore_sg;
+			}
+		}
+
 		/*
 		 * Due to the alignment of our single IOVA allocation, we can
 		 * depend on these assumptions about the segment boundary mask:
@@ -1036,6 +1077,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 		prev = s;
 	}
 
+	if (!iova_len)
+		return __finalise_sg(dev, sg, nents, 0);
+
 	iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
 	if (!iova) {
 		ret = -ENOMEM;
@@ -1057,7 +1101,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 out_restore_sg:
 	__invalidate_sg(sg, nents);
 out:
-	if (ret != -ENOMEM)
+	if (ret != -ENOMEM && ret != -EREMOTEIO)
 		return -EINVAL;
 	return ret;
 }
@@ -1065,7 +1109,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, unsigned long attrs)
 {
-	dma_addr_t start, end;
+	dma_addr_t end, start = DMA_MAPPING_ERROR;
 	struct scatterlist *tmp;
 	int i;
 
@@ -1081,14 +1125,22 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 	 * The scatterlist segments are mapped into a single
 	 * contiguous IOVA allocation, so this is incredibly easy.
 	 */
-	start = sg_dma_address(sg);
-	for_each_sg(sg_next(sg), tmp, nents - 1, i) {
+	for_each_sg(sg, tmp, nents, i) {
+		if (sg_is_dma_bus_address(tmp)) {
+			sg_dma_unmark_bus_address(tmp);
+			continue;
+		}
 		if (sg_dma_len(tmp) == 0)
 			break;
-		sg = tmp;
+
+		if (start == DMA_MAPPING_ERROR)
+			start = sg_dma_address(tmp);
+
+		end = sg_dma_address(tmp) + sg_dma_len(tmp);
 	}
-	end = sg_dma_address(sg) + sg_dma_len(sg);
-	__iommu_dma_unmap(dev, start, end - start);
+
+	if (start != DMA_MAPPING_ERROR)
+		__iommu_dma_unmap(dev, start, end - start);
 }
 
 static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
@@ -1281,6 +1333,7 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
 }
 
 static const struct dma_map_ops iommu_dma_ops = {
+	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED,
 	.alloc			= iommu_dma_alloc,
 	.free			= iommu_dma_free,
 	.alloc_pages		= dma_common_alloc_pages,
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 10/23] nvme-pci: check DMA ops when indicating support for PCI P2PDMA
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (8 preceding siblings ...)
  2021-11-17 21:53 ` [PATCH v4 09/23] iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-12-13 22:10   ` Chaitanya Kulkarni
  2021-11-17 21:53 ` [PATCH v4 11/23] nvme-pci: convert to using dma_map_sgtable() Logan Gunthorpe
                   ` (12 subsequent siblings)
  22 siblings, 1 reply; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

Introduce a supports_pci_p2pdma() operation in nvme_ctrl_ops to
replace the fixed NVME_F_PCI_P2PDMA flag such that the dma_map_ops
flags can be checked for PCI P2PDMA support.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/nvme/host/core.c |  3 ++-
 drivers/nvme/host/nvme.h |  2 +-
 drivers/nvme/host/pci.c  | 11 +++++++++--
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 4b5de8f5435a..344414351314 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3819,7 +3819,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, ns->queue);
 
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue);
-	if (ctrl->ops->flags & NVME_F_PCI_P2PDMA)
+	if (ctrl->ops->supports_pci_p2pdma &&
+	    ctrl->ops->supports_pci_p2pdma(ctrl))
 		blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
 
 	ns->ctrl = ctrl;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index b334af8aa264..a9f60b12a32b 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -486,7 +486,6 @@ struct nvme_ctrl_ops {
 	unsigned int flags;
 #define NVME_F_FABRICS			(1 << 0)
 #define NVME_F_METADATA_SUPPORTED	(1 << 1)
-#define NVME_F_PCI_P2PDMA		(1 << 2)
 	int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val);
 	int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val);
 	int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val);
@@ -494,6 +493,7 @@ struct nvme_ctrl_ops {
 	void (*submit_async_event)(struct nvme_ctrl *ctrl);
 	void (*delete_ctrl)(struct nvme_ctrl *ctrl);
 	int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
+	bool (*supports_pci_p2pdma)(struct nvme_ctrl *ctrl);
 };
 
 /*
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index ca2ee806d74b..72f623999ba5 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2900,17 +2900,24 @@ static int nvme_pci_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
 	return snprintf(buf, size, "%s\n", dev_name(&pdev->dev));
 }
 
+static bool nvme_pci_supports_pci_p2pdma(struct nvme_ctrl *ctrl)
+{
+	struct nvme_dev *dev = to_nvme_dev(ctrl);
+
+	return dma_pci_p2pdma_supported(dev->dev);
+}
+
 static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
 	.name			= "pcie",
 	.module			= THIS_MODULE,
-	.flags			= NVME_F_METADATA_SUPPORTED |
-				  NVME_F_PCI_P2PDMA,
+	.flags			= NVME_F_METADATA_SUPPORTED,
 	.reg_read32		= nvme_pci_reg_read32,
 	.reg_write32		= nvme_pci_reg_write32,
 	.reg_read64		= nvme_pci_reg_read64,
 	.free_ctrl		= nvme_pci_free_ctrl,
 	.submit_async_event	= nvme_pci_submit_async_event,
 	.get_address		= nvme_pci_get_address,
+	.supports_pci_p2pdma	= nvme_pci_supports_pci_p2pdma,
 };
 
 static int nvme_dev_map(struct nvme_dev *dev)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 11/23] nvme-pci: convert to using dma_map_sgtable()
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (9 preceding siblings ...)
  2021-11-17 21:53 ` [PATCH v4 10/23] nvme-pci: check DMA ops when indicating support for PCI P2PDMA Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-12-13 22:21   ` Chaitanya Kulkarni
  2021-11-17 21:53 ` [PATCH v4 12/23] RDMA/core: introduce ib_dma_pci_p2p_dma_supported() Logan Gunthorpe
                   ` (11 subsequent siblings)
  22 siblings, 1 reply; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Max Gurtovoy

The dma_map operations now support P2PDMA pages directly. So remove
the calls to pci_p2pdma_[un]map_sg_attrs() and replace them with calls
to dma_map_sgtable().

dma_map_sgtable() returns more complete error codes than dma_map_sg()
and allows differentiating EREMOTEIO errors in case an unsupported
P2PDMA transfer is requested. When this happens, return BLK_STS_TARGET
so the request isn't retried.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/pci.c | 69 +++++++++++++++++------------------------
 1 file changed, 29 insertions(+), 40 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 72f623999ba5..3f2bd1efe076 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -229,11 +229,10 @@ struct nvme_iod {
 	bool use_sgl;
 	int aborted;
 	int npages;		/* In the PRP list. 0 means small pool in use */
-	int nents;		/* Used in scatterlist */
 	dma_addr_t first_dma;
 	unsigned int dma_len;	/* length of single DMA segment mapping */
 	dma_addr_t meta_dma;
-	struct scatterlist *sg;
+	struct sg_table sgt;
 };
 
 static inline unsigned int nvme_dbbuf_size(struct nvme_dev *dev)
@@ -531,7 +530,7 @@ static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx)
 static void **nvme_pci_iod_list(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
-	return (void **)(iod->sg + blk_rq_nr_phys_segments(req));
+	return (void **)(iod->sgt.sgl + blk_rq_nr_phys_segments(req));
 }
 
 static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req)
@@ -583,17 +582,6 @@ static void nvme_free_sgls(struct nvme_dev *dev, struct request *req)
 	}
 }
 
-static void nvme_unmap_sg(struct nvme_dev *dev, struct request *req)
-{
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
-
-	if (is_pci_p2pdma_page(sg_page(iod->sg)))
-		pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents,
-				    rq_dma_dir(req));
-	else
-		dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req));
-}
-
 static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
@@ -604,9 +592,10 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
 		return;
 	}
 
-	WARN_ON_ONCE(!iod->nents);
+	WARN_ON_ONCE(!iod->sgt.nents);
+
+	dma_unmap_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), 0);
 
-	nvme_unmap_sg(dev, req);
 	if (iod->npages == 0)
 		dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0],
 			      iod->first_dma);
@@ -614,7 +603,7 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
 		nvme_free_sgls(dev, req);
 	else
 		nvme_free_prps(dev, req);
-	mempool_free(iod->sg, dev->iod_mempool);
+	mempool_free(iod->sgt.sgl, dev->iod_mempool);
 }
 
 static void nvme_print_sgl(struct scatterlist *sgl, int nents)
@@ -637,7 +626,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct dma_pool *pool;
 	int length = blk_rq_payload_bytes(req);
-	struct scatterlist *sg = iod->sg;
+	struct scatterlist *sg = iod->sgt.sgl;
 	int dma_len = sg_dma_len(sg);
 	u64 dma_addr = sg_dma_address(sg);
 	int offset = dma_addr & (NVME_CTRL_PAGE_SIZE - 1);
@@ -710,16 +699,16 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
 		dma_len = sg_dma_len(sg);
 	}
 done:
-	cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sg));
+	cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sgt.sgl));
 	cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma);
 	return BLK_STS_OK;
 free_prps:
 	nvme_free_prps(dev, req);
 	return BLK_STS_RESOURCE;
 bad_sgl:
-	WARN(DO_ONCE(nvme_print_sgl, iod->sg, iod->nents),
+	WARN(DO_ONCE(nvme_print_sgl, iod->sgt.sgl, iod->sgt.nents),
 			"Invalid SGL for payload:%d nents:%d\n",
-			blk_rq_payload_bytes(req), iod->nents);
+			blk_rq_payload_bytes(req), iod->sgt.nents);
 	return BLK_STS_IOERR;
 }
 
@@ -745,12 +734,13 @@ static void nvme_pci_sgl_set_seg(struct nvme_sgl_desc *sge,
 }
 
 static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
-		struct request *req, struct nvme_rw_command *cmd, int entries)
+		struct request *req, struct nvme_rw_command *cmd)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct dma_pool *pool;
 	struct nvme_sgl_desc *sg_list;
-	struct scatterlist *sg = iod->sg;
+	struct scatterlist *sg = iod->sgt.sgl;
+	int entries = iod->sgt.nents;
 	dma_addr_t sgl_dma;
 	int i = 0;
 
@@ -848,7 +838,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	blk_status_t ret = BLK_STS_RESOURCE;
-	int nr_mapped;
+	int rc;
 
 	if (blk_rq_nr_phys_segments(req) == 1) {
 		struct bio_vec bv = req_bvec(req);
@@ -866,26 +856,25 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
 	}
 
 	iod->dma_len = 0;
-	iod->sg = mempool_alloc(dev->iod_mempool, GFP_ATOMIC);
-	if (!iod->sg)
+	iod->sgt.sgl = mempool_alloc(dev->iod_mempool, GFP_ATOMIC);
+	if (!iod->sgt.sgl)
 		return BLK_STS_RESOURCE;
-	sg_init_table(iod->sg, blk_rq_nr_phys_segments(req));
-	iod->nents = blk_rq_map_sg(req->q, req, iod->sg);
-	if (!iod->nents)
+	sg_init_table(iod->sgt.sgl, blk_rq_nr_phys_segments(req));
+	iod->sgt.orig_nents = blk_rq_map_sg(req->q, req, iod->sgt.sgl);
+	if (!iod->sgt.orig_nents)
 		goto out_free_sg;
 
-	if (is_pci_p2pdma_page(sg_page(iod->sg)))
-		nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg,
-				iod->nents, rq_dma_dir(req), DMA_ATTR_NO_WARN);
-	else
-		nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
-					     rq_dma_dir(req), DMA_ATTR_NO_WARN);
-	if (!nr_mapped)
+	rc = dma_map_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req),
+			     DMA_ATTR_NO_WARN);
+	if (rc) {
+		if (rc == -EREMOTEIO)
+			ret = BLK_STS_TARGET;
 		goto out_free_sg;
+	}
 
 	iod->use_sgl = nvme_pci_use_sgls(dev, req);
 	if (iod->use_sgl)
-		ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw, nr_mapped);
+		ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw);
 	else
 		ret = nvme_pci_setup_prps(dev, req, &cmnd->rw);
 	if (ret != BLK_STS_OK)
@@ -893,9 +882,9 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
 	return BLK_STS_OK;
 
 out_unmap_sg:
-	nvme_unmap_sg(dev, req);
+	dma_unmap_sgtable(dev->dev, &iod->sgt, rq_dma_dir(req), 0);
 out_free_sg:
-	mempool_free(iod->sg, dev->iod_mempool);
+	mempool_free(iod->sgt.sgl, dev->iod_mempool);
 	return ret;
 }
 
@@ -928,7 +917,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	iod->aborted = 0;
 	iod->npages = -1;
-	iod->nents = 0;
+	iod->sgt.nents = 0;
 
 	/*
 	 * We should not need to do this, but we're still using this to
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 12/23] RDMA/core: introduce ib_dma_pci_p2p_dma_supported()
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (10 preceding siblings ...)
  2021-11-17 21:53 ` [PATCH v4 11/23] nvme-pci: convert to using dma_map_sgtable() Logan Gunthorpe
@ 2021-11-17 21:53 ` Logan Gunthorpe
  2021-11-17 21:54 ` [PATCH v4 13/23] RDMA/rw: drop pci_p2pdma_[un]map_sg() Logan Gunthorpe
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:53 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Jason Gunthorpe, Max Gurtovoy

Introduce the helper function ib_dma_pci_p2p_dma_supported() to check
if a given ib_device can be used in P2PDMA transfers. This ensures
the ib_device is not using virt_dma and also that the underlying
dma_device supports P2PDMA.

Use the new helper in nvmet-rdma to replace the existing check for
ib_uses_virt_dma(). Adding the dma_pci_p2pdma_supported() check allows
switching away from pci_p2pdma_[un]map_sg().
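
A minimal caller-side sketch (structure names follow the nvmet-rdma hunk
in the diff below): only advertise a P2PDMA client device when the
ib_device's DMA layer can actually map P2PDMA pages.

	if (ib_dma_pci_p2p_dma_supported(ndev->device))
		r->req.p2p_client = &ndev->device->dev;
	else
		r->req.p2p_client = NULL;	/* fall back to host memory */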

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/rdma.c |  2 +-
 include/rdma/ib_verbs.h    | 11 +++++++++++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 1deb4043e242..22519739a874 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -415,7 +415,7 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
 	if (ib_dma_mapping_error(ndev->device, r->send_sge.addr))
 		goto out_free_rsp;
 
-	if (!ib_uses_virt_dma(ndev->device))
+	if (ib_dma_pci_p2p_dma_supported(ndev->device))
 		r->req.p2p_client = &ndev->device->dev;
 	r->send_sge.length = sizeof(*r->req.cqe);
 	r->send_sge.lkey = ndev->pd->local_dma_lkey;
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 6e9ad656ecb7..6355a0d5fd00 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -4003,6 +4003,17 @@ static inline bool ib_uses_virt_dma(struct ib_device *dev)
 	return IS_ENABLED(CONFIG_INFINIBAND_VIRT_DMA) && !dev->dma_device;
 }
 
+/*
+ * Check if an IB device's underlying DMA mapping supports P2PDMA transfers.
+ */
+static inline bool ib_dma_pci_p2p_dma_supported(struct ib_device *dev)
+{
+	if (ib_uses_virt_dma(dev))
+		return false;
+
+	return dma_pci_p2pdma_supported(dev->dma_device);
+}
+
 /**
  * ib_dma_mapping_error - check a DMA addr for error
  * @dev: The device for which the dma_addr was created
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 13/23] RDMA/rw: drop pci_p2pdma_[un]map_sg()
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (11 preceding siblings ...)
  2021-11-17 21:53 ` [PATCH v4 12/23] RDMA/core: introduce ib_dma_pci_p2p_dma_supported() Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-11-17 21:54 ` [PATCH v4 14/23] PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg() Logan Gunthorpe
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Jason Gunthorpe

dma_map_sg() now supports P2PDMA pages directly, so pci_p2pdma_map_sg()
is no longer necessary and may be dropped. This also makes the
rdma_rw_[un]map_sg() helpers redundant. Remove them all.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/infiniband/core/rw.c | 45 ++++++++----------------------------
 1 file changed, 9 insertions(+), 36 deletions(-)

diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
index 5a3bd41b331c..d4517b68d1ca 100644
--- a/drivers/infiniband/core/rw.c
+++ b/drivers/infiniband/core/rw.c
@@ -273,33 +273,6 @@ static int rdma_rw_init_single_wr(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 	return 1;
 }
 
-static void rdma_rw_unmap_sg(struct ib_device *dev, struct scatterlist *sg,
-			     u32 sg_cnt, enum dma_data_direction dir)
-{
-	if (is_pci_p2pdma_page(sg_page(sg)))
-		pci_p2pdma_unmap_sg(dev->dma_device, sg, sg_cnt, dir);
-	else
-		ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
-}
-
-static int rdma_rw_map_sgtable(struct ib_device *dev, struct sg_table *sgt,
-			       enum dma_data_direction dir)
-{
-	int nents;
-
-	if (is_pci_p2pdma_page(sg_page(sgt->sgl))) {
-		if (WARN_ON_ONCE(ib_uses_virt_dma(dev)))
-			return 0;
-		nents = pci_p2pdma_map_sg(dev->dma_device, sgt->sgl,
-					  sgt->orig_nents, dir);
-		if (!nents)
-			return -EIO;
-		sgt->nents = nents;
-		return 0;
-	}
-	return ib_dma_map_sgtable_attrs(dev, sgt, dir, 0);
-}
-
 /**
  * rdma_rw_ctx_init - initialize a RDMA READ/WRITE context
  * @ctx:	context to initialize
@@ -326,7 +299,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u32 port_num,
 	};
 	int ret;
 
-	ret = rdma_rw_map_sgtable(dev, &sgt, dir);
+	ret = ib_dma_map_sgtable_attrs(dev, &sgt, dir, 0);
 	if (ret)
 		return ret;
 	sg_cnt = sgt.nents;
@@ -365,7 +338,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u32 port_num,
 	return ret;
 
 out_unmap_sg:
-	rdma_rw_unmap_sg(dev, sgt.sgl, sgt.orig_nents, dir);
+	ib_dma_unmap_sgtable_attrs(dev, &sgt, dir, 0);
 	return ret;
 }
 EXPORT_SYMBOL(rdma_rw_ctx_init);
@@ -413,12 +386,12 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		return -EINVAL;
 	}
 
-	ret = rdma_rw_map_sgtable(dev, &sgt, dir);
+	ret = ib_dma_map_sgtable_attrs(dev, &sgt, dir, 0);
 	if (ret)
 		return ret;
 
 	if (prot_sg_cnt) {
-		ret = rdma_rw_map_sgtable(dev, &prot_sgt, dir);
+		ret = ib_dma_map_sgtable_attrs(dev, &prot_sgt, dir, 0);
 		if (ret)
 			goto out_unmap_sg;
 	}
@@ -485,9 +458,9 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 	kfree(ctx->reg);
 out_unmap_prot_sg:
 	if (prot_sgt.nents)
-		rdma_rw_unmap_sg(dev, prot_sgt.sgl, prot_sgt.orig_nents, dir);
+		ib_dma_unmap_sgtable_attrs(dev, &prot_sgt, dir, 0);
 out_unmap_sg:
-	rdma_rw_unmap_sg(dev, sgt.sgl, sgt.orig_nents, dir);
+	ib_dma_unmap_sgtable_attrs(dev, &sgt, dir, 0);
 	return ret;
 }
 EXPORT_SYMBOL(rdma_rw_ctx_signature_init);
@@ -620,7 +593,7 @@ void rdma_rw_ctx_destroy(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		break;
 	}
 
-	rdma_rw_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+	ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
 }
 EXPORT_SYMBOL(rdma_rw_ctx_destroy);
 
@@ -648,8 +621,8 @@ void rdma_rw_ctx_destroy_signature(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 	kfree(ctx->reg);
 
 	if (prot_sg_cnt)
-		rdma_rw_unmap_sg(qp->pd->device, prot_sg, prot_sg_cnt, dir);
-	rdma_rw_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+		ib_dma_unmap_sg(qp->pd->device, prot_sg, prot_sg_cnt, dir);
+	ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
 }
 EXPORT_SYMBOL(rdma_rw_ctx_destroy_signature);
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 14/23] PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg()
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (12 preceding siblings ...)
  2021-11-17 21:54 ` [PATCH v4 13/23] RDMA/rw: drop pci_p2pdma_[un]map_sg() Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-11-17 21:54 ` [PATCH v4 15/23] mm: introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages Logan Gunthorpe
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Bjorn Helgaas, Jason Gunthorpe, Max Gurtovoy

This interface is superseded by support in dma_map_sg() which now supports
heterogeneous scatterlists. There are no longer any users, so remove it.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/pci/p2pdma.c       | 65 --------------------------------------
 include/linux/pci-p2pdma.h | 27 ----------------
 2 files changed, 92 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 6ad3a8816677..563e9be9599e 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -879,71 +879,6 @@ enum pci_p2pdma_map_type pci_p2pdma_map_type(struct dev_pagemap *pgmap,
 	return type;
 }
 
-static int __pci_p2pdma_map_sg(struct pci_p2pdma_pagemap *p2p_pgmap,
-		struct device *dev, struct scatterlist *sg, int nents)
-{
-	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nents, i) {
-		s->dma_address = sg_phys(s) + p2p_pgmap->bus_offset;
-		sg_dma_len(s) = s->length;
-	}
-
-	return nents;
-}
-
-/**
- * pci_p2pdma_map_sg_attrs - map a PCI peer-to-peer scatterlist for DMA
- * @dev: device doing the DMA request
- * @sg: scatter list to map
- * @nents: elements in the scatterlist
- * @dir: DMA direction
- * @attrs: DMA attributes passed to dma_map_sg() (if called)
- *
- * Scatterlists mapped with this function should be unmapped using
- * pci_p2pdma_unmap_sg_attrs().
- *
- * Returns the number of SG entries mapped or 0 on error.
- */
-int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir, unsigned long attrs)
-{
-	struct pci_p2pdma_pagemap *p2p_pgmap =
-		to_p2p_pgmap(sg_page(sg)->pgmap);
-
-	switch (pci_p2pdma_map_type(sg_page(sg)->pgmap, dev)) {
-	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
-		return dma_map_sg_attrs(dev, sg, nents, dir, attrs);
-	case PCI_P2PDMA_MAP_BUS_ADDR:
-		return __pci_p2pdma_map_sg(p2p_pgmap, dev, sg, nents);
-	default:
-		return 0;
-	}
-}
-EXPORT_SYMBOL_GPL(pci_p2pdma_map_sg_attrs);
-
-/**
- * pci_p2pdma_unmap_sg_attrs - unmap a PCI peer-to-peer scatterlist that was
- *	mapped with pci_p2pdma_map_sg()
- * @dev: device doing the DMA request
- * @sg: scatter list to map
- * @nents: number of elements returned by pci_p2pdma_map_sg()
- * @dir: DMA direction
- * @attrs: DMA attributes passed to dma_unmap_sg() (if called)
- */
-void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir, unsigned long attrs)
-{
-	enum pci_p2pdma_map_type map_type;
-
-	map_type = pci_p2pdma_map_type(sg_page(sg)->pgmap, dev);
-
-	if (map_type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE)
-		dma_unmap_sg_attrs(dev, sg, nents, dir, attrs);
-}
-EXPORT_SYMBOL_GPL(pci_p2pdma_unmap_sg_attrs);
-
 /**
  * pci_p2pdma_map_segment - map an sg segment determining the mapping type
  * @state: State structure that should be declared outside of the for_each_sg()
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
index 8318a97c9c61..2c07aa6b7665 100644
--- a/include/linux/pci-p2pdma.h
+++ b/include/linux/pci-p2pdma.h
@@ -30,10 +30,6 @@ struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev,
 					 unsigned int *nents, u32 length);
 void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl);
 void pci_p2pmem_publish(struct pci_dev *pdev, bool publish);
-int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir, unsigned long attrs);
-void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir, unsigned long attrs);
 int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev,
 			    bool *use_p2pdma);
 ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev,
@@ -83,17 +79,6 @@ static inline void pci_p2pmem_free_sgl(struct pci_dev *pdev,
 static inline void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
 {
 }
-static inline int pci_p2pdma_map_sg_attrs(struct device *dev,
-		struct scatterlist *sg, int nents, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-	return 0;
-}
-static inline void pci_p2pdma_unmap_sg_attrs(struct device *dev,
-		struct scatterlist *sg, int nents, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-}
 static inline int pci_p2pdma_enable_store(const char *page,
 		struct pci_dev **p2p_dev, bool *use_p2pdma)
 {
@@ -119,16 +104,4 @@ static inline struct pci_dev *pci_p2pmem_find(struct device *client)
 	return pci_p2pmem_find_many(&client, 1);
 }
 
-static inline int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg,
-				    int nents, enum dma_data_direction dir)
-{
-	return pci_p2pdma_map_sg_attrs(dev, sg, nents, dir, 0);
-}
-
-static inline void pci_p2pdma_unmap_sg(struct device *dev,
-		struct scatterlist *sg, int nents, enum dma_data_direction dir)
-{
-	pci_p2pdma_unmap_sg_attrs(dev, sg, nents, dir, 0);
-}
-
 #endif /* _LINUX_PCI_P2P_H */
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 15/23] mm: introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (13 preceding siblings ...)
  2021-11-17 21:54 ` [PATCH v4 14/23] PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg() Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-11-17 21:54 ` [PATCH v4 16/23] iov_iter: introduce iov_iter_get_pages_[alloc_]flags() Logan Gunthorpe
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

Callers that can accept PCI P2PDMA pages can now set FOLL_PCI_P2PDMA to
allow obtaining them from GUP. If a caller does not set this flag and
tries to map P2PDMA pages, the GUP call will fail.

This is implemented by failing if PCI P2PDMA pages are found when
FOLL_PCI_P2PDMA is not set. The check is only done when pte_devmap()
is set.

FOLL_PCI_P2PDMA cannot be set if FOLL_LONGTERM is set.
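
As a caller-side illustration (a sketch only, not part of this patch;
user_addr is a hypothetical userspace address), a driver that can handle
P2PDMA pages would opt in like this:

	struct page *pages[16];
	int nr;

	/* Opt in to receiving PCI P2PDMA pages. Without FOLL_PCI_P2PDMA,
	 * GUP fails with -EREMOTEIO when it hits a P2PDMA page. */
	nr = get_user_pages_fast(user_addr, 16,
				 FOLL_WRITE | FOLL_PCI_P2PDMA, pages);
	if (nr < 0)
		return nr;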

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 include/linux/mm.h |  1 +
 mm/gup.c           | 22 +++++++++++++++++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a7e4a9e7d807..65cb27cebbab 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2971,6 +2971,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
 #define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
 #define FOLL_FAST_ONLY	0x80000	/* gup_fast: prevent fall-back to slow gup */
+#define FOLL_PCI_P2PDMA	0x100000 /* allow returning PCI P2PDMA pages */
 
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
diff --git a/mm/gup.c b/mm/gup.c
index 2c51e9748a6a..c31461c3d256 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -527,6 +527,12 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 			page = pte_page(pte);
 		else
 			goto no_page;
+
+		if (unlikely(!(flags & FOLL_PCI_P2PDMA) &&
+			     is_pci_p2pdma_page(page))) {
+			page = ERR_PTR(-EREMOTEIO);
+			goto out;
+		}
 	} else if (unlikely(!page)) {
 		if (flags & FOLL_DUMP) {
 			/* Avoid special (like zero) pages in core dumps */
@@ -980,6 +986,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
 		return -EOPNOTSUPP;
 
+	if ((gup_flags & FOLL_LONGTERM) && (gup_flags & FOLL_PCI_P2PDMA))
+		return -EOPNOTSUPP;
+
 	if (vma_is_secretmem(vma))
 		return -EFAULT;
 
@@ -2297,6 +2306,10 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
+		if (unlikely(pte_devmap(pte) && !(flags & FOLL_PCI_P2PDMA) &&
+			     is_pci_p2pdma_page(page)))
+			goto pte_unmap;
+
 		head = try_grab_compound_head(page, 1, flags);
 		if (!head)
 			goto pte_unmap;
@@ -2374,6 +2387,12 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 			undo_dev_pagemap(nr, nr_start, flags, pages);
 			break;
 		}
+
+		if (!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)) {
+			undo_dev_pagemap(nr, nr_start, flags, pages);
+			break;
+		}
+
 		SetPageReferenced(page);
 		pages[*nr] = page;
 		if (unlikely(!try_grab_page(page, flags))) {
@@ -2842,7 +2861,8 @@ static int internal_get_user_pages_fast(unsigned long start,
 
 	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
 				       FOLL_FORCE | FOLL_PIN | FOLL_GET |
-				       FOLL_FAST_ONLY | FOLL_NOFAULT)))
+				       FOLL_FAST_ONLY | FOLL_NOFAULT |
+				       FOLL_PCI_P2PDMA)))
 		return -EINVAL;
 
 	if (gup_flags & FOLL_PIN)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 16/23] iov_iter: introduce iov_iter_get_pages_[alloc_]flags()
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (14 preceding siblings ...)
  2021-11-17 21:54 ` [PATCH v4 15/23] mm: introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-12-21  9:04   ` Christoph Hellwig
  2021-11-17 21:54 ` [PATCH v4 17/23] block: add check when merging zone device pages Logan Gunthorpe
                   ` (6 subsequent siblings)
  22 siblings, 1 reply; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

Add iov_iter_get_pages_flags() and iov_iter_get_pages_alloc_flags()
which take a flags argument that is passed to get_user_pages_fast().

This is so that FOLL_PCI_P2PDMA can be passed when appropriate.
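
A minimal sketch of a caller using the new interface (iter, pages and
nr_pages as in the block-layer callers later in the series;
queue_supports_p2pdma is a hypothetical capability check):

	unsigned int gup_flags = 0;
	ssize_t size;
	size_t offset;

	/* Only request P2PDMA pages when the consumer can handle them. */
	if (queue_supports_p2pdma)
		gup_flags |= FOLL_PCI_P2PDMA;

	size = iov_iter_get_pages_flags(iter, pages, LONG_MAX, nr_pages,
					&offset, gup_flags);
	if (size <= 0)
		return size ? size : -EFAULT;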

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 include/linux/uio.h | 21 +++++++++++++++++----
 lib/iov_iter.c      | 15 +++++++--------
 2 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 6350354f97e9..4c6e64d2f7dd 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -243,10 +243,23 @@ void iov_iter_pipe(struct iov_iter *i, unsigned int direction, struct pipe_inode
 void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count);
 void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *xarray,
 		     loff_t start, size_t count);
-ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages,
-			size_t maxsize, unsigned maxpages, size_t *start);
-ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, struct page ***pages,
-			size_t maxsize, size_t *start);
+ssize_t iov_iter_get_pages_flags(struct iov_iter *i, struct page **pages,
+		size_t maxsize, unsigned maxpages, size_t *start,
+		unsigned int gup_flags);
+ssize_t iov_iter_get_pages_alloc_flags(struct iov_iter *i,
+		struct page ***pages, size_t maxsize, size_t *start,
+		unsigned int gup_flags);
+static inline ssize_t iov_iter_get_pages(struct iov_iter *i,
+		struct page **pages, size_t maxsize, unsigned maxpages,
+		size_t *start)
+{
+	return iov_iter_get_pages_flags(i, pages, maxsize, maxpages, start, 0);
+}
+static inline ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
+		struct page ***pages, size_t maxsize, size_t *start)
+{
+	return iov_iter_get_pages_alloc_flags(i, pages, maxsize, start, 0);
+}
 int iov_iter_npages(const struct iov_iter *i, int maxpages);
 void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state);
 
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 66a740e6e153..0d557e0e82b2 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1515,9 +1515,9 @@ static struct page *first_bvec_segment(const struct iov_iter *i,
 	return page;
 }
 
-ssize_t iov_iter_get_pages(struct iov_iter *i,
+ssize_t iov_iter_get_pages_flags(struct iov_iter *i,
 		   struct page **pages, size_t maxsize, unsigned maxpages,
-		   size_t *start)
+		   size_t *start, unsigned int gup_flags)
 {
 	size_t len;
 	int n, res;
@@ -1528,7 +1528,6 @@ ssize_t iov_iter_get_pages(struct iov_iter *i,
 		return 0;
 
 	if (likely(iter_is_iovec(i))) {
-		unsigned int gup_flags = 0;
 		unsigned long addr;
 
 		if (iov_iter_rw(i) != WRITE)
@@ -1558,7 +1557,7 @@ ssize_t iov_iter_get_pages(struct iov_iter *i,
 		return iter_xarray_get_pages(i, pages, maxsize, maxpages, start);
 	return -EFAULT;
 }
-EXPORT_SYMBOL(iov_iter_get_pages);
+EXPORT_SYMBOL(iov_iter_get_pages_flags);
 
 static struct page **get_pages_array(size_t n)
 {
@@ -1640,9 +1639,9 @@ static ssize_t iter_xarray_get_pages_alloc(struct iov_iter *i,
 	return actual;
 }
 
-ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
+ssize_t iov_iter_get_pages_alloc_flags(struct iov_iter *i,
 		   struct page ***pages, size_t maxsize,
-		   size_t *start)
+		   size_t *start, unsigned int gup_flags)
 {
 	struct page **p;
 	size_t len;
@@ -1654,7 +1653,6 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
 		return 0;
 
 	if (likely(iter_is_iovec(i))) {
-		unsigned int gup_flags = 0;
 		unsigned long addr;
 
 		if (iov_iter_rw(i) != WRITE)
@@ -1667,6 +1665,7 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
 		p = get_pages_array(n);
 		if (!p)
 			return -ENOMEM;
+
 		res = get_user_pages_fast(addr, n, gup_flags, p);
 		if (unlikely(res <= 0)) {
 			kvfree(p);
@@ -1694,7 +1693,7 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
 		return iter_xarray_get_pages_alloc(i, pages, maxsize, start);
 	return -EFAULT;
 }
-EXPORT_SYMBOL(iov_iter_get_pages_alloc);
+EXPORT_SYMBOL(iov_iter_get_pages_alloc_flags);
 
 size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
 			       struct iov_iter *i)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 17/23] block: add check when merging zone device pages
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (15 preceding siblings ...)
  2021-11-17 21:54 ` [PATCH v4 16/23] iov_iter: introduce iov_iter_get_pages_[alloc_]flags() Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-12-21  9:05   ` Christoph Hellwig
  2021-11-17 21:54 ` [PATCH v4 18/23] lib/scatterlist: " Logan Gunthorpe
                   ` (5 subsequent siblings)
  22 siblings, 1 reply; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

Consecutive zone device pages should not be merged into the same sgl
or bvec segment with other types of pages or if they belong to different
pgmaps. Otherwise getting the pgmap of a given segment is not possible
without scanning the entire segment. The new helper returns true if
either both pages are not zone device pages or both are zone device
pages with the same pgmap.

Add a helper to determine if zone device pages are mergeable and use
this helper in page_is_mergeable().
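
A minimal sketch of the combined check a merge path ends up doing
(pfn contiguity plus the new pgmap rule; the lib/scatterlist patch later
in the series factors out exactly this kind of helper):

	static bool can_merge(struct page *prev, struct page *next)
	{
		/* must be physically contiguous ... */
		if (page_to_pfn(next) != page_to_pfn(prev) + 1)
			return false;
		/* ... and either both normal pages or both from the same pgmap */
		return zone_device_pages_are_mergeable(prev, next);
	}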

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 block/bio.c        |  2 ++
 include/linux/mm.h | 23 +++++++++++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/block/bio.c b/block/bio.c
index 15ab0d6d1c06..f4e2e30d7a24 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -803,6 +803,8 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
 		return false;
 	if (xen_domain() && !xen_biovec_phys_mergeable(bv, page))
 		return false;
+	if (!zone_device_pages_are_mergeable(bv->bv_page, page))
+		return false;
 
 	*same_page = ((vec_end_addr & PAGE_MASK) == page_addr);
 	if (*same_page)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 65cb27cebbab..3367d936b256 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1118,6 +1118,24 @@ static inline bool is_zone_device_page(const struct page *page)
 {
 	return page_zonenum(page) == ZONE_DEVICE;
 }
+
+/*
+ * Consecutive zone device pages should not be merged into the same sgl
+ * or bvec segment with other types of pages or if they belong to different
+ * pgmaps. Otherwise getting the pgmap of a given segment is not possible
+ * without scanning the entire segment. This helper returns true either if
+ * both pages are not zone device pages or both pages are zone device pages
+ * with the same pgmap.
+ */
+static inline bool zone_device_pages_are_mergeable(const struct page *a,
+						   const struct page *b)
+{
+	if (is_zone_device_page(a) != is_zone_device_page(b))
+		return false;
+	if (!is_zone_device_page(a))
+		return true;
+	return a->pgmap == b->pgmap;
+}
 extern void memmap_init_zone_device(struct zone *, unsigned long,
 				    unsigned long, struct dev_pagemap *);
 #else
@@ -1125,6 +1143,11 @@ static inline bool is_zone_device_page(const struct page *page)
 {
 	return false;
 }
+static inline bool zone_device_pages_are_mergeable(const struct page *a,
+						   const struct page *b)
+{
+	return true;
+}
 #endif
 
 static inline bool is_zone_movable_page(const struct page *page)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 18/23] lib/scatterlist: add check when merging zone device pages
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (16 preceding siblings ...)
  2021-11-17 21:54 ` [PATCH v4 17/23] block: add check when merging zone device pages Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-11-17 21:54 ` [PATCH v4 19/23] block: set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages() Logan Gunthorpe
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

Consecutive zone device pages should not be merged into the same sgl
or bvec segment with other types of pages or if they belong to different
pgmaps. Otherwise getting the pgmap of a given segment is not possible
without scanning the entire segment. The new helper returns true if
either both pages are not zone device pages or both are zone device
pages with the same pgmap.

Factor out the check for page mergeability into a pages_are_mergeable()
helper and add a check with zone_device_pages_are_mergeable().

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 lib/scatterlist.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index d5e82e4a57ad..dc473010235c 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -410,6 +410,15 @@ static struct scatterlist *get_next_sg(struct sg_append_table *table,
 	return new_sg;
 }
 
+static bool pages_are_mergeable(struct page *a, struct page *b)
+{
+	if (page_to_pfn(a) != page_to_pfn(b) + 1)
+		return false;
+	if (!zone_device_pages_are_mergeable(a, b))
+		return false;
+	return true;
+}
+
 /**
  * sg_alloc_append_table_from_pages - Allocate and initialize an append sg
  *                                    table from an array of pages
@@ -447,6 +456,7 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
 	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
 	unsigned int added_nents = 0;
 	struct scatterlist *s = sgt_append->prv;
+	struct page *last_pg;
 
 	/*
 	 * The algorithm below requires max_segment to be aligned to PAGE_SIZE
@@ -460,21 +470,17 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
 		return -EOPNOTSUPP;
 
 	if (sgt_append->prv) {
-		unsigned long paddr =
-			(page_to_pfn(sg_page(sgt_append->prv)) * PAGE_SIZE +
-			 sgt_append->prv->offset + sgt_append->prv->length) /
-			PAGE_SIZE;
-
 		if (WARN_ON(offset))
 			return -EINVAL;
 
 		/* Merge contiguous pages into the last SG */
 		prv_len = sgt_append->prv->length;
-		while (n_pages && page_to_pfn(pages[0]) == paddr) {
+		last_pg = sg_page(sgt_append->prv);
+		while (n_pages && pages_are_mergeable(last_pg, pages[0])) {
 			if (sgt_append->prv->length + PAGE_SIZE > max_segment)
 				break;
 			sgt_append->prv->length += PAGE_SIZE;
-			paddr++;
+			last_pg = pages[0];
 			pages++;
 			n_pages--;
 		}
@@ -488,7 +494,7 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
 	for (i = 1; i < n_pages; i++) {
 		seg_len += PAGE_SIZE;
 		if (seg_len >= max_segment ||
-		    page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1) {
+		    !pages_are_mergeable(pages[i], pages[i - 1])) {
 			chunks++;
 			seg_len = 0;
 		}
@@ -504,8 +510,7 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
 		for (j = cur_page + 1; j < n_pages; j++) {
 			seg_len += PAGE_SIZE;
 			if (seg_len >= max_segment ||
-			    page_to_pfn(pages[j]) !=
-			    page_to_pfn(pages[j - 1]) + 1)
+			    !pages_are_mergeable(pages[j], pages[j - 1]))
 				break;
 		}
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 19/23] block: set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages()
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (17 preceding siblings ...)
  2021-11-17 21:54 ` [PATCH v4 18/23] lib/scatterlist: " Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-11-17 21:54 ` [PATCH v4 20/23] block: set FOLL_PCI_P2PDMA in bio_map_user_iov() Logan Gunthorpe
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
iov_iter_get_pages_flags(). This allows PCI P2PDMA pages to be passed
from userspace and enables the O_DIRECT path in iomap-based filesystems
as well as direct I/O to block devices.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 block/bio.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index f4e2e30d7a24..f0a17c7f41c3 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1096,6 +1096,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
 	struct page **pages = (struct page **)bv;
 	bool same_page = false;
+	unsigned int flags = 0;
 	ssize_t size, left;
 	unsigned len, i;
 	size_t offset;
@@ -1108,7 +1109,12 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
 	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
 
-	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
+	if (bio->bi_bdev && bio->bi_bdev->bd_disk &&
+	    blk_queue_pci_p2pdma(bio->bi_bdev->bd_disk->queue))
+		flags |= FOLL_PCI_P2PDMA;
+
+	size = iov_iter_get_pages_flags(iter, pages, LONG_MAX, nr_pages,
+					&offset, flags);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 20/23] block: set FOLL_PCI_P2PDMA in bio_map_user_iov()
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (18 preceding siblings ...)
  2021-11-17 21:54 ` [PATCH v4 19/23] block: set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages() Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-11-17 21:54 ` [PATCH v4 21/23] mm: use custom page_free for P2PDMA pages Logan Gunthorpe
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

When a request's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
iov_iter_get_pages_alloc_flags(). This allows PCI P2PDMA pages to be
passed from userspace and enables NVMe passthrough requests to use
P2PDMA pages.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 block/blk-map.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 4526adde0156..7508448e290c 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -234,6 +234,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		gfp_t gfp_mask)
 {
 	unsigned int max_sectors = queue_max_hw_sectors(rq->q);
+	unsigned int flags = 0;
 	struct bio *bio;
 	int ret;
 	int j;
@@ -246,13 +247,17 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		return -ENOMEM;
 	bio->bi_opf |= req_op(rq);
 
+	if (blk_queue_pci_p2pdma(rq->q))
+		flags |= FOLL_PCI_P2PDMA;
+
 	while (iov_iter_count(iter)) {
 		struct page **pages;
 		ssize_t bytes;
 		size_t offs, added = 0;
 		int npages;
 
-		bytes = iov_iter_get_pages_alloc(iter, &pages, LONG_MAX, &offs);
+		bytes = iov_iter_get_pages_alloc_flags(iter, &pages, LONG_MAX,
+						       &offs, flags);
 		if (unlikely(bytes <= 0)) {
 			ret = bytes ? bytes : -EFAULT;
 			goto out_unmap;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 21/23] mm: use custom page_free for P2PDMA pages
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (19 preceding siblings ...)
  2021-11-17 21:54 ` [PATCH v4 20/23] block: set FOLL_PCI_P2PDMA in bio_map_user_iov() Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-12-21  9:06   ` Christoph Hellwig
  2021-11-17 21:54 ` [PATCH v4 22/23] PCI/P2PDMA: Introduce pci_mmap_p2pmem() Logan Gunthorpe
  2021-11-17 21:54 ` [PATCH v4 23/23] nvme-pci: allow mmaping the CMB in userspace Logan Gunthorpe
  22 siblings, 1 reply; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

When P2PDMA pages are passed to userspace, they need to be reference
counted properly and returned to the genalloc pool once their reference
count drops back to 1. This is accomplished with the existing
DEV_PAGEMAP_OPS and the .page_free() operation.

Change CONFIG_PCI_P2PDMA to select CONFIG_DEV_PAGEMAP_OPS and add
MEMORY_DEVICE_PCI_P2PDMA to page_is_devmap_managed(),
devmap_managed_enable_[put|get]() and free_devmap_managed_page().

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/pci/Kconfig  |  1 +
 drivers/pci/p2pdma.c | 13 +++++++++++++
 include/linux/mm.h   |  1 +
 mm/memremap.c        | 12 +++++++++---
 4 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index 95f29601a4df..da53799cddab 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -170,6 +170,7 @@ config PCI_P2PDMA
 	#
 	select NEED_SG_DMA_BUS_ADDR_FLAG
 	select GENERIC_ALLOCATOR
+	select DEV_PAGEMAP_OPS
 	help
 	  Enableѕ drivers to do PCI peer-to-peer transactions to and from
 	  BARs that are exposed in other devices that are the part of
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 563e9be9599e..16992b0f0c36 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -101,6 +101,18 @@ static const struct attribute_group p2pmem_group = {
 	.name = "p2pmem",
 };
 
+static void p2pdma_page_free(struct page *page)
+{
+	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page->pgmap);
+
+	gen_pool_free(pgmap->provider->p2pdma->pool,
+		      (uintptr_t)page_to_virt(page), PAGE_SIZE);
+}
+
+static const struct dev_pagemap_ops p2pdma_pgmap_ops = {
+	.page_free = p2pdma_page_free,
+};
+
 static void pci_p2pdma_release(void *data)
 {
 	struct pci_dev *pdev = data;
@@ -198,6 +210,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 	pgmap->range.end = pgmap->range.start + size - 1;
 	pgmap->nr_range = 1;
 	pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;
+	pgmap->ops = &p2pdma_pgmap_ops;
 
 	p2p_pgmap->provider = pdev;
 	p2p_pgmap->bus_offset = pci_bus_address(pdev, bar) -
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3367d936b256..f26ea7e1fc74 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1168,6 +1168,7 @@ static inline bool page_is_devmap_managed(struct page *page)
 	switch (page->pgmap->type) {
 	case MEMORY_DEVICE_PRIVATE:
 	case MEMORY_DEVICE_FS_DAX:
+	case MEMORY_DEVICE_PCI_P2PDMA:
 		return true;
 	default:
 		break;
diff --git a/mm/memremap.c b/mm/memremap.c
index 5a66a71ab591..ec3143ffdeee 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -44,14 +44,16 @@ EXPORT_SYMBOL(devmap_managed_key);
 static void devmap_managed_enable_put(struct dev_pagemap *pgmap)
 {
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
-	    pgmap->type == MEMORY_DEVICE_FS_DAX)
+	    pgmap->type == MEMORY_DEVICE_FS_DAX ||
+	    pgmap->type == MEMORY_DEVICE_PCI_P2PDMA)
 		static_branch_dec(&devmap_managed_key);
 }
 
 static void devmap_managed_enable_get(struct dev_pagemap *pgmap)
 {
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
-	    pgmap->type == MEMORY_DEVICE_FS_DAX)
+	    pgmap->type == MEMORY_DEVICE_FS_DAX ||
+	    pgmap->type == MEMORY_DEVICE_PCI_P2PDMA)
 		static_branch_inc(&devmap_managed_key);
 }
 #else
@@ -355,6 +357,10 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 	case MEMORY_DEVICE_GENERIC:
 		break;
 	case MEMORY_DEVICE_PCI_P2PDMA:
+		if (!pgmap->ops->page_free) {
+			WARN(1, "Missing page_free method\n");
+			return ERR_PTR(-EINVAL);
+		}
 		params.pgprot = pgprot_noncached(params.pgprot);
 		break;
 	default:
@@ -498,7 +504,7 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
 void free_devmap_managed_page(struct page *page)
 {
 	/* notify page idle for dax */
-	if (!is_device_private_page(page)) {
+	if (!is_device_private_page(page) && !is_pci_p2pdma_page(page)) {
 		wake_up_var(&page->_refcount);
 		return;
 	}
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 22/23] PCI/P2PDMA: Introduce pci_mmap_p2pmem()
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (20 preceding siblings ...)
  2021-11-17 21:54 ` [PATCH v4 21/23] mm: use custom page_free for P2PDMA pages Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-11-17 21:54 ` [PATCH v4 23/23] nvme-pci: allow mmaping the CMB in userspace Logan Gunthorpe
  22 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe, Bjorn Helgaas

Introduce pci_mmap_p2pmem() which is a helper to allocate and mmap
a hunk of p2pmem into userspace.

Pages are allocated from the genalloc in bulk and their reference count
incremented. They are returned to the genalloc when the page is put.

The VMA does not take a reference to the pages when they are inserted
with vmf_insert_mixed() (which is necessary for zone device pages), so
the backing P2P memory is tracked in a structure in vm_private_data.

A pseudo mount is used to allocate an inode for each PCI device. The
inode's address_space is used in the file doing the mmap so that all
VMAs are collected and can be unmapped if the PCI device is unbound.
After unmapping, the VMAs are iterated through and their pages are
put so the device can continue to be unbound. An active flag is used
to signal to VMAs not to allocate any further P2P memory once the
removal process starts. Concurrent access to the flag is synchronized
with RCU.

The VMAs and inode will survive after the unbind of the device, but no
pages will be present in the VMA and a subsequent access will result
in a SIGBUS error.
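
For illustration, a driver exposing p2pmem through a char device would
wire the two new helpers up roughly as below (a sketch only; foo_dev and
its fields are hypothetical, and the nvme-pci patch at the end of the
series is the real user):

	static int foo_open(struct inode *inode, struct file *file)
	{
		struct foo_dev *foo = container_of(inode->i_cdev,
						   struct foo_dev, cdev);

		file->private_data = foo;
		/* point f_mapping at the p2pdma inode so VMAs can be
		 * torn down if the PCI device is unbound */
		pci_p2pdma_mmap_file_open(foo->pdev, file);
		return 0;
	}

	static int foo_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct foo_dev *foo = file->private_data;

		return pci_mmap_p2pmem(foo->pdev, vma);
	}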

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
---
 drivers/pci/p2pdma.c       | 301 ++++++++++++++++++++++++++++++++++++-
 include/linux/pci-p2pdma.h |  11 ++
 include/uapi/linux/magic.h |   1 +
 3 files changed, 311 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 16992b0f0c36..641a7808a527 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -17,14 +17,19 @@
 #include <linux/genalloc.h>
 #include <linux/memremap.h>
 #include <linux/percpu-refcount.h>
+#include <linux/pfn_t.h>
+#include <linux/pseudo_fs.h>
 #include <linux/random.h>
 #include <linux/seq_buf.h>
 #include <linux/xarray.h>
+#include <uapi/linux/magic.h>
 
 struct pci_p2pdma {
 	struct gen_pool *pool;
 	bool p2pmem_published;
 	struct xarray map_types;
+	struct inode *inode;
+	bool active;
 };
 
 struct pci_p2pdma_pagemap {
@@ -33,6 +38,15 @@ struct pci_p2pdma_pagemap {
 	u64 bus_offset;
 };
 
+struct pci_p2pdma_map {
+	struct kref ref;
+	struct rcu_head rcu;
+	struct pci_dev *pdev;
+	struct inode *inode;
+	void *kaddr;
+	size_t len;
+};
+
 static struct pci_p2pdma_pagemap *to_p2p_pgmap(struct dev_pagemap *pgmap)
 {
 	return container_of(pgmap, struct pci_p2pdma_pagemap, pgmap);
@@ -101,6 +115,26 @@ static const struct attribute_group p2pmem_group = {
 	.name = "p2pmem",
 };
 
+/*
+ * P2PDMA internal mount
+ * Fake an internal VFS mount-point in order to allocate struct address_space
+ * mappings to remove VMAs on unbind events.
+ */
+static int pci_p2pdma_fs_cnt;
+static struct vfsmount *pci_p2pdma_fs_mnt;
+
+static int pci_p2pdma_fs_init_fs_context(struct fs_context *fc)
+{
+	return init_pseudo(fc, P2PDMA_MAGIC) ? 0 : -ENOMEM;
+}
+
+static struct file_system_type pci_p2pdma_fs_type = {
+	.name = "p2dma",
+	.owner = THIS_MODULE,
+	.init_fs_context = pci_p2pdma_fs_init_fs_context,
+	.kill_sb = kill_anon_super,
+};
+
 static void p2pdma_page_free(struct page *page)
 {
 	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page->pgmap);
@@ -129,6 +163,9 @@ static void pci_p2pdma_release(void *data)
 	gen_pool_destroy(p2pdma->pool);
 	sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group);
 	xa_destroy(&p2pdma->map_types);
+
+	iput(p2pdma->inode);
+	simple_release_fs(&pci_p2pdma_fs_mnt, &pci_p2pdma_fs_cnt);
 }
 
 static int pci_p2pdma_setup(struct pci_dev *pdev)
@@ -146,17 +183,32 @@ static int pci_p2pdma_setup(struct pci_dev *pdev)
 	if (!p2p->pool)
 		goto out;
 
-	error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev);
+	error = simple_pin_fs(&pci_p2pdma_fs_type, &pci_p2pdma_fs_mnt,
+			      &pci_p2pdma_fs_cnt);
 	if (error)
 		goto out_pool_destroy;
 
+	p2p->inode = alloc_anon_inode(pci_p2pdma_fs_mnt->mnt_sb);
+	if (IS_ERR(p2p->inode)) {
+		error = -ENOMEM;
+		goto out_unpin_fs;
+	}
+
+	error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev);
+	if (error)
+		goto out_put_inode;
+
 	error = sysfs_create_group(&pdev->dev.kobj, &p2pmem_group);
 	if (error)
-		goto out_pool_destroy;
+		goto out_put_inode;
 
 	rcu_assign_pointer(pdev->p2pdma, p2p);
 	return 0;
 
+out_put_inode:
+	iput(p2p->inode);
+out_unpin_fs:
+	simple_release_fs(&pci_p2pdma_fs_mnt, &pci_p2pdma_fs_cnt);
 out_pool_destroy:
 	gen_pool_destroy(p2p->pool);
 out:
@@ -164,6 +216,54 @@ static int pci_p2pdma_setup(struct pci_dev *pdev)
 	return error;
 }
 
+static void pci_p2pdma_map_free_pages(struct pci_p2pdma_map *pmap)
+{
+	int i;
+
+	if (!pmap->kaddr)
+		return;
+
+	for (i = 0; i < pmap->len; i += PAGE_SIZE)
+		put_page(virt_to_page(pmap->kaddr + i));
+
+	pmap->kaddr = NULL;
+}
+
+static void pci_p2pdma_free_mappings(struct address_space *mapping)
+{
+	struct vm_area_struct *vma;
+
+	i_mmap_lock_write(mapping);
+	if (RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))
+		goto out;
+
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, 0, -1)
+		pci_p2pdma_map_free_pages(vma->vm_private_data);
+
+out:
+	i_mmap_unlock_write(mapping);
+}
+
+static void pci_p2pdma_unmap_mappings(void *data)
+{
+	struct pci_dev *pdev = data;
+	struct pci_p2pdma *p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
+
+	/* Ensure no new pages can be allocated in mappings */
+	p2pdma->active = false;
+	synchronize_rcu();
+
+	unmap_mapping_range(p2pdma->inode->i_mapping, 0, 0, 1);
+
+	/*
+	 * On some architectures, TLB flushes are done with call_rcu()
+	 * so to ensure GUP fast is done with the pages, call synchronize_rcu()
+	 * before freeing them.
+	 */
+	synchronize_rcu();
+	pci_p2pdma_free_mappings(p2pdma->inode->i_mapping);
+}
+
 /**
  * pci_p2pdma_add_resource - add memory for use as p2p memory
  * @pdev: the device to add the memory to
@@ -222,6 +322,11 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 		goto pgmap_free;
 	}
 
+	error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_unmap_mappings,
+					 pdev);
+	if (error)
+		goto pages_free;
+
 	p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
 	error = gen_pool_add_owner(p2pdma->pool, (unsigned long)addr,
 			pci_bus_address(pdev, bar) + offset,
@@ -230,6 +335,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
 	if (error)
 		goto pages_free;
 
+	p2pdma->active = true;
 	pci_info(pdev, "added peer-to-peer DMA memory %#llx-%#llx\n",
 		 pgmap->range.start, pgmap->range.end);
 
@@ -1030,3 +1136,194 @@ ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev,
 	return sprintf(page, "%s\n", pci_name(p2p_dev));
 }
 EXPORT_SYMBOL_GPL(pci_p2pdma_enable_show);
+
+static struct pci_p2pdma_map *pci_p2pdma_map_alloc(struct pci_dev *pdev,
+						   size_t len)
+{
+	struct pci_p2pdma_map *pmap;
+
+	pmap = kzalloc(sizeof(*pmap), GFP_KERNEL);
+	if (!pmap)
+		return NULL;
+
+	kref_init(&pmap->ref);
+	pmap->pdev = pci_dev_get(pdev);
+	pmap->len = len;
+
+	return pmap;
+}
+
+static void pci_p2pdma_map_free(struct rcu_head *rcu)
+{
+	struct pci_p2pdma_map *pmap =
+		container_of(rcu, struct pci_p2pdma_map, rcu);
+
+	pci_p2pdma_map_free_pages(pmap);
+	kfree(pmap);
+}
+
+static void pci_p2pdma_map_release(struct kref *ref)
+{
+	struct pci_p2pdma_map *pmap =
+		container_of(ref, struct pci_p2pdma_map, ref);
+
+	iput(pmap->inode);
+	simple_release_fs(&pci_p2pdma_fs_mnt, &pci_p2pdma_fs_cnt);
+	pci_dev_put(pmap->pdev);
+
+	if (pmap->kaddr) {
+		/*
+		 * Make sure to wait for the TLB flush (which some
+		 * architectures do using call_rcu()) before returning the
+		 * pages to the genalloc. This ensures the pages are not reused
+		 * before GUP-fast is finished with them. So the mapping is
+		 * freed using call_rcu(), since adding synchronize_rcu() to
+		 * the munmap path can cause long delays on large systems
+		 * during process cleanup.
+		 */
+		call_rcu(&pmap->rcu, pci_p2pdma_map_free);
+		return;
+	}
+
+	/*
+	 * If there are no pages, just free the object immediately. There
+	 * are no more references to it so there is nothing that can race
+	 * with adding the pages.
+	 */
+	pci_p2pdma_map_free(&pmap->rcu);
+}
+
+static void pci_p2pdma_vma_open(struct vm_area_struct *vma)
+{
+	struct pci_p2pdma_map *pmap = vma->vm_private_data;
+
+	kref_get(&pmap->ref);
+}
+
+static void pci_p2pdma_vma_close(struct vm_area_struct *vma)
+{
+	struct pci_p2pdma_map *pmap = vma->vm_private_data;
+
+	kref_put(&pmap->ref, pci_p2pdma_map_release);
+}
+
+static vm_fault_t pci_p2pdma_vma_fault(struct vm_fault *vmf)
+{
+	struct pci_p2pdma_map *pmap = vmf->vma->vm_private_data;
+	struct pci_p2pdma *p2pdma;
+	void *vaddr;
+	pfn_t pfn;
+	int i;
+
+	if (!pmap->kaddr) {
+		rcu_read_lock();
+		p2pdma = rcu_dereference(pmap->pdev->p2pdma);
+		if (!p2pdma)
+			goto err_out;
+
+		if (!p2pdma->active)
+			goto err_out;
+
+		pmap->kaddr = (void *)gen_pool_alloc(p2pdma->pool, pmap->len);
+		if (!pmap->kaddr)
+			goto err_out;
+
+		for (i = 0; i < pmap->len; i += PAGE_SIZE)
+			get_page(virt_to_page(pmap->kaddr + i));
+
+		rcu_read_unlock();
+	}
+
+	vaddr = pmap->kaddr + (vmf->pgoff << PAGE_SHIFT);
+	pfn = phys_to_pfn_t(virt_to_phys(vaddr), PFN_DEV | PFN_MAP);
+
+	return vmf_insert_mixed(vmf->vma, vmf->address, pfn);
+
+err_out:
+	rcu_read_unlock();
+	return VM_FAULT_SIGBUS;
+}
+static const struct vm_operations_struct pci_p2pdma_vmops = {
+	.open = pci_p2pdma_vma_open,
+	.close = pci_p2pdma_vma_close,
+	.fault = pci_p2pdma_vma_fault,
+};
+
+/**
+ * pci_p2pdma_mmap_file_open - setup file mapping to store P2PMEM VMAs
+ * @pdev: the device to allocate memory from
+ * @file: the file doing the mmap of the p2pmem
+ *
+ * Set f_mapping of the file to the p2pdma inode so that mappings
+ * can be torn down on device unbind.
+ */
+void pci_p2pdma_mmap_file_open(struct pci_dev *pdev, struct file *file)
+{
+	struct pci_p2pdma *p2pdma;
+
+	rcu_read_lock();
+	p2pdma = rcu_dereference(pdev->p2pdma);
+	if (p2pdma)
+		file->f_mapping = p2pdma->inode->i_mapping;
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(pci_p2pdma_mmap_file_open);
+
+/**
+ * pci_mmap_p2pmem - setup an mmap region to be backed with P2PDMA memory
+ *	that was registered with pci_p2pdma_add_resource()
+ * @pdev: the device to allocate memory from
+ * @vma: the userspace vma to map the memory to
+ *
+ * The file must call pci_p2pdma_mmap_file_open() in its open() operation.
+ *
+ * Returns 0 on success, or a negative error code on failure
+ */
+int pci_mmap_p2pmem(struct pci_dev *pdev, struct vm_area_struct *vma)
+{
+	struct pci_p2pdma_map *pmap;
+	struct pci_p2pdma *p2pdma;
+	int ret;
+
+	/* prevent private mappings from being established */
+	if ((vma->vm_flags & VM_MAYSHARE) != VM_MAYSHARE) {
+		pci_info_ratelimited(pdev,
+				     "%s: fail, attempted private mapping\n",
+				     current->comm);
+		return -EINVAL;
+	}
+
+	pmap = pci_p2pdma_map_alloc(pdev, vma->vm_end - vma->vm_start);
+	if (!pmap)
+		return -ENOMEM;
+
+	rcu_read_lock();
+	p2pdma = rcu_dereference(pdev->p2pdma);
+	if (!p2pdma) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	ret = simple_pin_fs(&pci_p2pdma_fs_type, &pci_p2pdma_fs_mnt,
+			    &pci_p2pdma_fs_cnt);
+	if (ret)
+		goto out;
+
+	ihold(p2pdma->inode);
+	pmap->inode = p2pdma->inode;
+	rcu_read_unlock();
+
+	vma->vm_flags |= VM_MIXEDMAP;
+	vma->vm_private_data = pmap;
+	vma->vm_ops = &pci_p2pdma_vmops;
+
+	return 0;
+
+out:
+	rcu_read_unlock();
+	kfree(pmap);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pci_mmap_p2pmem);
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
index 2c07aa6b7665..7122050ee660 100644
--- a/include/linux/pci-p2pdma.h
+++ b/include/linux/pci-p2pdma.h
@@ -34,6 +34,8 @@ int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev,
 			    bool *use_p2pdma);
 ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev,
 			       bool use_p2pdma);
+void pci_p2pdma_mmap_file_open(struct pci_dev *pdev, struct file *file);
+int pci_mmap_p2pmem(struct pci_dev *pdev, struct vm_area_struct *vma);
 #else /* CONFIG_PCI_P2PDMA */
 static inline int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar,
 		size_t size, u64 offset)
@@ -90,6 +92,15 @@ static inline ssize_t pci_p2pdma_enable_show(char *page,
 {
 	return sprintf(page, "none\n");
 }
+static inline void pci_p2pdma_mmap_file_open(struct pci_dev *pdev,
+					     struct file *file)
+{
+}
+static inline int pci_mmap_p2pmem(struct pci_dev *pdev,
+				  struct vm_area_struct *vma)
+{
+	return -EOPNOTSUPP;
+}
 #endif /* CONFIG_PCI_P2PDMA */
 
 
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index 35687dcb1a42..af737842c56f 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -88,6 +88,7 @@
 #define BPF_FS_MAGIC		0xcafe4a11
 #define AAFS_MAGIC		0x5a3c69f0
 #define ZONEFS_MAGIC		0x5a4f4653
+#define P2PDMA_MAGIC		0x70327064
 
 /* Since UDF 2.01 is ISO 13346 based... */
 #define UDF_SUPER_MAGIC		0x15013346
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v4 23/23] nvme-pci: allow mmaping the CMB in userspace
  2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
                   ` (21 preceding siblings ...)
  2021-11-17 21:54 ` [PATCH v4 22/23] PCI/P2PDMA: Introduce pci_mmap_p2pmem() Logan Gunthorpe
@ 2021-11-17 21:54 ` Logan Gunthorpe
  2021-12-21  9:07   ` Christoph Hellwig
  22 siblings, 1 reply; 40+ messages in thread
From: Logan Gunthorpe @ 2021-11-17 21:54 UTC (permalink / raw)
  To: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Logan Gunthorpe

Allow userspace to obtain CMB memory by mmaping the controller's
char device. The mmap call allocates and returns a chunk of CMB memory
(the mmap offset is ignored), so userspace has no control over the
address within the CMB.

A VMA allocated in this way will only be usable by drivers that set
FOLL_PCI_P2PDMA when calling GUP, and inter-device support will be
checked the first time the pages are mapped for DMA.

Currently this is only supported by O_DIRECT to a PCI NVMe device
or through the NVMe passthrough IOCTL.
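For illustration, a minimal userspace sketch of the intended flow could
look like the following (the device path, file path, allocation length
and lack of error handling are assumptions for the example, not part of
the patch):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define CMB_LEN (2 * 1024 * 1024)	/* assumed allocation length */

int main(void)
{
	/* mmap a chunk of the CMB via the controller's char device */
	int cfd = open("/dev/nvme0", O_RDWR);
	void *cmb = mmap(NULL, CMB_LEN, PROT_READ | PROT_WRITE,
			 MAP_SHARED, cfd, 0);

	/* hand the buffer only to O_DIRECT IO on an NVMe-backed file */
	int ffd = open("/mnt/nvme0n1/data", O_RDWR | O_DIRECT);
	pwrite(ffd, cmb, CMB_LEN, 0);

	munmap(cmb, CMB_LEN);
	close(ffd);
	close(cfd);
	return 0;
}

The pointer returned by mmap() is only ever handed to O_DIRECT IO on an
NVMe-backed file, per the restriction described above.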

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/nvme/host/core.c | 15 +++++++++++++++
 drivers/nvme/host/nvme.h |  2 ++
 drivers/nvme/host/pci.c  | 18 ++++++++++++++++++
 3 files changed, 35 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 344414351314..39ad592cacdc 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3111,6 +3111,10 @@ static int nvme_dev_open(struct inode *inode, struct file *file)
 	}
 
 	file->private_data = ctrl;
+
+	if (ctrl->ops->mmap_file_open)
+		ctrl->ops->mmap_file_open(ctrl, file);
+
 	return 0;
 }
 
@@ -3124,12 +3128,23 @@ static int nvme_dev_release(struct inode *inode, struct file *file)
 	return 0;
 }
 
+static int nvme_dev_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct nvme_ctrl *ctrl = file->private_data;
+
+	if (!ctrl->ops->mmap_cmb)
+		return -ENODEV;
+
+	return ctrl->ops->mmap_cmb(ctrl, vma);
+}
+
 static const struct file_operations nvme_dev_fops = {
 	.owner		= THIS_MODULE,
 	.open		= nvme_dev_open,
 	.release	= nvme_dev_release,
 	.unlocked_ioctl	= nvme_dev_ioctl,
 	.compat_ioctl	= compat_ptr_ioctl,
+	.mmap		= nvme_dev_mmap,
 };
 
 static ssize_t nvme_sysfs_reset(struct device *dev,
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index a9f60b12a32b..5fdc1a2027e9 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -494,6 +494,8 @@ struct nvme_ctrl_ops {
 	void (*delete_ctrl)(struct nvme_ctrl *ctrl);
 	int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
 	bool (*supports_pci_p2pdma)(struct nvme_ctrl *ctrl);
+	void (*mmap_file_open)(struct nvme_ctrl *ctrl, struct file *file);
+	int (*mmap_cmb)(struct nvme_ctrl *ctrl, struct vm_area_struct *vma);
 };
 
 /*
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 3f2bd1efe076..05d6e7284000 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2896,6 +2896,22 @@ static bool nvme_pci_supports_pci_p2pdma(struct nvme_ctrl *ctrl)
 	return dma_pci_p2pdma_supported(dev->dev);
 }
 
+static void nvme_pci_mmap_file_open(struct nvme_ctrl *ctrl,
+				    struct file *file)
+{
+	struct pci_dev *pdev = to_pci_dev(to_nvme_dev(ctrl)->dev);
+
+	pci_p2pdma_mmap_file_open(pdev, file);
+}
+
+static int nvme_pci_mmap_cmb(struct nvme_ctrl *ctrl,
+			     struct vm_area_struct *vma)
+{
+	struct pci_dev *pdev = to_pci_dev(to_nvme_dev(ctrl)->dev);
+
+	return pci_mmap_p2pmem(pdev, vma);
+}
+
 static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
 	.name			= "pcie",
 	.module			= THIS_MODULE,
@@ -2907,6 +2923,8 @@ static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
 	.submit_async_event	= nvme_pci_submit_async_event,
 	.get_address		= nvme_pci_get_address,
 	.supports_pci_p2pdma	= nvme_pci_supports_pci_p2pdma,
+	.mmap_file_open		= nvme_pci_mmap_file_open,
+	.mmap_cmb		= nvme_pci_mmap_cmb,
 };
 
 static int nvme_dev_map(struct nvme_dev *dev)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions
  2021-11-17 21:53 ` [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions Logan Gunthorpe
@ 2021-12-13 21:51   ` Chaitanya Kulkarni
  2021-12-21  9:00   ` Christoph Hellwig
  1 sibling, 0 replies; 40+ messages in thread
From: Chaitanya Kulkarni @ 2021-12-13 21:51 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme, linux-block,
	linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Jason Gunthorpe

On 11/17/21 1:53 PM, Logan Gunthorpe wrote:
> Convert the sg_is_chain(), sg_is_last() and sg_chain_ptr() macros
> into static inline functions. There's no reason for these to be macros
> and static inline functions are generally preferred these days.
> 
> Also introduce the SG_PAGE_LINK_MASK define so the P2PDMA work, which is
> adding another bit to this mask, can do so more easily.
> 
> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 02/23] lib/scatterlist: add flag for indicating P2PDMA segments in an SGL
  2021-11-17 21:53 ` [PATCH v4 02/23] lib/scatterlist: add flag for indicating P2PDMA segments in an SGL Logan Gunthorpe
@ 2021-12-13 21:55   ` Chaitanya Kulkarni
  2021-12-21  9:02   ` Christoph Hellwig
  1 sibling, 0 replies; 40+ messages in thread
From: Chaitanya Kulkarni @ 2021-12-13 21:55 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme, linux-block,
	linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni

On 11/17/21 1:53 PM, Logan Gunthorpe wrote:
> Make use of the third free LSB in scatterlist's page_link on 64bit systems.
> 
> The extra bit will be used by dma_[un]map_sg_p2pdma() to determine when a
> given SGL segment's dma_address points to a PCI bus address.
> dma_unmap_sg_p2pdma() will need to perform different cleanup when a
> segment is marked as a bus address.
> 
> Create a CONFIG_NEED_SG_DMA_BUS_ADDR_FLAG bool which depends on
> CONFIG_64BIT (so there is space in the page link for the new flag).
> CONFIG_PCI_P2PDMA will then depend on this so this means PCI P2PDMA will
> require CONFIG_64BIT. This should be acceptable as the majority of P2PDMA
> use cases are restricted to newer root complexes and roughly require the
> extra address space for memory BARs used in the transactions.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
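For context, a rough sketch of the kind of per-segment helpers that could
sit on top of this bit is below (the names, the flag value and the bodies
are illustrative assumptions, not code from the patch):

#include <linux/scatterlist.h>

/* illustrative only: the third free LSB of page_link marks a bus address */
#define SG_DMA_BUS_ADDRESS	0x04UL

static inline bool sg_is_dma_bus_address(struct scatterlist *sg)
{
	return sg->page_link & SG_DMA_BUS_ADDRESS;
}

static inline void sg_dma_mark_bus_address(struct scatterlist *sg)
{
	sg->page_link |= SG_DMA_BUS_ADDRESS;
}

The unmap path could then test the flag on each segment and perform the
different cleanup the commit message describes for bus-address segments.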


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 03/23] PCI/P2PDMA: Attempt to set map_type if it has not been set
  2021-11-17 21:53 ` [PATCH v4 03/23] PCI/P2PDMA: Attempt to set map_type if it has not been set Logan Gunthorpe
@ 2021-12-13 22:00   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 40+ messages in thread
From: Chaitanya Kulkarni @ 2021-12-13 22:00 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme, linux-block,
	linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni, Bjorn Helgaas

On 11/17/21 1:53 PM, Logan Gunthorpe wrote:
> Attempt to find the mapping type for P2PDMA pages on the first
> DMA map attempt if it has not been done ahead of time.
> 
> Previously, the mapping type was expected to be calculated ahead of
> time, but if pages are to come from userspace then there's no
> way to ensure the path was checked ahead of time.
> 
> This change will calculate the mapping type if it hasn't been pre-calculated,
> so it is no longer invalid to call pci_p2pdma_map_sg() before the mapping
> type is calculated. Drop the WARN_ON when that is the case.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> Acked-by: Bjorn Helgaas <bhelgaas@google.com>
> ---

Perhaps a comment would be nice in the default case in
pci_p2pdma_map_sg_attrs() where you have removed the WARN_ON_ONCE().

Either way, looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 04/23] PCI/P2PDMA: Expose pci_p2pdma_map_type()
  2021-11-17 21:53 ` [PATCH v4 04/23] PCI/P2PDMA: Expose pci_p2pdma_map_type() Logan Gunthorpe
@ 2021-12-13 22:05   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 40+ messages in thread
From: Chaitanya Kulkarni @ 2021-12-13 22:05 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme, linux-block,
	linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni, Bjorn Helgaas,
	Jason Gunthorpe

On 11/17/21 1:53 PM, Logan Gunthorpe wrote:
> pci_p2pdma_map_type() will be needed by the dma-iommu map_sg
> implementation because it will need to determine the mapping type
> ahead of actually doing the mapping to create the IOMMU mapping.
> 
> Prototypes for this helper are added to dma-map-ops.h as they are only
> useful to dma map implementations and don't need to pollute the public
> pci-p2pdma header.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> Acked-by: Bjorn Helgaas <bhelgaas@google.com>
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> ---

The documentation looks much better from a reviewer's point of view.

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 10/23] nvme-pci: check DMA ops when indicating support for PCI P2PDMA
  2021-11-17 21:53 ` [PATCH v4 10/23] nvme-pci: check DMA ops when indicating support for PCI P2PDMA Logan Gunthorpe
@ 2021-12-13 22:10   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 40+ messages in thread
From: Chaitanya Kulkarni @ 2021-12-13 22:10 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme, linux-block,
	linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni

On 11/17/21 1:53 PM, Logan Gunthorpe wrote:
> Introduce a supports_pci_p2pdma() operation in nvme_ctrl_ops to
> replace the fixed NVME_F_PCI_P2PDMA flag such that the dma_map_ops
> flags can be checked for PCI P2PDMA support.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 11/23] nvme-pci: convert to using dma_map_sgtable()
  2021-11-17 21:53 ` [PATCH v4 11/23] nvme-pci: convert to using dma_map_sgtable() Logan Gunthorpe
@ 2021-12-13 22:21   ` Chaitanya Kulkarni
  2021-12-13 22:28     ` Logan Gunthorpe
  0 siblings, 1 reply; 40+ messages in thread
From: Chaitanya Kulkarni @ 2021-12-13 22:21 UTC (permalink / raw)
  To: Logan Gunthorpe, linux-kernel, linux-nvme, linux-block,
	linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni, Max Gurtovoy


>   static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
> -		struct request *req, struct nvme_rw_command *cmd, int entries)
> +		struct request *req, struct nvme_rw_command *cmd)
>   {
>   	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>   	struct dma_pool *pool;
>   	struct nvme_sgl_desc *sg_list;
> -	struct scatterlist *sg = iod->sg;
> +	struct scatterlist *sg = iod->sgt.sgl;
> +	int entries = iod->sgt.nents;

I don't see any use of the newly added entries variable anywhere in
nvme_pci_setup_sgls(); what am I missing?

Also, the type of the entries variable should be unsigned int to match
iod->sgt.nents.

>   	dma_addr_t sgl_dma;
>   	int i = 0;
>   
> @@ -848,7 +838,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
>   {
>   	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>   	blk_status_t ret = BLK_STS_RESOURCE;
> -	int nr_mapped;
> +	int rc;
>   
>   	if (blk_rq_nr_phys_segments(req) == 1) {
>   		struct bio_vec bv = req_bvec(req);

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 11/23] nvme-pci: convert to using dma_map_sgtable()
  2021-12-13 22:21   ` Chaitanya Kulkarni
@ 2021-12-13 22:28     ` Logan Gunthorpe
  0 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-12-13 22:28 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-kernel, linux-nvme, linux-block,
	linux-pci, linux-mm, iommu
  Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni, Max Gurtovoy


On 2021-12-13 3:21 p.m., Chaitanya Kulkarni wrote:
> 
>>   static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
>> -		struct request *req, struct nvme_rw_command *cmd, int entries)
>> +		struct request *req, struct nvme_rw_command *cmd)
>>   {
>>   	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
>>   	struct dma_pool *pool;
>>   	struct nvme_sgl_desc *sg_list;
>> -	struct scatterlist *sg = iod->sg;
>> +	struct scatterlist *sg = iod->sgt.sgl;
>> +	int entries = iod->sgt.nents;
> 
> I don't see any use of the newly added entries variable anywhere in
> nvme_pci_setup_sgls(); what am I missing?

'entries' is being moved out from the argument list of
nvme_pci_setup_sgls(), so there are already uses in the function that
don't show in the diff.

> Also, the type of the entries variable should be unsigned int to match
> iod->sgt.nents.

Sure, I will fix that in the next version.

Thanks for the reviews!

Logan

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions
  2021-11-17 21:53 ` [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions Logan Gunthorpe
  2021-12-13 21:51   ` Chaitanya Kulkarni
@ 2021-12-21  9:00   ` Christoph Hellwig
  2021-12-21 17:23     ` Logan Gunthorpe
  1 sibling, 1 reply; 40+ messages in thread
From: Christoph Hellwig @ 2021-12-21  9:00 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm,
	iommu, Stephen Bates, Christoph Hellwig, Dan Williams,
	Jason Gunthorpe, Christian König, John Hubbard, Don Dutile,
	Matthew Wilcox, Daniel Vetter, Jakowski Andrzej, Minturn Dave B,
	Jason Ekstrand, Dave Hansen, Xiong Jianxin, Bjorn Helgaas,
	Ira Weiny, Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Jason Gunthorpe

On Wed, Nov 17, 2021 at 02:53:48PM -0700, Logan Gunthorpe wrote:
> Convert the sg_is_chain(), sg_is_last() and sg_chain_ptr() macros
> into static inline functions. There's no reason for these to be macros
> and static inline functions are generally preferred these days.
> 
> Also introduce the SG_PAGE_LINK_MASK define so the P2PDMA work, which is
> adding another bit to this mask, can do so more easily.
> 
> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>

Looks fine:

Reviewed-by: Christoph Hellwig <hch@lst.de>

scatterlist.h doesn't have a real maintainer, do you want me to pick
this up through the DMA tree?

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 02/23] lib/scatterlist: add flag for indicating P2PDMA segments in an SGL
  2021-11-17 21:53 ` [PATCH v4 02/23] lib/scatterlist: add flag for indicating P2PDMA segments in an SGL Logan Gunthorpe
  2021-12-13 21:55   ` Chaitanya Kulkarni
@ 2021-12-21  9:02   ` Christoph Hellwig
  1 sibling, 0 replies; 40+ messages in thread
From: Christoph Hellwig @ 2021-12-21  9:02 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm,
	iommu, Stephen Bates, Christoph Hellwig, Dan Williams,
	Jason Gunthorpe, Christian König, John Hubbard, Don Dutile,
	Matthew Wilcox, Daniel Vetter, Jakowski Andrzej, Minturn Dave B,
	Jason Ekstrand, Dave Hansen, Xiong Jianxin, Bjorn Helgaas,
	Ira Weiny, Robin Murphy, Martin Oliveira, Chaitanya Kulkarni

> +	#
> +	# The need for the scatterlist DMA bus address flag means PCI P2PDMA
> +	# requires 64bit
> +	#
> +	select NEED_SG_DMA_BUS_ADDR_FLAG

> +config NEED_SG_DMA_BUS_ADDR_FLAG
> +	depends on 64BIT
> +	bool

depends does not work for symbols that are selected using select.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 16/23] iov_iter: introduce iov_iter_get_pages_[alloc_]flags()
  2021-11-17 21:54 ` [PATCH v4 16/23] iov_iter: introduce iov_iter_get_pages_[alloc_]flags() Logan Gunthorpe
@ 2021-12-21  9:04   ` Christoph Hellwig
  0 siblings, 0 replies; 40+ messages in thread
From: Christoph Hellwig @ 2021-12-21  9:04 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm,
	iommu, Stephen Bates, Christoph Hellwig, Dan Williams,
	Jason Gunthorpe, Christian König, John Hubbard, Don Dutile,
	Matthew Wilcox, Daniel Vetter, Jakowski Andrzej, Minturn Dave B,
	Jason Ekstrand, Dave Hansen, Xiong Jianxin, Bjorn Helgaas,
	Ira Weiny, Robin Murphy, Martin Oliveira, Chaitanya Kulkarni

All these new helpers should be _GPL exports, keeping the existing
ones (which should never have been non-GPL exports as tiny wrappers
around GUP-fast) as out-of-line wrappers.
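
A sketch of the pattern being suggested, assuming the new flags-taking
helper is named iov_iter_get_pages_flags() (the name and exact signatures
are illustrative, not taken from the patch):

#include <linux/export.h>
#include <linux/uio.h>

/* new helper takes an explicit gup_flags argument and is the _GPL export */
ssize_t iov_iter_get_pages_flags(struct iov_iter *i, struct page **pages,
				 size_t maxsize, unsigned maxpages,
				 size_t *start, unsigned int gup_flags);

/* the existing interface stays as a thin out-of-line wrapper */
ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages,
			   size_t maxsize, unsigned maxpages, size_t *start)
{
	return iov_iter_get_pages_flags(i, pages, maxsize, maxpages,
					start, 0);
}
EXPORT_SYMBOL(iov_iter_get_pages);

Existing callers stay on the non-GPL wrapper while P2PDMA-aware callers
can pass FOLL_PCI_P2PDMA through the new _GPL entry point.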

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 17/23] block: add check when merging zone device pages
  2021-11-17 21:54 ` [PATCH v4 17/23] block: add check when merging zone device pages Logan Gunthorpe
@ 2021-12-21  9:05   ` Christoph Hellwig
  0 siblings, 0 replies; 40+ messages in thread
From: Christoph Hellwig @ 2021-12-21  9:05 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm,
	iommu, Stephen Bates, Christoph Hellwig, Dan Williams,
	Jason Gunthorpe, Christian König, John Hubbard, Don Dutile,
	Matthew Wilcox, Daniel Vetter, Jakowski Andrzej, Minturn Dave B,
	Jason Ekstrand, Dave Hansen, Xiong Jianxin, Bjorn Helgaas,
	Ira Weiny, Robin Murphy, Martin Oliveira, Chaitanya Kulkarni

> +/*
> + * Consecutive zone device pages should not be merged into the same sgl
> + * or bvec segment with other types of pages or if they belong to different
> + * pgmaps. Otherwise getting the pgmap of a given segment is not possible
> + * without scanning the entire segment. This helper returns true if either
> + * both pages are not zone device pages or both pages are zone device pages
> + * with the same pgmap.
> + */
> +static inline bool zone_device_pages_are_mergeable(const struct page *a,
> +						   const struct page *b)

Merging is really only one use case here.  This really checks whether the
pages belong to the same pgmap, so I suspect that should be in the name.
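
For illustration, a pgmap-named helper along those lines might look
roughly like this (the name and body are a sketch inferred from the quoted
comment, not the patch's actual code):

#include <linux/mm.h>

static inline bool zone_device_pages_have_same_pgmap(const struct page *a,
						      const struct page *b)
{
	/* a device page and a normal page never share a pgmap */
	if (is_zone_device_page(a) != is_zone_device_page(b))
		return false;

	/* two normal pages may always share a segment */
	if (!is_zone_device_page(a))
		return true;

	return a->pgmap == b->pgmap;
}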

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 21/23] mm: use custom page_free for P2PDMA pages
  2021-11-17 21:54 ` [PATCH v4 21/23] mm: use custom page_free for P2PDMA pages Logan Gunthorpe
@ 2021-12-21  9:06   ` Christoph Hellwig
  2021-12-21 17:27     ` Logan Gunthorpe
  0 siblings, 1 reply; 40+ messages in thread
From: Christoph Hellwig @ 2021-12-21  9:06 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm,
	iommu, Stephen Bates, Christoph Hellwig, Dan Williams,
	Jason Gunthorpe, Christian König, John Hubbard, Don Dutile,
	Matthew Wilcox, Daniel Vetter, Jakowski Andrzej, Minturn Dave B,
	Jason Ekstrand, Dave Hansen, Xiong Jianxin, Bjorn Helgaas,
	Ira Weiny, Robin Murphy, Martin Oliveira, Chaitanya Kulkarni

On Wed, Nov 17, 2021 at 02:54:08PM -0700, Logan Gunthorpe wrote:
> When P2PDMA pages are passed to userspace, they will need to be
> reference counted properly and returned to their genalloc after their
> reference count returns to 1. This is accomplished with the existing
> DEV_PAGEMAP_OPS and the .page_free() operation.
> 
> Change CONFIG_P2PDMA to select CONFIG_DEV_PAGEMAP_OPS and add
> MEMORY_DEVICE_PCI_P2PDMA to page_is_devmap_managed(),
> devmap_managed_enable_[put|get]() and free_devmap_managed_page().

Uuuh.  We are trying hard to kill off this magic free-at-refcount-1
behavior in the amdgpu device coherent series.  We really should not
add more of this.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 23/23] nvme-pci: allow mmaping the CMB in userspace
  2021-11-17 21:54 ` [PATCH v4 23/23] nvme-pci: allow mmaping the CMB in userspace Logan Gunthorpe
@ 2021-12-21  9:07   ` Christoph Hellwig
  0 siblings, 0 replies; 40+ messages in thread
From: Christoph Hellwig @ 2021-12-21  9:07 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm,
	iommu, Stephen Bates, Christoph Hellwig, Dan Williams,
	Jason Gunthorpe, Christian König, John Hubbard, Don Dutile,
	Matthew Wilcox, Daniel Vetter, Jakowski Andrzej, Minturn Dave B,
	Jason Ekstrand, Dave Hansen, Xiong Jianxin, Bjorn Helgaas,
	Ira Weiny, Robin Murphy, Martin Oliveira, Chaitanya Kulkarni

>  	file->private_data = ctrl;
> +
> +	if (ctrl->ops->mmap_file_open)
> +		ctrl->ops->mmap_file_open(ctrl, file);
> +

The callout doesn't really have anything to do with mmap; that is just
how you use it.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions
  2021-12-21  9:00   ` Christoph Hellwig
@ 2021-12-21 17:23     ` Logan Gunthorpe
  2021-12-22  8:22       ` Christoph Hellwig
  0 siblings, 1 reply; 40+ messages in thread
From: Logan Gunthorpe @ 2021-12-21 17:23 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm,
	iommu, Stephen Bates, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Jason Gunthorpe



On 2021-12-21 2:00 a.m., Christoph Hellwig wrote:
> On Wed, Nov 17, 2021 at 02:53:48PM -0700, Logan Gunthorpe wrote:
>> Convert the sg_is_chain(), sg_is_last() and sg_chain_ptr() macros
>> into static inline functions. There's no reason for these to be macros
>> and static inline functions are generally preferred these days.
>>
>> Also introduce the SG_PAGE_LINK_MASK define so the P2PDMA work, which is
>> adding another bit to this mask, can do so more easily.
>>
>> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
>> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> 
> Looks fine:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> 
> scatterlist.h doesn't have a real maintainer, do you want me to pick
> this up through the DMA tree?

Sure, that would be great!

Thanks,

Logan

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 21/23] mm: use custom page_free for P2PDMA pages
  2021-12-21  9:06   ` Christoph Hellwig
@ 2021-12-21 17:27     ` Logan Gunthorpe
  0 siblings, 0 replies; 40+ messages in thread
From: Logan Gunthorpe @ 2021-12-21 17:27 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-kernel, linux-nvme, linux-block, linux-pci, linux-mm,
	iommu, Stephen Bates, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Martin Oliveira, Chaitanya Kulkarni



On 2021-12-21 2:06 a.m., Christoph Hellwig wrote:
> On Wed, Nov 17, 2021 at 02:54:08PM -0700, Logan Gunthorpe wrote:
>> When P2PDMA pages are passed to userspace, they will need to be
>> reference counted properly and returned to their genalloc after their
>> reference count returns to 1. This is accomplished with the existing
>> DEV_PAGEMAP_OPS and the .page_free() operation.
>>
>> Change CONFIG_P2PDMA to select CONFIG_DEV_PAGEMAP_OPS and add
>> MEMORY_DEVICE_PCI_P2PDMA to page_is_devmap_managed(),
>> devmap_managed_enable_[put|get]() and free_devmap_managed_page().
> 
> Uuuh.  We are trying hard to kill off this magic free at refcount 1
> behavior in the amdgpu device coherent series.  We really should not
> add more of this.

Ah, ok. I found Ralph's patch that cleans this up and I can try to
rebase this onto it for future postings until it gets merged.

Your other comments I can address for the next time I post this series.

Thanks for the review!

Logan

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions
  2021-12-21 17:23     ` Logan Gunthorpe
@ 2021-12-22  8:22       ` Christoph Hellwig
  0 siblings, 0 replies; 40+ messages in thread
From: Christoph Hellwig @ 2021-12-22  8:22 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: Christoph Hellwig, linux-kernel, linux-nvme, linux-block,
	linux-pci, linux-mm, iommu, Stephen Bates, Dan Williams,
	Jason Gunthorpe, Christian König, John Hubbard, Don Dutile,
	Matthew Wilcox, Daniel Vetter, Jakowski Andrzej, Minturn Dave B,
	Jason Ekstrand, Dave Hansen, Xiong Jianxin, Bjorn Helgaas,
	Ira Weiny, Robin Murphy, Martin Oliveira, Chaitanya Kulkarni,
	Jason Gunthorpe

On Tue, Dec 21, 2021 at 10:23:24AM -0700, Logan Gunthorpe wrote:
> > scatterlist.h doesn't have a real maintainer, do you want me to pick
> > this up through the DMA tree?
> 
> Sure, that would be great!

Done.

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2021-12-22  8:22 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-17 21:53 [PATCH v4 00/23] Userspace P2PDMA with O_DIRECT NVMe devices Logan Gunthorpe
2021-11-17 21:53 ` [PATCH v4 01/23] lib/scatterlist: cleanup macros into static inline functions Logan Gunthorpe
2021-12-13 21:51   ` Chaitanya Kulkarni
2021-12-21  9:00   ` Christoph Hellwig
2021-12-21 17:23     ` Logan Gunthorpe
2021-12-22  8:22       ` Christoph Hellwig
2021-11-17 21:53 ` [PATCH v4 02/23] lib/scatterlist: add flag for indicating P2PDMA segments in an SGL Logan Gunthorpe
2021-12-13 21:55   ` Chaitanya Kulkarni
2021-12-21  9:02   ` Christoph Hellwig
2021-11-17 21:53 ` [PATCH v4 03/23] PCI/P2PDMA: Attempt to set map_type if it has not been set Logan Gunthorpe
2021-12-13 22:00   ` Chaitanya Kulkarni
2021-11-17 21:53 ` [PATCH v4 04/23] PCI/P2PDMA: Expose pci_p2pdma_map_type() Logan Gunthorpe
2021-12-13 22:05   ` Chaitanya Kulkarni
2021-11-17 21:53 ` [PATCH v4 05/23] PCI/P2PDMA: Introduce helpers for dma_map_sg implementations Logan Gunthorpe
2021-11-17 21:53 ` [PATCH v4 06/23] dma-mapping: allow EREMOTEIO return code for P2PDMA transfers Logan Gunthorpe
2021-11-17 21:53 ` [PATCH v4 07/23] dma-direct: support PCI P2PDMA pages in dma-direct map_sg Logan Gunthorpe
2021-11-17 21:53 ` [PATCH v4 08/23] dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support Logan Gunthorpe
2021-11-17 21:53 ` [PATCH v4 09/23] iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg Logan Gunthorpe
2021-11-17 21:53 ` [PATCH v4 10/23] nvme-pci: check DMA ops when indicating support for PCI P2PDMA Logan Gunthorpe
2021-12-13 22:10   ` Chaitanya Kulkarni
2021-11-17 21:53 ` [PATCH v4 11/23] nvme-pci: convert to using dma_map_sgtable() Logan Gunthorpe
2021-12-13 22:21   ` Chaitanya Kulkarni
2021-12-13 22:28     ` Logan Gunthorpe
2021-11-17 21:53 ` [PATCH v4 12/23] RDMA/core: introduce ib_dma_pci_p2p_dma_supported() Logan Gunthorpe
2021-11-17 21:54 ` [PATCH v4 13/23] RDMA/rw: drop pci_p2pdma_[un]map_sg() Logan Gunthorpe
2021-11-17 21:54 ` [PATCH v4 14/23] PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg() Logan Gunthorpe
2021-11-17 21:54 ` [PATCH v4 15/23] mm: introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages Logan Gunthorpe
2021-11-17 21:54 ` [PATCH v4 16/23] iov_iter: introduce iov_iter_get_pages_[alloc_]flags() Logan Gunthorpe
2021-12-21  9:04   ` Christoph Hellwig
2021-11-17 21:54 ` [PATCH v4 17/23] block: add check when merging zone device pages Logan Gunthorpe
2021-12-21  9:05   ` Christoph Hellwig
2021-11-17 21:54 ` [PATCH v4 18/23] lib/scatterlist: " Logan Gunthorpe
2021-11-17 21:54 ` [PATCH v4 19/23] block: set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages() Logan Gunthorpe
2021-11-17 21:54 ` [PATCH v4 20/23] block: set FOLL_PCI_P2PDMA in bio_map_user_iov() Logan Gunthorpe
2021-11-17 21:54 ` [PATCH v4 21/23] mm: use custom page_free for P2PDMA pages Logan Gunthorpe
2021-12-21  9:06   ` Christoph Hellwig
2021-12-21 17:27     ` Logan Gunthorpe
2021-11-17 21:54 ` [PATCH v4 22/23] PCI/P2PDMA: Introduce pci_mmap_p2pmem() Logan Gunthorpe
2021-11-17 21:54 ` [PATCH v4 23/23] nvme-pci: allow mmaping the CMB in userspace Logan Gunthorpe
2021-12-21  9:07   ` Christoph Hellwig

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).