* [PATCH, RFC 0/5] IB: Optimize DMA mapping
@ 2016-12-08  1:10 Bart Van Assche
       [not found] ` <07c07529-4636-fafb-2598-7358d8a1460d-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Bart Van Assche @ 2016-12-08  1:10 UTC (permalink / raw)
  To: Doug Ledford, Christoph Hellwig, Sagi Grimberg
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

The DMA mapping operations are in the hot path, so it is important that
their overhead is as low as possible. In the past there was a reason to
have DMA mapping operations specific to the IB subsystem, but that
reason no longer exists. Hence this patch series, which eliminates the
if (dev->dma_ops) test from the hot path. An additional benefit is that
the size of HW and SW drivers that do not use DMA is reduced by
switching to dma_noop_ops. The patches in this series are:

0001-treewide-constify-most-struct-dma_map_ops.patch
0002-misc-vop-Remove-a-cast.patch
0003-Move-dma_ops-from-archdata-into-struct-device.patch
0004-IB-Switch-from-struct-ib_dma_mapping_ops-to-struct-d.patch
0005-treewide-Inline-ib_dma_map_-functions.patch

As usual, feedback is welcome.

Bart.
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* [PATCH 1/5] treewide: constify most struct dma_map_ops
       [not found] ` <07c07529-4636-fafb-2598-7358d8a1460d-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
@ 2016-12-08  1:10   ` Bart Van Assche
       [not found]     ` <f6b70724-772c-c17f-f1be-1681fab31228-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
  2016-12-08  1:10   ` [PATCH 2/5] misc: vop: Remove a cast Bart Van Assche
                     ` (4 subsequent siblings)
  5 siblings, 1 reply; 36+ messages in thread
From: Bart Van Assche @ 2016-12-08  1:10 UTC (permalink / raw)
  To: Doug Ledford, Christoph Hellwig, Sagi Grimberg
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

The basis of this patch has been generated as follows:

git grep -l 'struct dma_map_ops' |
  xargs -d\\n sed -i \
    -e 's/struct dma_map_ops/const struct dma_map_ops/g' \
    -e 's/const struct dma_map_ops {/struct dma_map_ops {/g' \
    -e 's/^const struct dma_map_ops;$/struct dma_map_ops;/' \
    -e 's/const const struct dma_map_ops /const struct dma_map_ops /g';
sed -i -e 's/const \(struct dma_map_ops intel_dma_ops\)/\1/' \
  $(git grep -l 'struct dma_map_ops intel_dma_ops');
sed -i -e 's/const \(struct dma_map_ops dma_iommu_ops\)/\1/' \
  $(git grep -l 'struct dma_map_ops' | grep ^arch/powerpc);
sed -i -e '/^struct vmd_dev {$/,/^};/ s/const \(struct dma_map_ops[[:blank:]]dma_ops;\)/\1/' \
       -e 's/const \(struct dma_map_ops \*dest = \&vmd->dma_ops\)/\1/' drivers/pci/host/vmd.c

Signed-off-by: Bart Van Assche <bart.vanassche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
---
 arch/alpha/include/asm/dma-mapping.h               |  4 +--
 arch/alpha/kernel/pci-noop.c                       |  4 +--
 arch/alpha/kernel/pci_iommu.c                      |  4 +--
 arch/arc/include/asm/dma-mapping.h                 |  4 +--
 arch/arc/mm/dma.c                                  |  2 +-
 arch/arm/common/dmabounce.c                        |  2 +-
 arch/arm/include/asm/device.h                      |  2 +-
 arch/arm/include/asm/dma-mapping.h                 | 10 +++---
 arch/arm/include/asm/xen/hypervisor.h              |  2 +-
 arch/arm/mm/dma-mapping.c                          | 22 ++++++------
 arch/arm/xen/mm.c                                  |  4 +--
 arch/arm64/include/asm/device.h                    |  2 +-
 arch/arm64/include/asm/dma-mapping.h               |  6 ++--
 arch/arm64/mm/dma-mapping.c                        |  6 ++--
 arch/avr32/include/asm/dma-mapping.h               |  4 +--
 arch/avr32/mm/dma-coherent.c                       |  2 +-
 arch/blackfin/include/asm/dma-mapping.h            |  4 +--
 arch/blackfin/kernel/dma-mapping.c                 |  2 +-
 arch/c6x/include/asm/dma-mapping.h                 |  4 +--
 arch/c6x/kernel/dma.c                              |  2 +-
 arch/cris/arch-v32/drivers/pci/dma.c               |  2 +-
 arch/cris/include/asm/dma-mapping.h                |  6 ++--
 arch/frv/include/asm/dma-mapping.h                 |  4 +--
 arch/frv/mb93090-mb00/pci-dma-nommu.c              |  2 +-
 arch/frv/mb93090-mb00/pci-dma.c                    |  2 +-
 arch/h8300/include/asm/dma-mapping.h               |  4 +--
 arch/h8300/kernel/dma.c                            |  2 +-
 arch/hexagon/include/asm/dma-mapping.h             |  4 +--
 arch/hexagon/kernel/dma.c                          |  4 +--
 arch/ia64/hp/common/hwsw_iommu.c                   |  4 +--
 arch/ia64/hp/common/sba_iommu.c                    |  4 +--
 arch/ia64/include/asm/dma-mapping.h                |  2 +-
 arch/ia64/include/asm/machvec.h                    |  4 +--
 arch/ia64/kernel/dma-mapping.c                     |  4 +--
 arch/ia64/kernel/pci-dma.c                         | 10 +++---
 arch/ia64/kernel/pci-swiotlb.c                     |  2 +-
 arch/m68k/include/asm/dma-mapping.h                |  4 +--
 arch/m68k/kernel/dma.c                             |  2 +-
 arch/metag/include/asm/dma-mapping.h               |  4 +--
 arch/metag/kernel/dma.c                            |  2 +-
 arch/microblaze/include/asm/dma-mapping.h          |  4 +--
 arch/microblaze/kernel/dma.c                       |  2 +-
 arch/mips/cavium-octeon/dma-octeon.c               |  4 +--
 arch/mips/include/asm/device.h                     |  2 +-
 arch/mips/include/asm/dma-mapping.h                |  4 +--
 .../include/asm/mach-cavium-octeon/dma-coherence.h |  2 +-
 arch/mips/include/asm/netlogic/common.h            |  2 +-
 arch/mips/loongson64/common/dma-swiotlb.c          |  2 +-
 arch/mips/mm/dma-default.c                         |  4 +--
 arch/mips/netlogic/common/nlm-dma.c                |  2 +-
 arch/mn10300/include/asm/dma-mapping.h             |  4 +--
 arch/mn10300/mm/dma-alloc.c                        |  2 +-
 arch/nios2/include/asm/dma-mapping.h               |  4 +--
 arch/nios2/mm/dma-mapping.c                        |  2 +-
 arch/openrisc/include/asm/dma-mapping.h            |  4 +--
 arch/openrisc/kernel/dma.c                         |  2 +-
 arch/parisc/include/asm/dma-mapping.h              |  8 ++---
 arch/parisc/kernel/drivers.c                       |  2 +-
 arch/parisc/kernel/pci-dma.c                       |  4 +--
 arch/powerpc/include/asm/device.h                  |  2 +-
 arch/powerpc/include/asm/dma-mapping.h             |  6 ++--
 arch/powerpc/include/asm/pci.h                     |  4 +--
 arch/powerpc/include/asm/swiotlb.h                 |  2 +-
 arch/powerpc/kernel/dma-swiotlb.c                  |  2 +-
 arch/powerpc/kernel/dma.c                          |  6 ++--
 arch/powerpc/kernel/ibmebus.c                      |  2 +-
 arch/powerpc/kernel/pci-common.c                   |  6 ++--
 arch/powerpc/kernel/vio.c                          |  2 +-
 arch/powerpc/platforms/cell/iommu.c                |  4 +--
 arch/powerpc/platforms/powernv/npu-dma.c           |  2 +-
 arch/powerpc/platforms/ps3/system-bus.c            |  4 +--
 arch/s390/include/asm/device.h                     |  2 +-
 arch/s390/include/asm/dma-mapping.h                |  4 +--
 arch/s390/pci/pci_dma.c                            |  2 +-
 arch/sh/include/asm/dma-mapping.h                  |  4 +--
 arch/sh/kernel/dma-nommu.c                         |  2 +-
 arch/sh/mm/consistent.c                            |  2 +-
 arch/sparc/include/asm/dma-mapping.h               |  8 ++---
 arch/sparc/kernel/iommu.c                          |  4 +--
 arch/sparc/kernel/ioport.c                         |  8 ++---
 arch/sparc/kernel/pci_sun4v.c                      |  2 +-
 arch/tile/include/asm/device.h                     |  2 +-
 arch/tile/include/asm/dma-mapping.h                | 12 +++----
 arch/tile/kernel/pci-dma.c                         | 24 ++++++-------
 arch/unicore32/include/asm/dma-mapping.h           |  4 +--
 arch/unicore32/mm/dma-swiotlb.c                    |  2 +-
 arch/x86/include/asm/device.h                      |  4 +--
 arch/x86/include/asm/dma-mapping.h                 |  4 +--
 arch/x86/include/asm/iommu.h                       |  2 +-
 arch/x86/kernel/amd_gart_64.c                      |  2 +-
 arch/x86/kernel/pci-calgary_64.c                   |  2 +-
 arch/x86/kernel/pci-dma.c                          |  4 +--
 arch/x86/kernel/pci-nommu.c                        |  2 +-
 arch/x86/kernel/pci-swiotlb.c                      |  2 +-
 arch/x86/pci/sta2x11-fixup.c                       |  2 +-
 arch/x86/xen/pci-swiotlb-xen.c                     |  2 +-
 arch/xtensa/include/asm/device.h                   |  2 +-
 arch/xtensa/include/asm/dma-mapping.h              |  4 +--
 arch/xtensa/kernel/pci-dma.c                       |  2 +-
 drivers/iommu/amd_iommu.c                          |  4 +--
 drivers/misc/mic/bus/mic_bus.c                     |  2 +-
 drivers/misc/mic/bus/scif_bus.c                    |  2 +-
 drivers/misc/mic/bus/scif_bus.h                    |  2 +-
 drivers/misc/mic/host/mic_boot.c                   |  4 +--
 drivers/parisc/ccio-dma.c                          |  2 +-
 drivers/parisc/sba_iommu.c                         |  2 +-
 drivers/pci/host/vmd.c                             |  2 +-
 include/linux/dma-mapping.h                        | 42 +++++++++++-----------
 include/linux/mic_bus.h                            |  2 +-
 lib/dma-noop.c                                     |  2 +-
 110 files changed, 224 insertions(+), 224 deletions(-)

diff --git a/arch/alpha/include/asm/dma-mapping.h b/arch/alpha/include/asm/dma-mapping.h
index c63b6ac19ee5..d3480562411d 100644
--- a/arch/alpha/include/asm/dma-mapping.h
+++ b/arch/alpha/include/asm/dma-mapping.h
@@ -1,9 +1,9 @@
 #ifndef _ALPHA_DMA_MAPPING_H
 #define _ALPHA_DMA_MAPPING_H
 
-extern struct dma_map_ops *dma_ops;
+extern const struct dma_map_ops *dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return dma_ops;
 }
diff --git a/arch/alpha/kernel/pci-noop.c b/arch/alpha/kernel/pci-noop.c
index bb152e21e5ae..ffbdb3fb672f 100644
--- a/arch/alpha/kernel/pci-noop.c
+++ b/arch/alpha/kernel/pci-noop.c
@@ -128,7 +128,7 @@ static int alpha_noop_supported(struct device *dev, u64 mask)
 	return mask < 0x00ffffffUL ? 0 : 1;
 }
 
-struct dma_map_ops alpha_noop_ops = {
+const struct dma_map_ops alpha_noop_ops = {
 	.alloc			= alpha_noop_alloc_coherent,
 	.free			= dma_noop_free_coherent,
 	.map_page		= dma_noop_map_page,
@@ -137,5 +137,5 @@ struct dma_map_ops alpha_noop_ops = {
 	.dma_supported		= alpha_noop_supported,
 };
 
-struct dma_map_ops *dma_ops = &alpha_noop_ops;
+const struct dma_map_ops *dma_ops = &alpha_noop_ops;
 EXPORT_SYMBOL(dma_ops);
diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index 451fc9cdd323..7fd2329038a3 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -939,7 +939,7 @@ static int alpha_pci_mapping_error(struct device *dev, dma_addr_t dma_addr)
 	return dma_addr == 0;
 }
 
-struct dma_map_ops alpha_pci_ops = {
+const struct dma_map_ops alpha_pci_ops = {
 	.alloc			= alpha_pci_alloc_coherent,
 	.free			= alpha_pci_free_coherent,
 	.map_page		= alpha_pci_map_page,
@@ -950,5 +950,5 @@ struct dma_map_ops alpha_pci_ops = {
 	.dma_supported		= alpha_pci_supported,
 };
 
-struct dma_map_ops *dma_ops = &alpha_pci_ops;
+const struct dma_map_ops *dma_ops = &alpha_pci_ops;
 EXPORT_SYMBOL(dma_ops);
diff --git a/arch/arc/include/asm/dma-mapping.h b/arch/arc/include/asm/dma-mapping.h
index 266f11c9bd59..fdff3aa60052 100644
--- a/arch/arc/include/asm/dma-mapping.h
+++ b/arch/arc/include/asm/dma-mapping.h
@@ -18,9 +18,9 @@
 #include <plat/dma.h>
 #endif
 
-extern struct dma_map_ops arc_dma_ops;
+extern const struct dma_map_ops arc_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &arc_dma_ops;
 }
diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
index cd8aad8226dd..b104b7b822dc 100644
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -215,7 +215,7 @@ static int arc_dma_supported(struct device *dev, u64 dma_mask)
 	return dma_mask == DMA_BIT_MASK(32);
 }
 
-struct dma_map_ops arc_dma_ops = {
+const struct dma_map_ops arc_dma_ops = {
 	.alloc			= arc_dma_alloc,
 	.free			= arc_dma_free,
 	.mmap			= arc_dma_mmap,
diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c
index 301281645d08..d4c0af0dc235 100644
--- a/arch/arm/common/dmabounce.c
+++ b/arch/arm/common/dmabounce.c
@@ -448,7 +448,7 @@ static int dmabounce_set_mask(struct device *dev, u64 dma_mask)
 	return arm_dma_ops.set_dma_mask(dev, dma_mask);
 }
 
-static struct dma_map_ops dmabounce_ops = {
+static const struct dma_map_ops dmabounce_ops = {
 	.alloc			= arm_dma_alloc,
 	.free			= arm_dma_free,
 	.mmap			= arm_dma_mmap,
diff --git a/arch/arm/include/asm/device.h b/arch/arm/include/asm/device.h
index 4111592f0130..d8a572f9c187 100644
--- a/arch/arm/include/asm/device.h
+++ b/arch/arm/include/asm/device.h
@@ -7,7 +7,7 @@
 #define ASMARM_DEVICE_H
 
 struct dev_archdata {
-	struct dma_map_ops	*dma_ops;
+	const struct dma_map_ops	*dma_ops;
 #ifdef CONFIG_DMABOUNCE
 	struct dmabounce_device_info *dmabounce;
 #endif
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index bf02dbd9ccda..1aabd781306f 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -13,17 +13,17 @@
 #include <asm/xen/hypervisor.h>
 
 #define DMA_ERROR_CODE	(~(dma_addr_t)0x0)
-extern struct dma_map_ops arm_dma_ops;
-extern struct dma_map_ops arm_coherent_dma_ops;
+extern const struct dma_map_ops arm_dma_ops;
+extern const struct dma_map_ops arm_coherent_dma_ops;
 
-static inline struct dma_map_ops *__generic_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *__generic_dma_ops(struct device *dev)
 {
 	if (dev && dev->archdata.dma_ops)
 		return dev->archdata.dma_ops;
 	return &arm_dma_ops;
 }
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	if (xen_initial_domain())
 		return xen_dma_ops;
@@ -31,7 +31,7 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 		return __generic_dma_ops(dev);
 }
 
-static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
+static inline void set_dma_ops(struct device *dev, const struct dma_map_ops *ops)
 {
 	BUG_ON(!dev);
 	dev->archdata.dma_ops = ops;
diff --git a/arch/arm/include/asm/xen/hypervisor.h b/arch/arm/include/asm/xen/hypervisor.h
index 95251512e2c4..44b587b49904 100644
--- a/arch/arm/include/asm/xen/hypervisor.h
+++ b/arch/arm/include/asm/xen/hypervisor.h
@@ -18,7 +18,7 @@ static inline enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
 	return PARAVIRT_LAZY_NONE;
 }
 
-extern struct dma_map_ops *xen_dma_ops;
+extern const struct dma_map_ops *xen_dma_ops;
 
 #ifdef CONFIG_XEN
 void __init xen_early_init(void);
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index ab7710002ba6..d26fe1a35687 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -180,7 +180,7 @@ static void arm_dma_sync_single_for_device(struct device *dev,
 	__dma_page_cpu_to_dev(page, offset, size, dir);
 }
 
-struct dma_map_ops arm_dma_ops = {
+const struct dma_map_ops arm_dma_ops = {
 	.alloc			= arm_dma_alloc,
 	.free			= arm_dma_free,
 	.mmap			= arm_dma_mmap,
@@ -204,7 +204,7 @@ static int arm_coherent_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		 unsigned long attrs);
 
-struct dma_map_ops arm_coherent_dma_ops = {
+const struct dma_map_ops arm_coherent_dma_ops = {
 	.alloc			= arm_coherent_dma_alloc,
 	.free			= arm_coherent_dma_free,
 	.mmap			= arm_coherent_dma_mmap,
@@ -1067,7 +1067,7 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
 int arm_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	struct scatterlist *s;
 	int i, j;
 
@@ -1101,7 +1101,7 @@ int arm_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 void arm_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	struct scatterlist *s;
 
 	int i;
@@ -1120,7 +1120,7 @@ void arm_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
 void arm_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
 			int nents, enum dma_data_direction dir)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	struct scatterlist *s;
 	int i;
 
@@ -1139,7 +1139,7 @@ void arm_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
 void arm_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
 			int nents, enum dma_data_direction dir)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	struct scatterlist *s;
 	int i;
 
@@ -2099,7 +2099,7 @@ static void arm_iommu_sync_single_for_device(struct device *dev,
 	__dma_page_cpu_to_dev(page, offset, size, dir);
 }
 
-struct dma_map_ops iommu_ops = {
+const struct dma_map_ops iommu_ops = {
 	.alloc		= arm_iommu_alloc_attrs,
 	.free		= arm_iommu_free_attrs,
 	.mmap		= arm_iommu_mmap_attrs,
@@ -2119,7 +2119,7 @@ struct dma_map_ops iommu_ops = {
 	.unmap_resource		= arm_iommu_unmap_resource,
 };
 
-struct dma_map_ops iommu_coherent_ops = {
+const struct dma_map_ops iommu_coherent_ops = {
 	.alloc		= arm_coherent_iommu_alloc_attrs,
 	.free		= arm_coherent_iommu_free_attrs,
 	.mmap		= arm_coherent_iommu_mmap_attrs,
@@ -2319,7 +2319,7 @@ void arm_iommu_detach_device(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(arm_iommu_detach_device);
 
-static struct dma_map_ops *arm_get_iommu_dma_map_ops(bool coherent)
+static const struct dma_map_ops *arm_get_iommu_dma_map_ops(bool coherent)
 {
 	return coherent ? &iommu_coherent_ops : &iommu_ops;
 }
@@ -2374,7 +2374,7 @@ static void arm_teardown_iommu_dma_ops(struct device *dev) { }
 
 #endif	/* CONFIG_ARM_DMA_USE_IOMMU */
 
-static struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
+static const struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
 {
 	return coherent ? &arm_coherent_dma_ops : &arm_dma_ops;
 }
@@ -2382,7 +2382,7 @@ static struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 			const struct iommu_ops *iommu, bool coherent)
 {
-	struct dma_map_ops *dma_ops;
+	const struct dma_map_ops *dma_ops;
 
 	dev->archdata.dma_coherent = coherent;
 	if (arm_setup_iommu_dma_ops(dev, dma_base, size, iommu))
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index d062f08f5020..a6014a4b2b81 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -182,10 +182,10 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
 }
 EXPORT_SYMBOL_GPL(xen_destroy_contiguous_region);
 
-struct dma_map_ops *xen_dma_ops;
+const struct dma_map_ops *xen_dma_ops;
 EXPORT_SYMBOL(xen_dma_ops);
 
-static struct dma_map_ops xen_swiotlb_dma_ops = {
+static const struct dma_map_ops xen_swiotlb_dma_ops = {
 	.mapping_error = xen_swiotlb_dma_mapping_error,
 	.alloc = xen_swiotlb_alloc_coherent,
 	.free = xen_swiotlb_free_coherent,
diff --git a/arch/arm64/include/asm/device.h b/arch/arm64/include/asm/device.h
index 243ef256b8c9..00c678cc31e1 100644
--- a/arch/arm64/include/asm/device.h
+++ b/arch/arm64/include/asm/device.h
@@ -17,7 +17,7 @@
 #define __ASM_DEVICE_H
 
 struct dev_archdata {
-	struct dma_map_ops *dma_ops;
+	const struct dma_map_ops *dma_ops;
 #ifdef CONFIG_IOMMU_API
 	void *iommu;			/* private IOMMU data */
 #endif
diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h
index ccea82c2b089..1fedb43be712 100644
--- a/arch/arm64/include/asm/dma-mapping.h
+++ b/arch/arm64/include/asm/dma-mapping.h
@@ -25,9 +25,9 @@
 #include <asm/xen/hypervisor.h>
 
 #define DMA_ERROR_CODE	(~(dma_addr_t)0)
-extern struct dma_map_ops dummy_dma_ops;
+extern const struct dma_map_ops dummy_dma_ops;
 
-static inline struct dma_map_ops *__generic_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *__generic_dma_ops(struct device *dev)
 {
 	if (dev && dev->archdata.dma_ops)
 		return dev->archdata.dma_ops;
@@ -39,7 +39,7 @@ static inline struct dma_map_ops *__generic_dma_ops(struct device *dev)
 	return &dummy_dma_ops;
 }
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	if (xen_initial_domain())
 		return xen_dma_ops;
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 3f74d0d98de6..de6ec4e14074 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -352,7 +352,7 @@ static int __swiotlb_dma_supported(struct device *hwdev, u64 mask)
 	return 1;
 }
 
-static struct dma_map_ops swiotlb_dma_ops = {
+static const struct dma_map_ops swiotlb_dma_ops = {
 	.alloc = __dma_alloc,
 	.free = __dma_free,
 	.mmap = __swiotlb_mmap,
@@ -505,7 +505,7 @@ static int __dummy_dma_supported(struct device *hwdev, u64 mask)
 	return 0;
 }
 
-struct dma_map_ops dummy_dma_ops = {
+const struct dma_map_ops dummy_dma_ops = {
 	.alloc                  = __dummy_alloc,
 	.free                   = __dummy_free,
 	.mmap                   = __dummy_mmap,
@@ -783,7 +783,7 @@ static void __iommu_unmap_sg_attrs(struct device *dev,
 	iommu_dma_unmap_sg(dev, sgl, nelems, dir, attrs);
 }
 
-static struct dma_map_ops iommu_dma_ops = {
+static const struct dma_map_ops iommu_dma_ops = {
 	.alloc = __iommu_alloc_attrs,
 	.free = __iommu_free_attrs,
 	.mmap = __iommu_mmap_attrs,
diff --git a/arch/avr32/include/asm/dma-mapping.h b/arch/avr32/include/asm/dma-mapping.h
index 1115f2a645d1..b2b43c0e0774 100644
--- a/arch/avr32/include/asm/dma-mapping.h
+++ b/arch/avr32/include/asm/dma-mapping.h
@@ -4,9 +4,9 @@
 extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 	int direction);
 
-extern struct dma_map_ops avr32_dma_ops;
+extern const struct dma_map_ops avr32_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &avr32_dma_ops;
 }
diff --git a/arch/avr32/mm/dma-coherent.c b/arch/avr32/mm/dma-coherent.c
index 58610d0df7ed..599a32c6f88d 100644
--- a/arch/avr32/mm/dma-coherent.c
+++ b/arch/avr32/mm/dma-coherent.c
@@ -186,7 +186,7 @@ static void avr32_dma_sync_sg_for_device(struct device *dev,
 		dma_cache_sync(dev, sg_virt(sg), sg->length, direction);
 }
 
-struct dma_map_ops avr32_dma_ops = {
+const struct dma_map_ops avr32_dma_ops = {
 	.alloc			= avr32_dma_alloc,
 	.free			= avr32_dma_free,
 	.map_page		= avr32_dma_map_page,
diff --git a/arch/blackfin/include/asm/dma-mapping.h b/arch/blackfin/include/asm/dma-mapping.h
index 3490570aaa82..320fb50fbd41 100644
--- a/arch/blackfin/include/asm/dma-mapping.h
+++ b/arch/blackfin/include/asm/dma-mapping.h
@@ -36,9 +36,9 @@ _dma_sync(dma_addr_t addr, size_t size, enum dma_data_direction dir)
 		__dma_sync(addr, size, dir);
 }
 
-extern struct dma_map_ops bfin_dma_ops;
+extern const struct dma_map_ops bfin_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &bfin_dma_ops;
 }
diff --git a/arch/blackfin/kernel/dma-mapping.c b/arch/blackfin/kernel/dma-mapping.c
index 53fbbb61aa86..444bb5a4d7e6 100644
--- a/arch/blackfin/kernel/dma-mapping.c
+++ b/arch/blackfin/kernel/dma-mapping.c
@@ -153,7 +153,7 @@ static inline void bfin_dma_sync_single_for_device(struct device *dev,
 	_dma_sync(handle, size, dir);
 }
 
-struct dma_map_ops bfin_dma_ops = {
+const struct dma_map_ops bfin_dma_ops = {
 	.alloc			= bfin_dma_alloc,
 	.free			= bfin_dma_free,
 
diff --git a/arch/c6x/include/asm/dma-mapping.h b/arch/c6x/include/asm/dma-mapping.h
index 5717b1e52d96..88258b9ebc8e 100644
--- a/arch/c6x/include/asm/dma-mapping.h
+++ b/arch/c6x/include/asm/dma-mapping.h
@@ -17,9 +17,9 @@
  */
 #define DMA_ERROR_CODE ~0
 
-extern struct dma_map_ops c6x_dma_ops;
+extern const struct dma_map_ops c6x_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &c6x_dma_ops;
 }
diff --git a/arch/c6x/kernel/dma.c b/arch/c6x/kernel/dma.c
index db4a6a301f5e..68d99ec7e85a 100644
--- a/arch/c6x/kernel/dma.c
+++ b/arch/c6x/kernel/dma.c
@@ -117,7 +117,7 @@ static void c6x_dma_sync_sg_for_device(struct device *dev,
 
 }
 
-struct dma_map_ops c6x_dma_ops = {
+const struct dma_map_ops c6x_dma_ops = {
 	.alloc			= c6x_dma_alloc,
 	.free			= c6x_dma_free,
 	.map_page		= c6x_dma_map_page,
diff --git a/arch/cris/arch-v32/drivers/pci/dma.c b/arch/cris/arch-v32/drivers/pci/dma.c
index 1f0636793f0c..7072341995ff 100644
--- a/arch/cris/arch-v32/drivers/pci/dma.c
+++ b/arch/cris/arch-v32/drivers/pci/dma.c
@@ -69,7 +69,7 @@ static inline int v32_dma_supported(struct device *dev, u64 mask)
 	return 1;
 }
 
-struct dma_map_ops v32_dma_ops = {
+const struct dma_map_ops v32_dma_ops = {
 	.alloc			= v32_dma_alloc,
 	.free			= v32_dma_free,
 	.map_page		= v32_dma_map_page,
diff --git a/arch/cris/include/asm/dma-mapping.h b/arch/cris/include/asm/dma-mapping.h
index 5a370178a0e9..aae4fbc0a656 100644
--- a/arch/cris/include/asm/dma-mapping.h
+++ b/arch/cris/include/asm/dma-mapping.h
@@ -2,14 +2,14 @@
 #define _ASM_CRIS_DMA_MAPPING_H
 
 #ifdef CONFIG_PCI
-extern struct dma_map_ops v32_dma_ops;
+extern const struct dma_map_ops v32_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &v32_dma_ops;
 }
 #else
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	BUG();
 	return NULL;
diff --git a/arch/frv/include/asm/dma-mapping.h b/arch/frv/include/asm/dma-mapping.h
index 9a82bfa4303b..150cc00544a8 100644
--- a/arch/frv/include/asm/dma-mapping.h
+++ b/arch/frv/include/asm/dma-mapping.h
@@ -7,9 +7,9 @@
 extern unsigned long __nongprelbss dma_coherent_mem_start;
 extern unsigned long __nongprelbss dma_coherent_mem_end;
 
-extern struct dma_map_ops frv_dma_ops;
+extern const struct dma_map_ops frv_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &frv_dma_ops;
 }
diff --git a/arch/frv/mb93090-mb00/pci-dma-nommu.c b/arch/frv/mb93090-mb00/pci-dma-nommu.c
index 90f2e4cb33d6..b41083083b2f 100644
--- a/arch/frv/mb93090-mb00/pci-dma-nommu.c
+++ b/arch/frv/mb93090-mb00/pci-dma-nommu.c
@@ -158,7 +158,7 @@ static int frv_dma_supported(struct device *dev, u64 mask)
 	return 1;
 }
 
-struct dma_map_ops frv_dma_ops = {
+const struct dma_map_ops frv_dma_ops = {
 	.alloc			= frv_dma_alloc,
 	.free			= frv_dma_free,
 	.map_page		= frv_dma_map_page,
diff --git a/arch/frv/mb93090-mb00/pci-dma.c b/arch/frv/mb93090-mb00/pci-dma.c
index f585745b1abc..5ea67c3b9a3e 100644
--- a/arch/frv/mb93090-mb00/pci-dma.c
+++ b/arch/frv/mb93090-mb00/pci-dma.c
@@ -101,7 +101,7 @@ static int frv_dma_supported(struct device *dev, u64 mask)
 	return 1;
 }
 
-struct dma_map_ops frv_dma_ops = {
+const struct dma_map_ops frv_dma_ops = {
 	.alloc			= frv_dma_alloc,
 	.free			= frv_dma_free,
 	.map_page		= frv_dma_map_page,
diff --git a/arch/h8300/include/asm/dma-mapping.h b/arch/h8300/include/asm/dma-mapping.h
index 7ac7fadffed0..f804bca4c13f 100644
--- a/arch/h8300/include/asm/dma-mapping.h
+++ b/arch/h8300/include/asm/dma-mapping.h
@@ -1,9 +1,9 @@
 #ifndef _H8300_DMA_MAPPING_H
 #define _H8300_DMA_MAPPING_H
 
-extern struct dma_map_ops h8300_dma_map_ops;
+extern const struct dma_map_ops h8300_dma_map_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &h8300_dma_map_ops;
 }
diff --git a/arch/h8300/kernel/dma.c b/arch/h8300/kernel/dma.c
index 3651da045806..225dd0a188dc 100644
--- a/arch/h8300/kernel/dma.c
+++ b/arch/h8300/kernel/dma.c
@@ -60,7 +60,7 @@ static int map_sg(struct device *dev, struct scatterlist *sgl,
 	return nents;
 }
 
-struct dma_map_ops h8300_dma_map_ops = {
+const struct dma_map_ops h8300_dma_map_ops = {
 	.alloc = dma_alloc,
 	.free = dma_free,
 	.map_page = map_page,
diff --git a/arch/hexagon/include/asm/dma-mapping.h b/arch/hexagon/include/asm/dma-mapping.h
index 7ef58df909fc..b812e917cd95 100644
--- a/arch/hexagon/include/asm/dma-mapping.h
+++ b/arch/hexagon/include/asm/dma-mapping.h
@@ -32,9 +32,9 @@ struct device;
 extern int bad_dma_address;
 #define DMA_ERROR_CODE bad_dma_address
 
-extern struct dma_map_ops *dma_ops;
+extern const struct dma_map_ops *dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	if (unlikely(dev == NULL))
 		return NULL;
diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma.c
index b9017785fb71..16b91754df01 100644
--- a/arch/hexagon/kernel/dma.c
+++ b/arch/hexagon/kernel/dma.c
@@ -25,7 +25,7 @@
 #include <linux/module.h>
 #include <asm/page.h>
 
-struct dma_map_ops *dma_ops;
+const struct dma_map_ops *dma_ops;
 EXPORT_SYMBOL(dma_ops);
 
 int bad_dma_address;  /*  globals are automatically initialized to zero  */
@@ -199,7 +199,7 @@ static void hexagon_sync_single_for_device(struct device *dev,
 	dma_sync(dma_addr_to_virt(dma_handle), size, dir);
 }
 
-struct dma_map_ops hexagon_dma_ops = {
+const struct dma_map_ops hexagon_dma_ops = {
 	.alloc		= hexagon_dma_alloc_coherent,
 	.free		= hexagon_free_coherent,
 	.map_sg		= hexagon_map_sg,
diff --git a/arch/ia64/hp/common/hwsw_iommu.c b/arch/ia64/hp/common/hwsw_iommu.c
index 1e4cae5ae053..0310078a95f8 100644
--- a/arch/ia64/hp/common/hwsw_iommu.c
+++ b/arch/ia64/hp/common/hwsw_iommu.c
@@ -18,7 +18,7 @@
 #include <linux/export.h>
 #include <asm/machvec.h>
 
-extern struct dma_map_ops sba_dma_ops, swiotlb_dma_ops;
+extern const struct dma_map_ops sba_dma_ops, swiotlb_dma_ops;
 
 /* swiotlb declarations & definitions: */
 extern int swiotlb_late_init_with_default_size (size_t size);
@@ -34,7 +34,7 @@ static inline int use_swiotlb(struct device *dev)
 		!sba_dma_ops.dma_supported(dev, *dev->dma_mask);
 }
 
-struct dma_map_ops *hwsw_dma_get_ops(struct device *dev)
+const struct dma_map_ops *hwsw_dma_get_ops(struct device *dev)
 {
 	if (use_swiotlb(dev))
 		return &swiotlb_dma_ops;
diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index 630ee8073899..aec4a3354abe 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -2096,7 +2096,7 @@ static int __init acpi_sba_ioc_init_acpi(void)
 /* This has to run before acpi_scan_init(). */
 arch_initcall(acpi_sba_ioc_init_acpi);
 
-extern struct dma_map_ops swiotlb_dma_ops;
+extern const struct dma_map_ops swiotlb_dma_ops;
 
 static int __init
 sba_init(void)
@@ -2216,7 +2216,7 @@ sba_page_override(char *str)
 
 __setup("sbapagesize=",sba_page_override);
 
-struct dma_map_ops sba_dma_ops = {
+const struct dma_map_ops sba_dma_ops = {
 	.alloc			= sba_alloc_coherent,
 	.free			= sba_free_coherent,
 	.map_page		= sba_map_page,
diff --git a/arch/ia64/include/asm/dma-mapping.h b/arch/ia64/include/asm/dma-mapping.h
index d472805edfa9..05e467d56d86 100644
--- a/arch/ia64/include/asm/dma-mapping.h
+++ b/arch/ia64/include/asm/dma-mapping.h
@@ -14,7 +14,7 @@
 
 #define DMA_ERROR_CODE 0
 
-extern struct dma_map_ops *dma_ops;
+extern const struct dma_map_ops *dma_ops;
 extern struct ia64_machine_vector ia64_mv;
 extern void set_iommu_machvec(void);
 
diff --git a/arch/ia64/include/asm/machvec.h b/arch/ia64/include/asm/machvec.h
index ed7f09089f12..af285c423e1e 100644
--- a/arch/ia64/include/asm/machvec.h
+++ b/arch/ia64/include/asm/machvec.h
@@ -44,7 +44,7 @@ typedef void ia64_mv_kernel_launch_event_t(void);
 /* DMA-mapping interface: */
 typedef void ia64_mv_dma_init (void);
 typedef u64 ia64_mv_dma_get_required_mask (struct device *);
-typedef struct dma_map_ops *ia64_mv_dma_get_ops(struct device *);
+typedef const struct dma_map_ops *ia64_mv_dma_get_ops(struct device *);
 
 /*
  * WARNING: The legacy I/O space is _architected_.  Platforms are
@@ -248,7 +248,7 @@ extern void machvec_init_from_cmdline(const char *cmdline);
 # endif /* CONFIG_IA64_GENERIC */
 
 extern void swiotlb_dma_init(void);
-extern struct dma_map_ops *dma_get_ops(struct device *);
+extern const struct dma_map_ops *dma_get_ops(struct device *);
 
 /*
  * Define default versions so we can extend machvec for new platforms without having
diff --git a/arch/ia64/kernel/dma-mapping.c b/arch/ia64/kernel/dma-mapping.c
index 7f7916238208..e0dd97f4eb69 100644
--- a/arch/ia64/kernel/dma-mapping.c
+++ b/arch/ia64/kernel/dma-mapping.c
@@ -4,7 +4,7 @@
 /* Set this to 1 if there is a HW IOMMU in the system */
 int iommu_detected __read_mostly;
 
-struct dma_map_ops *dma_ops;
+const struct dma_map_ops *dma_ops;
 EXPORT_SYMBOL(dma_ops);
 
 #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
@@ -17,7 +17,7 @@ static int __init dma_init(void)
 }
 fs_initcall(dma_init);
 
-struct dma_map_ops *dma_get_ops(struct device *dev)
+const struct dma_map_ops *dma_get_ops(struct device *dev)
 {
 	return dma_ops;
 }
diff --git a/arch/ia64/kernel/pci-dma.c b/arch/ia64/kernel/pci-dma.c
index 992c1098c522..9094a73f996f 100644
--- a/arch/ia64/kernel/pci-dma.c
+++ b/arch/ia64/kernel/pci-dma.c
@@ -90,11 +90,11 @@ void __init pci_iommu_alloc(void)
 {
 	dma_ops = &intel_dma_ops;
 
-	dma_ops->sync_single_for_cpu = machvec_dma_sync_single;
-	dma_ops->sync_sg_for_cpu = machvec_dma_sync_sg;
-	dma_ops->sync_single_for_device = machvec_dma_sync_single;
-	dma_ops->sync_sg_for_device = machvec_dma_sync_sg;
-	dma_ops->dma_supported = iommu_dma_supported;
+	intel_dma_ops.sync_single_for_cpu = machvec_dma_sync_single;
+	intel_dma_ops.sync_sg_for_cpu = machvec_dma_sync_sg;
+	intel_dma_ops.sync_single_for_device = machvec_dma_sync_single;
+	intel_dma_ops.sync_sg_for_device = machvec_dma_sync_sg;
+	intel_dma_ops.dma_supported = iommu_dma_supported;
 
 	/*
 	 * The order of these functions is important for
diff --git a/arch/ia64/kernel/pci-swiotlb.c b/arch/ia64/kernel/pci-swiotlb.c
index 2933208c0285..a14989dacded 100644
--- a/arch/ia64/kernel/pci-swiotlb.c
+++ b/arch/ia64/kernel/pci-swiotlb.c
@@ -30,7 +30,7 @@ static void ia64_swiotlb_free_coherent(struct device *dev, size_t size,
 	swiotlb_free_coherent(dev, size, vaddr, dma_addr);
 }
 
-struct dma_map_ops swiotlb_dma_ops = {
+const struct dma_map_ops swiotlb_dma_ops = {
 	.alloc = ia64_swiotlb_alloc_coherent,
 	.free = ia64_swiotlb_free_coherent,
 	.map_page = swiotlb_map_page,
diff --git a/arch/m68k/include/asm/dma-mapping.h b/arch/m68k/include/asm/dma-mapping.h
index 96c536194287..863509939d5a 100644
--- a/arch/m68k/include/asm/dma-mapping.h
+++ b/arch/m68k/include/asm/dma-mapping.h
@@ -1,9 +1,9 @@
 #ifndef _M68K_DMA_MAPPING_H
 #define _M68K_DMA_MAPPING_H
 
-extern struct dma_map_ops m68k_dma_ops;
+extern const struct dma_map_ops m68k_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
         return &m68k_dma_ops;
 }
diff --git a/arch/m68k/kernel/dma.c b/arch/m68k/kernel/dma.c
index 8cf97cbadc91..dce8e18daee6 100644
--- a/arch/m68k/kernel/dma.c
+++ b/arch/m68k/kernel/dma.c
@@ -152,7 +152,7 @@ static int m68k_dma_map_sg(struct device *dev, struct scatterlist *sglist,
 	return nents;
 }
 
-struct dma_map_ops m68k_dma_ops = {
+const struct dma_map_ops m68k_dma_ops = {
 	.alloc			= m68k_dma_alloc,
 	.free			= m68k_dma_free,
 	.map_page		= m68k_dma_map_page,
diff --git a/arch/metag/include/asm/dma-mapping.h b/arch/metag/include/asm/dma-mapping.h
index 27af5d479ce6..c156a7ac732f 100644
--- a/arch/metag/include/asm/dma-mapping.h
+++ b/arch/metag/include/asm/dma-mapping.h
@@ -1,9 +1,9 @@
 #ifndef _ASM_METAG_DMA_MAPPING_H
 #define _ASM_METAG_DMA_MAPPING_H
 
-extern struct dma_map_ops metag_dma_ops;
+extern const struct dma_map_ops metag_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &metag_dma_ops;
 }
diff --git a/arch/metag/kernel/dma.c b/arch/metag/kernel/dma.c
index 0db31e24c541..acbed1ddec4b 100644
--- a/arch/metag/kernel/dma.c
+++ b/arch/metag/kernel/dma.c
@@ -565,7 +565,7 @@ static void metag_dma_sync_sg_for_device(struct device *dev,
 		dma_sync_for_device(sg_virt(sg), sg->length, direction);
 }
 
-struct dma_map_ops metag_dma_ops = {
+const struct dma_map_ops metag_dma_ops = {
 	.alloc			= metag_dma_alloc,
 	.free			= metag_dma_free,
 	.map_page		= metag_dma_map_page,
diff --git a/arch/microblaze/include/asm/dma-mapping.h b/arch/microblaze/include/asm/dma-mapping.h
index 1768d4bdc8d3..c7faf2fb51d6 100644
--- a/arch/microblaze/include/asm/dma-mapping.h
+++ b/arch/microblaze/include/asm/dma-mapping.h
@@ -36,9 +36,9 @@
 /*
  * Available generic sets of operations
  */
-extern struct dma_map_ops dma_direct_ops;
+extern const struct dma_map_ops dma_direct_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &dma_direct_ops;
 }
diff --git a/arch/microblaze/kernel/dma.c b/arch/microblaze/kernel/dma.c
index ec04dc1e2527..3a7cf42378ff 100644
--- a/arch/microblaze/kernel/dma.c
+++ b/arch/microblaze/kernel/dma.c
@@ -181,7 +181,7 @@ int dma_direct_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
 #endif
 }
 
-struct dma_map_ops dma_direct_ops = {
+const struct dma_map_ops dma_direct_ops = {
 	.alloc		= dma_direct_alloc_coherent,
 	.free		= dma_direct_free_coherent,
 	.mmap		= dma_direct_mmap_coherent,
diff --git a/arch/mips/cavium-octeon/dma-octeon.c b/arch/mips/cavium-octeon/dma-octeon.c
index fd69528b24fb..897d32c888ee 100644
--- a/arch/mips/cavium-octeon/dma-octeon.c
+++ b/arch/mips/cavium-octeon/dma-octeon.c
@@ -205,7 +205,7 @@ static phys_addr_t octeon_unity_dma_to_phys(struct device *dev, dma_addr_t daddr
 }
 
 struct octeon_dma_map_ops {
-	struct dma_map_ops dma_map_ops;
+	const struct dma_map_ops dma_map_ops;
 	dma_addr_t (*phys_to_dma)(struct device *dev, phys_addr_t paddr);
 	phys_addr_t (*dma_to_phys)(struct device *dev, dma_addr_t daddr);
 };
@@ -333,7 +333,7 @@ static struct octeon_dma_map_ops _octeon_pci_dma_map_ops = {
 	},
 };
 
-struct dma_map_ops *octeon_pci_dma_map_ops;
+const struct dma_map_ops *octeon_pci_dma_map_ops;
 
 void __init octeon_pci_dma_init(void)
 {
diff --git a/arch/mips/include/asm/device.h b/arch/mips/include/asm/device.h
index 21c2082a0dfb..ebc5c1265473 100644
--- a/arch/mips/include/asm/device.h
+++ b/arch/mips/include/asm/device.h
@@ -10,7 +10,7 @@ struct dma_map_ops;
 
 struct dev_archdata {
 	/* DMA operations on that device */
-	struct dma_map_ops *dma_ops;
+	const struct dma_map_ops *dma_ops;
 
 #ifdef CONFIG_DMA_PERDEV_COHERENT
 	/* Non-zero if DMA is coherent with CPU caches */
diff --git a/arch/mips/include/asm/dma-mapping.h b/arch/mips/include/asm/dma-mapping.h
index 7aa71b9b0258..b59b084a7569 100644
--- a/arch/mips/include/asm/dma-mapping.h
+++ b/arch/mips/include/asm/dma-mapping.h
@@ -9,9 +9,9 @@
 #include <dma-coherence.h>
 #endif
 
-extern struct dma_map_ops *mips_dma_map_ops;
+extern const struct dma_map_ops *mips_dma_map_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	if (dev && dev->archdata.dma_ops)
 		return dev->archdata.dma_ops;
diff --git a/arch/mips/include/asm/mach-cavium-octeon/dma-coherence.h b/arch/mips/include/asm/mach-cavium-octeon/dma-coherence.h
index 460042ee5d6f..9110988b92a1 100644
--- a/arch/mips/include/asm/mach-cavium-octeon/dma-coherence.h
+++ b/arch/mips/include/asm/mach-cavium-octeon/dma-coherence.h
@@ -65,7 +65,7 @@ dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr);
 phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr);
 
 struct dma_map_ops;
-extern struct dma_map_ops *octeon_pci_dma_map_ops;
+extern const struct dma_map_ops *octeon_pci_dma_map_ops;
 extern char *octeon_swiotlb;
 
 #endif /* __ASM_MACH_CAVIUM_OCTEON_DMA_COHERENCE_H */
diff --git a/arch/mips/include/asm/netlogic/common.h b/arch/mips/include/asm/netlogic/common.h
index be52c2125d71..e0717d10e650 100644
--- a/arch/mips/include/asm/netlogic/common.h
+++ b/arch/mips/include/asm/netlogic/common.h
@@ -88,7 +88,7 @@ extern struct plat_smp_ops nlm_smp_ops;
 extern char nlm_reset_entry[], nlm_reset_entry_end[];
 
 /* SWIOTLB */
-extern struct dma_map_ops nlm_swiotlb_dma_ops;
+extern const struct dma_map_ops nlm_swiotlb_dma_ops;
 
 extern unsigned int nlm_threads_per_core;
 extern cpumask_t nlm_cpumask;
diff --git a/arch/mips/loongson64/common/dma-swiotlb.c b/arch/mips/loongson64/common/dma-swiotlb.c
index 1a80b6f73ab2..46288c1639b3 100644
--- a/arch/mips/loongson64/common/dma-swiotlb.c
+++ b/arch/mips/loongson64/common/dma-swiotlb.c
@@ -122,7 +122,7 @@ phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
 	return daddr;
 }
 
-static struct dma_map_ops loongson_dma_map_ops = {
+static const struct dma_map_ops loongson_dma_map_ops = {
 	.alloc = loongson_dma_alloc_coherent,
 	.free = loongson_dma_free_coherent,
 	.map_page = loongson_dma_map_page,
diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c
index 46d5696c4f27..4cadbaff1d36 100644
--- a/arch/mips/mm/dma-default.c
+++ b/arch/mips/mm/dma-default.c
@@ -415,7 +415,7 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 
 EXPORT_SYMBOL(dma_cache_sync);
 
-static struct dma_map_ops mips_default_dma_map_ops = {
+static const struct dma_map_ops mips_default_dma_map_ops = {
 	.alloc = mips_dma_alloc_coherent,
 	.free = mips_dma_free_coherent,
 	.mmap = mips_dma_mmap,
@@ -431,7 +431,7 @@ static struct dma_map_ops mips_default_dma_map_ops = {
 	.dma_supported = mips_dma_supported
 };
 
-struct dma_map_ops *mips_dma_map_ops = &mips_default_dma_map_ops;
+const struct dma_map_ops *mips_dma_map_ops = &mips_default_dma_map_ops;
 EXPORT_SYMBOL(mips_dma_map_ops);
 
 #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
diff --git a/arch/mips/netlogic/common/nlm-dma.c b/arch/mips/netlogic/common/nlm-dma.c
index 0630693bec2a..0ec9d9da6d51 100644
--- a/arch/mips/netlogic/common/nlm-dma.c
+++ b/arch/mips/netlogic/common/nlm-dma.c
@@ -67,7 +67,7 @@ static void nlm_dma_free_coherent(struct device *dev, size_t size,
 	swiotlb_free_coherent(dev, size, vaddr, dma_handle);
 }
 
-struct dma_map_ops nlm_swiotlb_dma_ops = {
+const struct dma_map_ops nlm_swiotlb_dma_ops = {
 	.alloc = nlm_dma_alloc_coherent,
 	.free = nlm_dma_free_coherent,
 	.map_page = swiotlb_map_page,
diff --git a/arch/mn10300/include/asm/dma-mapping.h b/arch/mn10300/include/asm/dma-mapping.h
index 1dcd44757f32..564e3927e005 100644
--- a/arch/mn10300/include/asm/dma-mapping.h
+++ b/arch/mn10300/include/asm/dma-mapping.h
@@ -14,9 +14,9 @@
 #include <asm/cache.h>
 #include <asm/io.h>
 
-extern struct dma_map_ops mn10300_dma_ops;
+extern const struct dma_map_ops mn10300_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &mn10300_dma_ops;
 }
diff --git a/arch/mn10300/mm/dma-alloc.c b/arch/mn10300/mm/dma-alloc.c
index 4f4b9029f0ea..86108d2496b3 100644
--- a/arch/mn10300/mm/dma-alloc.c
+++ b/arch/mn10300/mm/dma-alloc.c
@@ -121,7 +121,7 @@ static int mn10300_dma_supported(struct device *dev, u64 mask)
 	return 1;
 }
 
-struct dma_map_ops mn10300_dma_ops = {
+const struct dma_map_ops mn10300_dma_ops = {
 	.alloc			= mn10300_dma_alloc,
 	.free			= mn10300_dma_free,
 	.map_page		= mn10300_dma_map_page,
diff --git a/arch/nios2/include/asm/dma-mapping.h b/arch/nios2/include/asm/dma-mapping.h
index bec8ac8e6ad2..aa00d839a64b 100644
--- a/arch/nios2/include/asm/dma-mapping.h
+++ b/arch/nios2/include/asm/dma-mapping.h
@@ -10,9 +10,9 @@
 #ifndef _ASM_NIOS2_DMA_MAPPING_H
 #define _ASM_NIOS2_DMA_MAPPING_H
 
-extern struct dma_map_ops nios2_dma_ops;
+extern const struct dma_map_ops nios2_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &nios2_dma_ops;
 }
diff --git a/arch/nios2/mm/dma-mapping.c b/arch/nios2/mm/dma-mapping.c
index d800fad87896..e7f1dab3d859 100644
--- a/arch/nios2/mm/dma-mapping.c
+++ b/arch/nios2/mm/dma-mapping.c
@@ -182,7 +182,7 @@ static void nios2_dma_sync_sg_for_device(struct device *dev,
 
 }
 
-struct dma_map_ops nios2_dma_ops = {
+const struct dma_map_ops nios2_dma_ops = {
 	.alloc			= nios2_dma_alloc,
 	.free			= nios2_dma_free,
 	.map_page		= nios2_dma_map_page,
diff --git a/arch/openrisc/include/asm/dma-mapping.h b/arch/openrisc/include/asm/dma-mapping.h
index 1f260bccb368..88acbedb4947 100644
--- a/arch/openrisc/include/asm/dma-mapping.h
+++ b/arch/openrisc/include/asm/dma-mapping.h
@@ -28,9 +28,9 @@
 
 #define DMA_ERROR_CODE		(~(dma_addr_t)0x0)
 
-extern struct dma_map_ops or1k_dma_map_ops;
+extern const struct dma_map_ops or1k_dma_map_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &or1k_dma_map_ops;
 }
diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c
index 140c99140649..80729dd045aa 100644
--- a/arch/openrisc/kernel/dma.c
+++ b/arch/openrisc/kernel/dma.c
@@ -229,7 +229,7 @@ or1k_sync_single_for_device(struct device *dev,
 		mtspr(SPR_DCBFR, cl);
 }
 
-struct dma_map_ops or1k_dma_map_ops = {
+const struct dma_map_ops or1k_dma_map_ops = {
 	.alloc = or1k_dma_alloc,
 	.free = or1k_dma_free,
 	.map_page = or1k_map_page,
diff --git a/arch/parisc/include/asm/dma-mapping.h b/arch/parisc/include/asm/dma-mapping.h
index 16e024602737..1749073e44fc 100644
--- a/arch/parisc/include/asm/dma-mapping.h
+++ b/arch/parisc/include/asm/dma-mapping.h
@@ -21,13 +21,13 @@
 */
 
 #ifdef CONFIG_PA11
-extern struct dma_map_ops pcxl_dma_ops;
-extern struct dma_map_ops pcx_dma_ops;
+extern const struct dma_map_ops pcxl_dma_ops;
+extern const struct dma_map_ops pcx_dma_ops;
 #endif
 
-extern struct dma_map_ops *hppa_dma_ops;
+extern const struct dma_map_ops *hppa_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return hppa_dma_ops;
 }
diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c
index 700e2d2da096..fa78419100c8 100644
--- a/arch/parisc/kernel/drivers.c
+++ b/arch/parisc/kernel/drivers.c
@@ -40,7 +40,7 @@
 #include <asm/parisc-device.h>
 
 /* See comments in include/asm-parisc/pci.h */
-struct dma_map_ops *hppa_dma_ops __read_mostly;
+const struct dma_map_ops *hppa_dma_ops __read_mostly;
 EXPORT_SYMBOL(hppa_dma_ops);
 
 static struct device root = {
diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c
index 494ff6e8c88a..809c4e2c44a0 100644
--- a/arch/parisc/kernel/pci-dma.c
+++ b/arch/parisc/kernel/pci-dma.c
@@ -562,7 +562,7 @@ static void pa11_dma_sync_sg_for_device(struct device *dev, struct scatterlist *
 		flush_kernel_vmap_range(sg_virt(sg), sg->length);
 }
 
-struct dma_map_ops pcxl_dma_ops = {
+const struct dma_map_ops pcxl_dma_ops = {
 	.dma_supported =	pa11_dma_supported,
 	.alloc =		pa11_dma_alloc,
 	.free =			pa11_dma_free,
@@ -598,7 +598,7 @@ static void pcx_dma_free(struct device *dev, size_t size, void *vaddr,
 	return;
 }
 
-struct dma_map_ops pcx_dma_ops = {
+const struct dma_map_ops pcx_dma_ops = {
 	.dma_supported =	pa11_dma_supported,
 	.alloc =		pcx_dma_alloc,
 	.free =			pcx_dma_free,
diff --git a/arch/powerpc/include/asm/device.h b/arch/powerpc/include/asm/device.h
index 406c2b1ff82d..49cbb0fca233 100644
--- a/arch/powerpc/include/asm/device.h
+++ b/arch/powerpc/include/asm/device.h
@@ -21,7 +21,7 @@ struct iommu_table;
  */
 struct dev_archdata {
 	/* DMA operations on that device */
-	struct dma_map_ops	*dma_ops;
+	const struct dma_map_ops	*dma_ops;
 
 	/*
 	 * These two used to be a union. However, with the hybrid ops we need
diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
index 84e3f8dd5e4f..2ec3eadf336f 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+++ b/arch/powerpc/include/asm/dma-mapping.h
@@ -76,9 +76,9 @@ static inline unsigned long device_to_mask(struct device *dev)
 #ifdef CONFIG_PPC64
 extern struct dma_map_ops dma_iommu_ops;
 #endif
-extern struct dma_map_ops dma_direct_ops;
+extern const struct dma_map_ops dma_direct_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	/* We don't handle the NULL dev case for ISA for now. We could
 	 * do it via an out of line call but it is not needed for now. The
@@ -91,7 +91,7 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 	return dev->archdata.dma_ops;
 }
 
-static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
+static inline void set_dma_ops(struct device *dev, const struct dma_map_ops *ops)
 {
 	dev->archdata.dma_ops = ops;
 }
diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h
index e9bd6cf0212f..93eded8d3843 100644
--- a/arch/powerpc/include/asm/pci.h
+++ b/arch/powerpc/include/asm/pci.h
@@ -53,8 +53,8 @@ static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
 }
 
 #ifdef CONFIG_PCI
-extern void set_pci_dma_ops(struct dma_map_ops *dma_ops);
-extern struct dma_map_ops *get_pci_dma_ops(void);
+extern void set_pci_dma_ops(const struct dma_map_ops *dma_ops);
+extern const struct dma_map_ops *get_pci_dma_ops(void);
 #else	/* CONFIG_PCI */
 #define set_pci_dma_ops(d)
 #define get_pci_dma_ops()	NULL
diff --git a/arch/powerpc/include/asm/swiotlb.h b/arch/powerpc/include/asm/swiotlb.h
index de99d6e29430..01d45a5fd00b 100644
--- a/arch/powerpc/include/asm/swiotlb.h
+++ b/arch/powerpc/include/asm/swiotlb.h
@@ -13,7 +13,7 @@
 
 #include <linux/swiotlb.h>
 
-extern struct dma_map_ops swiotlb_dma_ops;
+extern const struct dma_map_ops swiotlb_dma_ops;
 
 static inline void dma_mark_clean(void *addr, size_t size) {}
 
diff --git a/arch/powerpc/kernel/dma-swiotlb.c b/arch/powerpc/kernel/dma-swiotlb.c
index c6689f658b50..d0ea7860e02b 100644
--- a/arch/powerpc/kernel/dma-swiotlb.c
+++ b/arch/powerpc/kernel/dma-swiotlb.c
@@ -46,7 +46,7 @@ static u64 swiotlb_powerpc_get_required(struct device *dev)
  * map_page, and unmap_page on highmem, use normal dma_ops
  * for everything else.
  */
-struct dma_map_ops swiotlb_dma_ops = {
+const struct dma_map_ops swiotlb_dma_ops = {
 	.alloc = __dma_direct_alloc_coherent,
 	.free = __dma_direct_free_coherent,
 	.mmap = dma_direct_mmap_coherent,
diff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/kernel/dma.c
index e64a6016fba7..beb18d7a1999 100644
--- a/arch/powerpc/kernel/dma.c
+++ b/arch/powerpc/kernel/dma.c
@@ -267,7 +267,7 @@ static inline void dma_direct_sync_single(struct device *dev,
 }
 #endif
 
-struct dma_map_ops dma_direct_ops = {
+const struct dma_map_ops dma_direct_ops = {
 	.alloc				= dma_direct_alloc_coherent,
 	.free				= dma_direct_free_coherent,
 	.mmap				= dma_direct_mmap_coherent,
@@ -309,7 +309,7 @@ EXPORT_SYMBOL(dma_set_coherent_mask);
 
 int __dma_set_mask(struct device *dev, u64 dma_mask)
 {
-	struct dma_map_ops *dma_ops = get_dma_ops(dev);
+	const struct dma_map_ops *dma_ops = get_dma_ops(dev);
 
 	if ((dma_ops != NULL) && (dma_ops->set_dma_mask != NULL))
 		return dma_ops->set_dma_mask(dev, dma_mask);
@@ -337,7 +337,7 @@ EXPORT_SYMBOL(dma_set_mask);
 
 u64 __dma_get_required_mask(struct device *dev)
 {
-	struct dma_map_ops *dma_ops = get_dma_ops(dev);
+	const struct dma_map_ops *dma_ops = get_dma_ops(dev);
 
 	if (unlikely(dma_ops == NULL))
 		return 0;
diff --git a/arch/powerpc/kernel/ibmebus.c b/arch/powerpc/kernel/ibmebus.c
index 6ca9a2ffaac7..ca2e1f89544e 100644
--- a/arch/powerpc/kernel/ibmebus.c
+++ b/arch/powerpc/kernel/ibmebus.c
@@ -136,7 +136,7 @@ static u64 ibmebus_dma_get_required_mask(struct device *dev)
 	return DMA_BIT_MASK(64);
 }
 
-static struct dma_map_ops ibmebus_dma_ops = {
+static const struct dma_map_ops ibmebus_dma_ops = {
 	.alloc              = ibmebus_alloc_coherent,
 	.free               = ibmebus_free_coherent,
 	.map_sg             = ibmebus_map_sg,
diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
index 74bec5498972..09db4778435c 100644
--- a/arch/powerpc/kernel/pci-common.c
+++ b/arch/powerpc/kernel/pci-common.c
@@ -59,14 +59,14 @@ resource_size_t isa_mem_base;
 EXPORT_SYMBOL(isa_mem_base);
 
 
-static struct dma_map_ops *pci_dma_ops = &dma_direct_ops;
+static const struct dma_map_ops *pci_dma_ops = &dma_direct_ops;
 
-void set_pci_dma_ops(struct dma_map_ops *dma_ops)
+void set_pci_dma_ops(const struct dma_map_ops *dma_ops)
 {
 	pci_dma_ops = dma_ops;
 }
 
-struct dma_map_ops *get_pci_dma_ops(void)
+const struct dma_map_ops *get_pci_dma_ops(void)
 {
 	return pci_dma_ops;
 }
diff --git a/arch/powerpc/kernel/vio.c b/arch/powerpc/kernel/vio.c
index b3813ddb2fb4..75b40ca300f6 100644
--- a/arch/powerpc/kernel/vio.c
+++ b/arch/powerpc/kernel/vio.c
@@ -615,7 +615,7 @@ static u64 vio_dma_get_required_mask(struct device *dev)
         return dma_iommu_ops.get_required_mask(dev);
 }
 
-static struct dma_map_ops vio_dma_mapping_ops = {
+static const struct dma_map_ops vio_dma_mapping_ops = {
 	.alloc             = vio_dma_iommu_alloc_coherent,
 	.free              = vio_dma_iommu_free_coherent,
 	.mmap		   = dma_direct_mmap_coherent,
diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
index 7ff51f96a00e..e1413e69e5fe 100644
--- a/arch/powerpc/platforms/cell/iommu.c
+++ b/arch/powerpc/platforms/cell/iommu.c
@@ -651,7 +651,7 @@ static int dma_fixed_dma_supported(struct device *dev, u64 mask)
 
 static int dma_set_mask_and_switch(struct device *dev, u64 dma_mask);
 
-static struct dma_map_ops dma_iommu_fixed_ops = {
+static const struct dma_map_ops dma_iommu_fixed_ops = {
 	.alloc          = dma_fixed_alloc_coherent,
 	.free           = dma_fixed_free_coherent,
 	.map_sg         = dma_fixed_map_sg,
@@ -1172,7 +1172,7 @@ __setup("iommu_fixed=", setup_iommu_fixed);
 
 static u64 cell_dma_get_required_mask(struct device *dev)
 {
-	struct dma_map_ops *dma_ops;
+	const struct dma_map_ops *dma_ops;
 
 	if (!dev->dma_mask)
 		return 0;
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index aec85e778028..9ae456939738 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -115,7 +115,7 @@ static u64 dma_npu_get_required_mask(struct device *dev)
 	return 0;
 }
 
-static struct dma_map_ops dma_npu_ops = {
+static const struct dma_map_ops dma_npu_ops = {
 	.map_page		= dma_npu_map_page,
 	.map_sg			= dma_npu_map_sg,
 	.alloc			= dma_npu_alloc,
diff --git a/arch/powerpc/platforms/ps3/system-bus.c b/arch/powerpc/platforms/ps3/system-bus.c
index 8af1c15aef85..c81450d98794 100644
--- a/arch/powerpc/platforms/ps3/system-bus.c
+++ b/arch/powerpc/platforms/ps3/system-bus.c
@@ -701,7 +701,7 @@ static u64 ps3_dma_get_required_mask(struct device *_dev)
 	return DMA_BIT_MASK(32);
 }
 
-static struct dma_map_ops ps3_sb_dma_ops = {
+static const struct dma_map_ops ps3_sb_dma_ops = {
 	.alloc = ps3_alloc_coherent,
 	.free = ps3_free_coherent,
 	.map_sg = ps3_sb_map_sg,
@@ -712,7 +712,7 @@ static struct dma_map_ops ps3_sb_dma_ops = {
 	.unmap_page = ps3_unmap_page,
 };
 
-static struct dma_map_ops ps3_ioc0_dma_ops = {
+static const struct dma_map_ops ps3_ioc0_dma_ops = {
 	.alloc = ps3_alloc_coherent,
 	.free = ps3_free_coherent,
 	.map_sg = ps3_ioc0_map_sg,
diff --git a/arch/s390/include/asm/device.h b/arch/s390/include/asm/device.h
index 4a9f35e0973f..7955a9799466 100644
--- a/arch/s390/include/asm/device.h
+++ b/arch/s390/include/asm/device.h
@@ -4,7 +4,7 @@
  * This file is released under the GPLv2
  */
 struct dev_archdata {
-	struct dma_map_ops *dma_ops;
+	const struct dma_map_ops *dma_ops;
 };
 
 struct pdev_archdata {
diff --git a/arch/s390/include/asm/dma-mapping.h b/arch/s390/include/asm/dma-mapping.h
index ffaba07f50ab..2776d205b1ff 100644
--- a/arch/s390/include/asm/dma-mapping.h
+++ b/arch/s390/include/asm/dma-mapping.h
@@ -10,9 +10,9 @@
 
 #define DMA_ERROR_CODE		(~(dma_addr_t) 0x0)
 
-extern struct dma_map_ops s390_pci_dma_ops;
+extern const struct dma_map_ops s390_pci_dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	if (dev && dev->archdata.dma_ops)
 		return dev->archdata.dma_ops;
diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c
index 6b2f72f523b9..d6e8219d4001 100644
--- a/arch/s390/pci/pci_dma.c
+++ b/arch/s390/pci/pci_dma.c
@@ -648,7 +648,7 @@ static int __init dma_debug_do_init(void)
 }
 fs_initcall(dma_debug_do_init);
 
-struct dma_map_ops s390_pci_dma_ops = {
+const struct dma_map_ops s390_pci_dma_ops = {
 	.alloc		= s390_dma_alloc,
 	.free		= s390_dma_free,
 	.map_sg		= s390_dma_map_sg,
diff --git a/arch/sh/include/asm/dma-mapping.h b/arch/sh/include/asm/dma-mapping.h
index 0052ad40e86d..a7382c34c241 100644
--- a/arch/sh/include/asm/dma-mapping.h
+++ b/arch/sh/include/asm/dma-mapping.h
@@ -1,10 +1,10 @@
 #ifndef __ASM_SH_DMA_MAPPING_H
 #define __ASM_SH_DMA_MAPPING_H
 
-extern struct dma_map_ops *dma_ops;
+extern const struct dma_map_ops *dma_ops;
 extern void no_iommu_init(void);
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return dma_ops;
 }
diff --git a/arch/sh/kernel/dma-nommu.c b/arch/sh/kernel/dma-nommu.c
index eadb669a7329..5ed8beb3666e 100644
--- a/arch/sh/kernel/dma-nommu.c
+++ b/arch/sh/kernel/dma-nommu.c
@@ -62,7 +62,7 @@ static void nommu_sync_sg(struct device *dev, struct scatterlist *sg,
 }
 #endif
 
-struct dma_map_ops nommu_dma_ops = {
+const struct dma_map_ops nommu_dma_ops = {
 	.alloc			= dma_generic_alloc_coherent,
 	.free			= dma_generic_free_coherent,
 	.map_page		= nommu_map_page,
diff --git a/arch/sh/mm/consistent.c b/arch/sh/mm/consistent.c
index 92b6976fde59..d1275adfa0ef 100644
--- a/arch/sh/mm/consistent.c
+++ b/arch/sh/mm/consistent.c
@@ -22,7 +22,7 @@
 
 #define PREALLOC_DMA_DEBUG_ENTRIES	4096
 
-struct dma_map_ops *dma_ops;
+const struct dma_map_ops *dma_ops;
 EXPORT_SYMBOL(dma_ops);
 
 static int __init dma_init(void)
diff --git a/arch/sparc/include/asm/dma-mapping.h b/arch/sparc/include/asm/dma-mapping.h
index 1180ae254154..3d2babc0c4c6 100644
--- a/arch/sparc/include/asm/dma-mapping.h
+++ b/arch/sparc/include/asm/dma-mapping.h
@@ -18,13 +18,13 @@ static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 	 */
 }
 
-extern struct dma_map_ops *dma_ops;
-extern struct dma_map_ops *leon_dma_ops;
-extern struct dma_map_ops pci32_dma_ops;
+extern const struct dma_map_ops *dma_ops;
+extern const struct dma_map_ops *leon_dma_ops;
+extern const struct dma_map_ops pci32_dma_ops;
 
 extern struct bus_type pci_bus_type;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 #ifdef CONFIG_SPARC_LEON
 	if (sparc_cpu_model == sparc_leon)
diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c
index 852a3291db96..f89e6f893607 100644
--- a/arch/sparc/kernel/iommu.c
+++ b/arch/sparc/kernel/iommu.c
@@ -741,7 +741,7 @@ static void dma_4u_sync_sg_for_cpu(struct device *dev,
 	spin_unlock_irqrestore(&iommu->lock, flags);
 }
 
-static struct dma_map_ops sun4u_dma_ops = {
+static const struct dma_map_ops sun4u_dma_ops = {
 	.alloc			= dma_4u_alloc_coherent,
 	.free			= dma_4u_free_coherent,
 	.map_page		= dma_4u_map_page,
@@ -752,7 +752,7 @@ static struct dma_map_ops sun4u_dma_ops = {
 	.sync_sg_for_cpu	= dma_4u_sync_sg_for_cpu,
 };
 
-struct dma_map_ops *dma_ops = &sun4u_dma_ops;
+const struct dma_map_ops *dma_ops = &sun4u_dma_ops;
 EXPORT_SYMBOL(dma_ops);
 
 int dma_supported(struct device *dev, u64 device_mask)
diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
index 2344103414d1..c0c68d2a697d 100644
--- a/arch/sparc/kernel/ioport.c
+++ b/arch/sparc/kernel/ioport.c
@@ -401,7 +401,7 @@ static void sbus_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
 	BUG();
 }
 
-static struct dma_map_ops sbus_dma_ops = {
+static const struct dma_map_ops sbus_dma_ops = {
 	.alloc			= sbus_alloc_coherent,
 	.free			= sbus_free_coherent,
 	.map_page		= sbus_map_page,
@@ -637,7 +637,7 @@ static void pci32_sync_sg_for_device(struct device *device, struct scatterlist *
 	}
 }
 
-struct dma_map_ops pci32_dma_ops = {
+const struct dma_map_ops pci32_dma_ops = {
 	.alloc			= pci32_alloc_coherent,
 	.free			= pci32_free_coherent,
 	.map_page		= pci32_map_page,
@@ -652,10 +652,10 @@ struct dma_map_ops pci32_dma_ops = {
 EXPORT_SYMBOL(pci32_dma_ops);
 
 /* leon re-uses pci32_dma_ops */
-struct dma_map_ops *leon_dma_ops = &pci32_dma_ops;
+const struct dma_map_ops *leon_dma_ops = &pci32_dma_ops;
 EXPORT_SYMBOL(leon_dma_ops);
 
-struct dma_map_ops *dma_ops = &sbus_dma_ops;
+const struct dma_map_ops *dma_ops = &sbus_dma_ops;
 EXPORT_SYMBOL(dma_ops);
 
 
diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c
index 06981cc716b6..957443241c5a 100644
--- a/arch/sparc/kernel/pci_sun4v.c
+++ b/arch/sparc/kernel/pci_sun4v.c
@@ -668,7 +668,7 @@ static void dma_4v_unmap_sg(struct device *dev, struct scatterlist *sglist,
 	local_irq_restore(flags);
 }
 
-static struct dma_map_ops sun4v_dma_ops = {
+static const struct dma_map_ops sun4v_dma_ops = {
 	.alloc				= dma_4v_alloc_coherent,
 	.free				= dma_4v_free_coherent,
 	.map_page			= dma_4v_map_page,
diff --git a/arch/tile/include/asm/device.h b/arch/tile/include/asm/device.h
index 6ab8bf146d4c..25f23ac7d361 100644
--- a/arch/tile/include/asm/device.h
+++ b/arch/tile/include/asm/device.h
@@ -18,7 +18,7 @@
 
 struct dev_archdata {
 	/* DMA operations on that device */
-        struct dma_map_ops	*dma_ops;
+        const struct dma_map_ops	*dma_ops;
 
 	/* Offset of the DMA address from the PA. */
 	dma_addr_t		dma_offset;
diff --git a/arch/tile/include/asm/dma-mapping.h b/arch/tile/include/asm/dma-mapping.h
index 01ceb4a895b0..4a06cc75b856 100644
--- a/arch/tile/include/asm/dma-mapping.h
+++ b/arch/tile/include/asm/dma-mapping.h
@@ -24,12 +24,12 @@
 #define ARCH_HAS_DMA_GET_REQUIRED_MASK
 #endif
 
-extern struct dma_map_ops *tile_dma_map_ops;
-extern struct dma_map_ops *gx_pci_dma_map_ops;
-extern struct dma_map_ops *gx_legacy_pci_dma_map_ops;
-extern struct dma_map_ops *gx_hybrid_pci_dma_map_ops;
+extern const struct dma_map_ops *tile_dma_map_ops;
+extern const struct dma_map_ops *gx_pci_dma_map_ops;
+extern const struct dma_map_ops *gx_legacy_pci_dma_map_ops;
+extern const struct dma_map_ops *gx_hybrid_pci_dma_map_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	if (dev && dev->archdata.dma_ops)
 		return dev->archdata.dma_ops;
@@ -59,7 +59,7 @@ static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
 
 static inline void dma_mark_clean(void *addr, size_t size) {}
 
-static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
+static inline void set_dma_ops(struct device *dev, const struct dma_map_ops *ops)
 {
 	dev->archdata.dma_ops = ops;
 }
diff --git a/arch/tile/kernel/pci-dma.c b/arch/tile/kernel/pci-dma.c
index 09bb774b39cd..06fe0841cb40 100644
--- a/arch/tile/kernel/pci-dma.c
+++ b/arch/tile/kernel/pci-dma.c
@@ -321,7 +321,7 @@ tile_dma_supported(struct device *dev, u64 mask)
 	return 1;
 }
 
-static struct dma_map_ops tile_default_dma_map_ops = {
+static const struct dma_map_ops tile_default_dma_map_ops = {
 	.alloc = tile_dma_alloc_coherent,
 	.free = tile_dma_free_coherent,
 	.map_page = tile_dma_map_page,
@@ -336,7 +336,7 @@ static struct dma_map_ops tile_default_dma_map_ops = {
 	.dma_supported = tile_dma_supported
 };
 
-struct dma_map_ops *tile_dma_map_ops = &tile_default_dma_map_ops;
+const struct dma_map_ops *tile_dma_map_ops = &tile_default_dma_map_ops;
 EXPORT_SYMBOL(tile_dma_map_ops);
 
 /* Generic PCI DMA mapping functions */
@@ -508,7 +508,7 @@ tile_pci_dma_supported(struct device *dev, u64 mask)
 	return 1;
 }
 
-static struct dma_map_ops tile_pci_default_dma_map_ops = {
+static const struct dma_map_ops tile_pci_default_dma_map_ops = {
 	.alloc = tile_pci_dma_alloc_coherent,
 	.free = tile_pci_dma_free_coherent,
 	.map_page = tile_pci_dma_map_page,
@@ -523,7 +523,7 @@ static struct dma_map_ops tile_pci_default_dma_map_ops = {
 	.dma_supported = tile_pci_dma_supported
 };
 
-struct dma_map_ops *gx_pci_dma_map_ops = &tile_pci_default_dma_map_ops;
+const struct dma_map_ops *gx_pci_dma_map_ops = &tile_pci_default_dma_map_ops;
 EXPORT_SYMBOL(gx_pci_dma_map_ops);
 
 /* PCI DMA mapping functions for legacy PCI devices */
@@ -544,7 +544,7 @@ static void tile_swiotlb_free_coherent(struct device *dev, size_t size,
 	swiotlb_free_coherent(dev, size, vaddr, dma_addr);
 }
 
-static struct dma_map_ops pci_swiotlb_dma_ops = {
+static const struct dma_map_ops pci_swiotlb_dma_ops = {
 	.alloc = tile_swiotlb_alloc_coherent,
 	.free = tile_swiotlb_free_coherent,
 	.map_page = swiotlb_map_page,
@@ -559,7 +559,7 @@ static struct dma_map_ops pci_swiotlb_dma_ops = {
 	.mapping_error = swiotlb_dma_mapping_error,
 };
 
-static struct dma_map_ops pci_hybrid_dma_ops = {
+static const struct dma_map_ops pci_hybrid_dma_ops = {
 	.alloc = tile_swiotlb_alloc_coherent,
 	.free = tile_swiotlb_free_coherent,
 	.map_page = tile_pci_dma_map_page,
@@ -574,18 +574,18 @@ static struct dma_map_ops pci_hybrid_dma_ops = {
 	.dma_supported = tile_pci_dma_supported
 };
 
-struct dma_map_ops *gx_legacy_pci_dma_map_ops = &pci_swiotlb_dma_ops;
-struct dma_map_ops *gx_hybrid_pci_dma_map_ops = &pci_hybrid_dma_ops;
+const struct dma_map_ops *gx_legacy_pci_dma_map_ops = &pci_swiotlb_dma_ops;
+const struct dma_map_ops *gx_hybrid_pci_dma_map_ops = &pci_hybrid_dma_ops;
 #else
-struct dma_map_ops *gx_legacy_pci_dma_map_ops;
-struct dma_map_ops *gx_hybrid_pci_dma_map_ops;
+const struct dma_map_ops *gx_legacy_pci_dma_map_ops;
+const struct dma_map_ops *gx_hybrid_pci_dma_map_ops;
 #endif
 EXPORT_SYMBOL(gx_legacy_pci_dma_map_ops);
 EXPORT_SYMBOL(gx_hybrid_pci_dma_map_ops);
 
 int dma_set_mask(struct device *dev, u64 mask)
 {
-	struct dma_map_ops *dma_ops = get_dma_ops(dev);
+	const struct dma_map_ops *dma_ops = get_dma_ops(dev);
 
 	/*
 	 * For PCI devices with 64-bit DMA addressing capability, promote
@@ -615,7 +615,7 @@ EXPORT_SYMBOL(dma_set_mask);
 #ifdef CONFIG_ARCH_HAS_DMA_SET_COHERENT_MASK
 int dma_set_coherent_mask(struct device *dev, u64 mask)
 {
-	struct dma_map_ops *dma_ops = get_dma_ops(dev);
+	const struct dma_map_ops *dma_ops = get_dma_ops(dev);
 
 	/*
 	 * For PCI devices with 64-bit DMA addressing capability, promote
diff --git a/arch/unicore32/include/asm/dma-mapping.h b/arch/unicore32/include/asm/dma-mapping.h
index 4749854afd03..14d7729c7b73 100644
--- a/arch/unicore32/include/asm/dma-mapping.h
+++ b/arch/unicore32/include/asm/dma-mapping.h
@@ -21,9 +21,9 @@
 #include <asm/memory.h>
 #include <asm/cacheflush.h>
 
-extern struct dma_map_ops swiotlb_dma_map_ops;
+extern const struct dma_map_ops swiotlb_dma_map_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &swiotlb_dma_map_ops;
 }
diff --git a/arch/unicore32/mm/dma-swiotlb.c b/arch/unicore32/mm/dma-swiotlb.c
index 3e9f6489ba38..525413d6690e 100644
--- a/arch/unicore32/mm/dma-swiotlb.c
+++ b/arch/unicore32/mm/dma-swiotlb.c
@@ -31,7 +31,7 @@ static void unicore_swiotlb_free_coherent(struct device *dev, size_t size,
 	swiotlb_free_coherent(dev, size, vaddr, dma_addr);
 }
 
-struct dma_map_ops swiotlb_dma_map_ops = {
+const struct dma_map_ops swiotlb_dma_map_ops = {
 	.alloc = unicore_swiotlb_alloc_coherent,
 	.free = unicore_swiotlb_free_coherent,
 	.map_sg = swiotlb_map_sg_attrs,
diff --git a/arch/x86/include/asm/device.h b/arch/x86/include/asm/device.h
index 684ed6c3aa67..b2d0b4ced7e3 100644
--- a/arch/x86/include/asm/device.h
+++ b/arch/x86/include/asm/device.h
@@ -3,7 +3,7 @@
 
 struct dev_archdata {
 #ifdef CONFIG_X86_DEV_DMA_OPS
-	struct dma_map_ops *dma_ops;
+	const struct dma_map_ops *dma_ops;
 #endif
 #if defined(CONFIG_INTEL_IOMMU) || defined(CONFIG_AMD_IOMMU)
 	void *iommu; /* hook for IOMMU specific extension */
@@ -13,7 +13,7 @@ struct dev_archdata {
 #if defined(CONFIG_X86_DEV_DMA_OPS) && defined(CONFIG_PCI_DOMAINS)
 struct dma_domain {
 	struct list_head node;
-	struct dma_map_ops *dma_ops;
+	const struct dma_map_ops *dma_ops;
 	int domain_nr;
 };
 void add_dma_domain(struct dma_domain *domain);
diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 44461626830e..5e4772886a1e 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -25,9 +25,9 @@ extern int iommu_merge;
 extern struct device x86_dma_fallback_dev;
 extern int panic_on_overflow;
 
-extern struct dma_map_ops *dma_ops;
+extern const struct dma_map_ops *dma_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 #ifndef CONFIG_X86_DEV_DMA_OPS
 	return dma_ops;
diff --git a/arch/x86/include/asm/iommu.h b/arch/x86/include/asm/iommu.h
index 345c99cef152..793869879464 100644
--- a/arch/x86/include/asm/iommu.h
+++ b/arch/x86/include/asm/iommu.h
@@ -1,7 +1,7 @@
 #ifndef _ASM_X86_IOMMU_H
 #define _ASM_X86_IOMMU_H
 
-extern struct dma_map_ops nommu_dma_ops;
+extern const struct dma_map_ops nommu_dma_ops;
 extern int force_iommu, no_iommu;
 extern int iommu_detected;
 extern int iommu_pass_through;
diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index 63ff468a7986..82dfe32faaf4 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -695,7 +695,7 @@ static __init int init_amd_gatt(struct agp_kern_info *info)
 	return -1;
 }
 
-static struct dma_map_ops gart_dma_ops = {
+static const struct dma_map_ops gart_dma_ops = {
 	.map_sg				= gart_map_sg,
 	.unmap_sg			= gart_unmap_sg,
 	.map_page			= gart_map_page,
diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c
index 5d400ba1349d..17f180148c80 100644
--- a/arch/x86/kernel/pci-calgary_64.c
+++ b/arch/x86/kernel/pci-calgary_64.c
@@ -478,7 +478,7 @@ static void calgary_free_coherent(struct device *dev, size_t size,
 	free_pages((unsigned long)vaddr, get_order(size));
 }
 
-static struct dma_map_ops calgary_dma_ops = {
+static const struct dma_map_ops calgary_dma_ops = {
 	.alloc = calgary_alloc_coherent,
 	.free = calgary_free_coherent,
 	.map_sg = calgary_map_sg,
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index d30c37750765..76f4c039baae 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -17,7 +17,7 @@
 
 static int forbid_dac __read_mostly;
 
-struct dma_map_ops *dma_ops = &nommu_dma_ops;
+const struct dma_map_ops *dma_ops = &nommu_dma_ops;
 EXPORT_SYMBOL(dma_ops);
 
 static int iommu_sac_force __read_mostly;
@@ -214,7 +214,7 @@ early_param("iommu", iommu_setup);
 
 int dma_supported(struct device *dev, u64 mask)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 #ifdef CONFIG_PCI
 	if (mask > 0xffffffff && forbid_dac > 0) {
diff --git a/arch/x86/kernel/pci-nommu.c b/arch/x86/kernel/pci-nommu.c
index 00e71ce396a8..a88952ef371c 100644
--- a/arch/x86/kernel/pci-nommu.c
+++ b/arch/x86/kernel/pci-nommu.c
@@ -88,7 +88,7 @@ static void nommu_sync_sg_for_device(struct device *dev,
 	flush_write_buffers();
 }
 
-struct dma_map_ops nommu_dma_ops = {
+const struct dma_map_ops nommu_dma_ops = {
 	.alloc			= dma_generic_alloc_coherent,
 	.free			= dma_generic_free_coherent,
 	.map_sg			= nommu_map_sg,
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index b47edb8f5256..0a97137d3d86 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -45,7 +45,7 @@ void x86_swiotlb_free_coherent(struct device *dev, size_t size,
 		dma_generic_free_coherent(dev, size, vaddr, dma_addr, attrs);
 }
 
-static struct dma_map_ops swiotlb_dma_ops = {
+static const struct dma_map_ops swiotlb_dma_ops = {
 	.mapping_error = swiotlb_dma_mapping_error,
 	.alloc = x86_swiotlb_alloc_coherent,
 	.free = x86_swiotlb_free_coherent,
diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c
index 052c1cb76305..aa3828823170 100644
--- a/arch/x86/pci/sta2x11-fixup.c
+++ b/arch/x86/pci/sta2x11-fixup.c
@@ -179,7 +179,7 @@ static void *sta2x11_swiotlb_alloc_coherent(struct device *dev,
 }
 
 /* We have our own dma_ops: the same as swiotlb but from alloc (above) */
-static struct dma_map_ops sta2x11_dma_ops = {
+static const struct dma_map_ops sta2x11_dma_ops = {
 	.alloc = sta2x11_swiotlb_alloc_coherent,
 	.free = x86_swiotlb_free_coherent,
 	.map_page = swiotlb_map_page,
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 0e98e5d241d0..838ad3968850 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -18,7 +18,7 @@
 
 int xen_swiotlb __read_mostly;
 
-static struct dma_map_ops xen_swiotlb_dma_ops = {
+static const struct dma_map_ops xen_swiotlb_dma_ops = {
 	.mapping_error = xen_swiotlb_dma_mapping_error,
 	.alloc = xen_swiotlb_alloc_coherent,
 	.free = xen_swiotlb_free_coherent,
diff --git a/arch/xtensa/include/asm/device.h b/arch/xtensa/include/asm/device.h
index fe1f5c878493..a77d45d39f35 100644
--- a/arch/xtensa/include/asm/device.h
+++ b/arch/xtensa/include/asm/device.h
@@ -10,7 +10,7 @@ struct dma_map_ops;
 
 struct dev_archdata {
 	/* DMA operations on that device */
-	struct dma_map_ops *dma_ops;
+	const struct dma_map_ops *dma_ops;
 };
 
 struct pdev_archdata {
diff --git a/arch/xtensa/include/asm/dma-mapping.h b/arch/xtensa/include/asm/dma-mapping.h
index 3fc1170a6488..50d23106cce0 100644
--- a/arch/xtensa/include/asm/dma-mapping.h
+++ b/arch/xtensa/include/asm/dma-mapping.h
@@ -18,9 +18,9 @@
 
 #define DMA_ERROR_CODE		(~(dma_addr_t)0x0)
 
-extern struct dma_map_ops xtensa_dma_map_ops;
+extern const struct dma_map_ops xtensa_dma_map_ops;
 
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	if (dev && dev->archdata.dma_ops)
 		return dev->archdata.dma_ops;
diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c
index 1e68806d6695..8310e2748c3c 100644
--- a/arch/xtensa/kernel/pci-dma.c
+++ b/arch/xtensa/kernel/pci-dma.c
@@ -233,7 +233,7 @@ int xtensa_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 	return 0;
 }
 
-struct dma_map_ops xtensa_dma_map_ops = {
+const struct dma_map_ops xtensa_dma_map_ops = {
 	.alloc = xtensa_dma_alloc,
 	.free = xtensa_dma_free,
 	.map_page = xtensa_map_page,
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 754595ee11b6..03fca30fa39a 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -117,7 +117,7 @@ static const struct iommu_ops amd_iommu_ops;
 static ATOMIC_NOTIFIER_HEAD(ppr_notifier);
 int amd_iommu_max_glx_val = -1;
 
-static struct dma_map_ops amd_iommu_dma_ops;
+static const struct dma_map_ops amd_iommu_dma_ops;
 
 /*
  * This struct contains device specific data for the IOMMU
@@ -2726,7 +2726,7 @@ static int amd_iommu_dma_supported(struct device *dev, u64 mask)
 	return check_device(dev);
 }
 
-static struct dma_map_ops amd_iommu_dma_ops = {
+static const struct dma_map_ops amd_iommu_dma_ops = {
 	.alloc		= alloc_coherent,
 	.free		= free_coherent,
 	.map_page	= map_page,
diff --git a/drivers/misc/mic/bus/mic_bus.c b/drivers/misc/mic/bus/mic_bus.c
index be37890abb93..c4b27a25662a 100644
--- a/drivers/misc/mic/bus/mic_bus.c
+++ b/drivers/misc/mic/bus/mic_bus.c
@@ -143,7 +143,7 @@ static void mbus_release_dev(struct device *d)
 }
 
 struct mbus_device *
-mbus_register_device(struct device *pdev, int id, struct dma_map_ops *dma_ops,
+mbus_register_device(struct device *pdev, int id, const struct dma_map_ops *dma_ops,
 		     struct mbus_hw_ops *hw_ops, int index,
 		     void __iomem *mmio_va)
 {
diff --git a/drivers/misc/mic/bus/scif_bus.c b/drivers/misc/mic/bus/scif_bus.c
index ff6e01c25810..e5d377e97c86 100644
--- a/drivers/misc/mic/bus/scif_bus.c
+++ b/drivers/misc/mic/bus/scif_bus.c
@@ -138,7 +138,7 @@ static void scif_release_dev(struct device *d)
 }
 
 struct scif_hw_dev *
-scif_register_device(struct device *pdev, int id, struct dma_map_ops *dma_ops,
+scif_register_device(struct device *pdev, int id, const struct dma_map_ops *dma_ops,
 		     struct scif_hw_ops *hw_ops, u8 dnode, u8 snode,
 		     struct mic_mw *mmio, struct mic_mw *aper, void *dp,
 		     void __iomem *rdp, struct dma_chan **chan, int num_chan,
diff --git a/drivers/misc/mic/bus/scif_bus.h b/drivers/misc/mic/bus/scif_bus.h
index 94f29ac608b6..ff59568219ad 100644
--- a/drivers/misc/mic/bus/scif_bus.h
+++ b/drivers/misc/mic/bus/scif_bus.h
@@ -113,7 +113,7 @@ int scif_register_driver(struct scif_driver *driver);
 void scif_unregister_driver(struct scif_driver *driver);
 struct scif_hw_dev *
 scif_register_device(struct device *pdev, int id,
-		     struct dma_map_ops *dma_ops,
+		     const struct dma_map_ops *dma_ops,
 		     struct scif_hw_ops *hw_ops, u8 dnode, u8 snode,
 		     struct mic_mw *mmio, struct mic_mw *aper,
 		     void *dp, void __iomem *rdp,
diff --git a/drivers/misc/mic/host/mic_boot.c b/drivers/misc/mic/host/mic_boot.c
index 9599d732aff3..c327985c9523 100644
--- a/drivers/misc/mic/host/mic_boot.c
+++ b/drivers/misc/mic/host/mic_boot.c
@@ -245,7 +245,7 @@ static void __mic_dma_unmap_sg(struct device *dev,
 	dma_unmap_sg(&mdev->pdev->dev, sg, nents, dir);
 }
 
-static struct dma_map_ops __mic_dma_ops = {
+static const struct dma_map_ops __mic_dma_ops = {
 	.alloc = __mic_dma_alloc,
 	.free = __mic_dma_free,
 	.map_page = __mic_dma_map_page,
@@ -344,7 +344,7 @@ mic_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
 	mic_unmap_single(mdev, dma_addr, size);
 }
 
-static struct dma_map_ops mic_dma_ops = {
+static const struct dma_map_ops mic_dma_ops = {
 	.map_page = mic_dma_map_page,
 	.unmap_page = mic_dma_unmap_page,
 };
diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
index 3ed6238f8f6e..41a79cca3a47 100644
--- a/drivers/parisc/ccio-dma.c
+++ b/drivers/parisc/ccio-dma.c
@@ -1011,7 +1011,7 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 	DBG_RUN_SG("%s() DONE (nents %d)\n", __func__, nents);
 }
 
-static struct dma_map_ops ccio_ops = {
+static const struct dma_map_ops ccio_ops = {
 	.dma_supported =	ccio_dma_supported,
 	.alloc =		ccio_alloc,
 	.free =			ccio_free,
diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
index 151b86b6d2e2..33385e574433 100644
--- a/drivers/parisc/sba_iommu.c
+++ b/drivers/parisc/sba_iommu.c
@@ -1069,7 +1069,7 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 
 }
 
-static struct dma_map_ops sba_ops = {
+static const struct dma_map_ops sba_ops = {
 	.dma_supported =	sba_dma_supported,
 	.alloc =		sba_alloc,
 	.free =			sba_free,
diff --git a/drivers/pci/host/vmd.c b/drivers/pci/host/vmd.c
index 37e29b580be3..59e11e8525a1 100644
--- a/drivers/pci/host/vmd.c
+++ b/drivers/pci/host/vmd.c
@@ -281,7 +281,7 @@ static struct device *to_vmd_dev(struct device *dev)
 	return &vmd->dev->dev;
 }
 
-static struct dma_map_ops *vmd_dma_ops(struct device *dev)
+static const struct dma_map_ops *vmd_dma_ops(struct device *dev)
 {
 	return get_dma_ops(to_vmd_dev(dev));
 }
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 08528afdf58b..c4e4b80d3843 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -127,7 +127,7 @@ struct dma_map_ops {
 	int is_phys;
 };
 
-extern struct dma_map_ops dma_noop_ops;
+extern const struct dma_map_ops dma_noop_ops;
 
 #define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))
 
@@ -170,8 +170,8 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
  * dma dependent code.  Code that depends on the dma-mapping
  * API needs to set 'depends on HAS_DMA' in its Kconfig
  */
-extern struct dma_map_ops bad_dma_ops;
-static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+extern const struct dma_map_ops bad_dma_ops;
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
 	return &bad_dma_ops;
 }
@@ -182,7 +182,7 @@ static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
 					      enum dma_data_direction dir,
 					      unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	dma_addr_t addr;
 
 	kmemcheck_mark_initialized(ptr, size);
@@ -201,7 +201,7 @@ static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr,
 					  enum dma_data_direction dir,
 					  unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(dir));
 	if (ops->unmap_page)
@@ -217,7 +217,7 @@ static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 				   int nents, enum dma_data_direction dir,
 				   unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	int i, ents;
 	struct scatterlist *s;
 
@@ -235,7 +235,7 @@ static inline void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg
 				      int nents, enum dma_data_direction dir,
 				      unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(dir));
 	debug_dma_unmap_sg(dev, sg, nents, dir);
@@ -247,7 +247,7 @@ static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
 				      size_t offset, size_t size,
 				      enum dma_data_direction dir)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	dma_addr_t addr;
 
 	kmemcheck_mark_initialized(page_address(page) + offset, size);
@@ -261,7 +261,7 @@ static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
 static inline void dma_unmap_page(struct device *dev, dma_addr_t addr,
 				  size_t size, enum dma_data_direction dir)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(dir));
 	if (ops->unmap_page)
@@ -275,7 +275,7 @@ static inline dma_addr_t dma_map_resource(struct device *dev,
 					  enum dma_data_direction dir,
 					  unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	dma_addr_t addr;
 
 	BUG_ON(!valid_dma_direction(dir));
@@ -296,7 +296,7 @@ static inline void dma_unmap_resource(struct device *dev, dma_addr_t addr,
 				      size_t size, enum dma_data_direction dir,
 				      unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(dir));
 	if (ops->unmap_resource)
@@ -308,7 +308,7 @@ static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
 					   size_t size,
 					   enum dma_data_direction dir)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(dir));
 	if (ops->sync_single_for_cpu)
@@ -320,7 +320,7 @@ static inline void dma_sync_single_for_device(struct device *dev,
 					      dma_addr_t addr, size_t size,
 					      enum dma_data_direction dir)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(dir));
 	if (ops->sync_single_for_device)
@@ -360,7 +360,7 @@ static inline void
 dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
 		    int nelems, enum dma_data_direction dir)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(dir));
 	if (ops->sync_sg_for_cpu)
@@ -372,7 +372,7 @@ static inline void
 dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
 		       int nelems, enum dma_data_direction dir)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!valid_dma_direction(dir));
 	if (ops->sync_sg_for_device)
@@ -415,7 +415,7 @@ static inline int
 dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma, void *cpu_addr,
 	       dma_addr_t dma_addr, size_t size, unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	BUG_ON(!ops);
 	if (ops->mmap)
 		return ops->mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
@@ -433,7 +433,7 @@ dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt, void *cpu_addr,
 		      dma_addr_t dma_addr, size_t size,
 		      unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	BUG_ON(!ops);
 	if (ops->get_sgtable)
 		return ops->get_sgtable(dev, sgt, cpu_addr, dma_addr, size,
@@ -451,7 +451,7 @@ static inline void *dma_alloc_attrs(struct device *dev, size_t size,
 				       dma_addr_t *dma_handle, gfp_t flag,
 				       unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 	void *cpu_addr;
 
 	BUG_ON(!ops);
@@ -473,7 +473,7 @@ static inline void dma_free_attrs(struct device *dev, size_t size,
 				     void *cpu_addr, dma_addr_t dma_handle,
 				     unsigned long attrs)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	BUG_ON(!ops);
 	WARN_ON(irqs_disabled());
@@ -531,7 +531,7 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 #ifndef HAVE_ARCH_DMA_SUPPORTED
 static inline int dma_supported(struct device *dev, u64 mask)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	if (!ops)
 		return 0;
@@ -544,7 +544,7 @@ static inline int dma_supported(struct device *dev, u64 mask)
 #ifndef HAVE_ARCH_DMA_SET_MASK
 static inline int dma_set_mask(struct device *dev, u64 mask)
 {
-	struct dma_map_ops *ops = get_dma_ops(dev);
+	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	if (ops->set_dma_mask)
 		return ops->set_dma_mask(dev, mask);
diff --git a/include/linux/mic_bus.h b/include/linux/mic_bus.h
index 27d7c95fd0da..504d54c71bdb 100644
--- a/include/linux/mic_bus.h
+++ b/include/linux/mic_bus.h
@@ -90,7 +90,7 @@ struct mbus_hw_ops {
 };
 
 struct mbus_device *
-mbus_register_device(struct device *pdev, int id, struct dma_map_ops *dma_ops,
+mbus_register_device(struct device *pdev, int id, const struct dma_map_ops *dma_ops,
 		     struct mbus_hw_ops *hw_ops, int index,
 		     void __iomem *mmio_va);
 void mbus_unregister_device(struct mbus_device *mbdev);
diff --git a/lib/dma-noop.c b/lib/dma-noop.c
index 3d766e78fbe2..65e49dd35b7b 100644
--- a/lib/dma-noop.c
+++ b/lib/dma-noop.c
@@ -64,7 +64,7 @@ static int dma_noop_supported(struct device *dev, u64 mask)
 	return 1;
 }
 
-struct dma_map_ops dma_noop_ops = {
+const struct dma_map_ops dma_noop_ops = {
 	.alloc			= dma_noop_alloc,
 	.free			= dma_noop_free,
 	.map_page		= dma_noop_map_page,
-- 
2.11.0


* [PATCH 2/5] misc: vop: Remove a cast
From: Bart Van Assche @ 2016-12-08  1:10 UTC (permalink / raw)
  To: Doug Ledford, Christoph Hellwig, Sagi Grimberg
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Due to the previous patch, this cast is no longer necessary.
---
 drivers/misc/mic/bus/vop_bus.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/misc/mic/bus/vop_bus.c b/drivers/misc/mic/bus/vop_bus.c
index 303da222f5b6..e3caa6c53922 100644
--- a/drivers/misc/mic/bus/vop_bus.c
+++ b/drivers/misc/mic/bus/vop_bus.c
@@ -154,7 +154,7 @@ vop_register_device(struct device *pdev, int id,
 	vdev->dev.parent = pdev;
 	vdev->id.device = id;
 	vdev->id.vendor = VOP_DEV_ANY_ID;
-	vdev->dev.archdata.dma_ops = (struct dma_map_ops *)dma_ops;
+	vdev->dev.archdata.dma_ops = dma_ops;
 	vdev->dev.dma_mask = &vdev->dev.coherent_dma_mask;
 	dma_set_mask(&vdev->dev, DMA_BIT_MASK(64));
 	vdev->dev.release = vop_release_dev;
-- 
2.11.0


* [PATCH 3/5] Move dma_ops from archdata into struct device
From: Bart Van Assche @ 2016-12-08  1:11 UTC (permalink / raw)
  To: Doug Ledford, Christoph Hellwig, Sagi Grimberg
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Additionally, introduce set_dma_ops(). A later patch will add a call to
that function in the RDMA drivers that are converted to dma_noop_ops.
---
 arch/alpha/include/asm/dma-mapping.h      |  2 +-
 arch/arc/include/asm/dma-mapping.h        |  2 +-
 arch/arm/include/asm/device.h             |  1 -
 arch/arm/include/asm/dma-mapping.h        | 17 ++++-------------
 arch/arm64/include/asm/device.h           |  1 -
 arch/arm64/include/asm/dma-mapping.h      | 10 +++++-----
 arch/arm64/mm/dma-mapping.c               |  8 ++++----
 arch/avr32/include/asm/dma-mapping.h      |  2 +-
 arch/blackfin/include/asm/dma-mapping.h   |  2 +-
 arch/c6x/include/asm/dma-mapping.h        |  2 +-
 arch/cris/include/asm/dma-mapping.h       |  4 ++--
 arch/frv/include/asm/dma-mapping.h        |  2 +-
 arch/h8300/include/asm/dma-mapping.h      |  2 +-
 arch/hexagon/include/asm/dma-mapping.h    |  5 +----
 arch/ia64/include/asm/dma-mapping.h       |  5 ++++-
 arch/m68k/include/asm/dma-mapping.h       |  2 +-
 arch/metag/include/asm/dma-mapping.h      |  2 +-
 arch/microblaze/include/asm/dma-mapping.h |  2 +-
 arch/mips/include/asm/device.h            |  5 -----
 arch/mips/include/asm/dma-mapping.h       |  7 ++-----
 arch/mips/pci/pci-octeon.c                |  2 +-
 arch/mn10300/include/asm/dma-mapping.h    |  2 +-
 arch/nios2/include/asm/dma-mapping.h      |  2 +-
 arch/openrisc/include/asm/dma-mapping.h   |  2 +-
 arch/parisc/include/asm/dma-mapping.h     |  2 +-
 arch/powerpc/include/asm/device.h         |  4 ----
 arch/powerpc/include/asm/dma-mapping.h    | 17 ++---------------
 arch/powerpc/include/asm/ps3.h            |  2 +-
 arch/powerpc/kernel/dma.c                 |  2 +-
 arch/powerpc/kernel/ibmebus.c             |  2 +-
 arch/powerpc/platforms/cell/iommu.c       |  2 +-
 arch/powerpc/platforms/pasemi/iommu.c     |  2 +-
 arch/powerpc/platforms/pasemi/setup.c     |  2 +-
 arch/powerpc/platforms/ps3/system-bus.c   |  4 ++--
 arch/s390/include/asm/device.h            |  1 -
 arch/s390/include/asm/dma-mapping.h       |  4 +---
 arch/s390/pci/pci.c                       |  2 +-
 arch/sh/include/asm/dma-mapping.h         |  2 +-
 arch/sparc/include/asm/dma-mapping.h      |  4 ++--
 arch/tile/include/asm/device.h            |  3 ---
 arch/tile/include/asm/dma-mapping.h       | 12 ++----------
 arch/x86/include/asm/device.h             |  3 ---
 arch/x86/include/asm/dma-mapping.h        |  9 +--------
 arch/x86/kernel/pci-calgary_64.c          |  4 ++--
 arch/x86/pci/common.c                     |  2 +-
 arch/x86/pci/sta2x11-fixup.c              |  8 ++++----
 arch/xtensa/include/asm/device.h          |  4 ----
 arch/xtensa/include/asm/dma-mapping.h     |  7 ++-----
 drivers/infiniband/ulp/srpt/ib_srpt.c     |  2 +-
 drivers/iommu/amd_iommu.c                 |  6 +++---
 drivers/misc/mic/bus/mic_bus.c            |  2 +-
 drivers/misc/mic/bus/scif_bus.c           |  2 +-
 drivers/misc/mic/bus/vop_bus.c            |  2 +-
 include/linux/device.h                    |  2 ++
 include/linux/dma-mapping.h               | 12 ++++++++++++
 55 files changed, 85 insertions(+), 138 deletions(-)

diff --git a/arch/alpha/include/asm/dma-mapping.h b/arch/alpha/include/asm/dma-mapping.h
index d3480562411d..5d53666935e6 100644
--- a/arch/alpha/include/asm/dma-mapping.h
+++ b/arch/alpha/include/asm/dma-mapping.h
@@ -3,7 +3,7 @@
 
 extern const struct dma_map_ops *dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return dma_ops;
 }
diff --git a/arch/arc/include/asm/dma-mapping.h b/arch/arc/include/asm/dma-mapping.h
index fdff3aa60052..94285031c4fb 100644
--- a/arch/arc/include/asm/dma-mapping.h
+++ b/arch/arc/include/asm/dma-mapping.h
@@ -20,7 +20,7 @@
 
 extern const struct dma_map_ops arc_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &arc_dma_ops;
 }
diff --git a/arch/arm/include/asm/device.h b/arch/arm/include/asm/device.h
index d8a572f9c187..220ba207be91 100644
--- a/arch/arm/include/asm/device.h
+++ b/arch/arm/include/asm/device.h
@@ -7,7 +7,6 @@
 #define ASMARM_DEVICE_H
 
 struct dev_archdata {
-	const struct dma_map_ops	*dma_ops;
 #ifdef CONFIG_DMABOUNCE
 	struct dmabounce_device_info *dmabounce;
 #endif
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 1aabd781306f..7c6d995fb935 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -18,23 +18,14 @@ extern const struct dma_map_ops arm_coherent_dma_ops;
 
 static inline const struct dma_map_ops *__generic_dma_ops(struct device *dev)
 {
-	if (dev && dev->archdata.dma_ops)
-		return dev->archdata.dma_ops;
+	if (dev && dev->dma_ops)
+		return dev->dma_ops;
 	return &arm_dma_ops;
 }
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-	if (xen_initial_domain())
-		return xen_dma_ops;
-	else
-		return __generic_dma_ops(dev);
-}
-
-static inline void set_dma_ops(struct device *dev, const struct dma_map_ops *ops)
-{
-	BUG_ON(!dev);
-	dev->archdata.dma_ops = ops;
+	return xen_initial_domain() ? xen_dma_ops : &arm_dma_ops;
 }
 
 #define HAVE_ARCH_DMA_SUPPORTED 1
diff --git a/arch/arm64/include/asm/device.h b/arch/arm64/include/asm/device.h
index 00c678cc31e1..73d5bab015eb 100644
--- a/arch/arm64/include/asm/device.h
+++ b/arch/arm64/include/asm/device.h
@@ -17,7 +17,6 @@
 #define __ASM_DEVICE_H
 
 struct dev_archdata {
-	const struct dma_map_ops *dma_ops;
 #ifdef CONFIG_IOMMU_API
 	void *iommu;			/* private IOMMU data */
 #endif
diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h
index 1fedb43be712..47c99cbeb084 100644
--- a/arch/arm64/include/asm/dma-mapping.h
+++ b/arch/arm64/include/asm/dma-mapping.h
@@ -29,8 +29,8 @@ extern const struct dma_map_ops dummy_dma_ops;
 
 static inline const struct dma_map_ops *__generic_dma_ops(struct device *dev)
 {
-	if (dev && dev->archdata.dma_ops)
-		return dev->archdata.dma_ops;
+	if (dev && dev->dma_ops)
+		return dev->dma_ops;
 
 	/*
 	 * We expect no ISA devices, and all other DMA masters are expected to
@@ -39,12 +39,12 @@ static inline const struct dma_map_ops *__generic_dma_ops(struct device *dev)
 	return &dummy_dma_ops;
 }
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	if (xen_initial_domain())
 		return xen_dma_ops;
-	else
-		return __generic_dma_ops(dev);
+
+	return __generic_dma_ops(NULL);
 }
 
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index de6ec4e14074..7623c51e0287 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -834,7 +834,7 @@ static bool do_iommu_attach(struct device *dev, const struct iommu_ops *ops,
 		return false;
 	}
 
-	dev->archdata.dma_ops = &iommu_dma_ops;
+	set_dma_ops(dev, &iommu_dma_ops);
 	return true;
 }
 
@@ -943,7 +943,7 @@ void arch_teardown_dma_ops(struct device *dev)
 	if (WARN_ON(domain))
 		iommu_detach_device(domain, dev);
 
-	dev->archdata.dma_ops = NULL;
+	set_dma_ops(dev, NULL);
 }
 
 #else
@@ -957,8 +957,8 @@ static void __iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 			const struct iommu_ops *iommu, bool coherent)
 {
-	if (!dev->archdata.dma_ops)
-		dev->archdata.dma_ops = &swiotlb_dma_ops;
+	if (!dev->dma_ops)
+		set_dma_ops(dev, &swiotlb_dma_ops);
 
 	dev->archdata.dma_coherent = coherent;
 	__iommu_setup_dma_ops(dev, dma_base, size, iommu);
diff --git a/arch/avr32/include/asm/dma-mapping.h b/arch/avr32/include/asm/dma-mapping.h
index b2b43c0e0774..7388451f9905 100644
--- a/arch/avr32/include/asm/dma-mapping.h
+++ b/arch/avr32/include/asm/dma-mapping.h
@@ -6,7 +6,7 @@ extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 
 extern const struct dma_map_ops avr32_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &avr32_dma_ops;
 }
diff --git a/arch/blackfin/include/asm/dma-mapping.h b/arch/blackfin/include/asm/dma-mapping.h
index 320fb50fbd41..04254ac36bed 100644
--- a/arch/blackfin/include/asm/dma-mapping.h
+++ b/arch/blackfin/include/asm/dma-mapping.h
@@ -38,7 +38,7 @@ _dma_sync(dma_addr_t addr, size_t size, enum dma_data_direction dir)
 
 extern const struct dma_map_ops bfin_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &bfin_dma_ops;
 }
diff --git a/arch/c6x/include/asm/dma-mapping.h b/arch/c6x/include/asm/dma-mapping.h
index 88258b9ebc8e..aca9f755e4f8 100644
--- a/arch/c6x/include/asm/dma-mapping.h
+++ b/arch/c6x/include/asm/dma-mapping.h
@@ -19,7 +19,7 @@
 
 extern const struct dma_map_ops c6x_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &c6x_dma_ops;
 }
diff --git a/arch/cris/include/asm/dma-mapping.h b/arch/cris/include/asm/dma-mapping.h
index aae4fbc0a656..256169de3743 100644
--- a/arch/cris/include/asm/dma-mapping.h
+++ b/arch/cris/include/asm/dma-mapping.h
@@ -4,12 +4,12 @@
 #ifdef CONFIG_PCI
 extern const struct dma_map_ops v32_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &v32_dma_ops;
 }
 #else
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	BUG();
 	return NULL;
diff --git a/arch/frv/include/asm/dma-mapping.h b/arch/frv/include/asm/dma-mapping.h
index 150cc00544a8..354900917585 100644
--- a/arch/frv/include/asm/dma-mapping.h
+++ b/arch/frv/include/asm/dma-mapping.h
@@ -9,7 +9,7 @@ extern unsigned long __nongprelbss dma_coherent_mem_end;
 
 extern const struct dma_map_ops frv_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &frv_dma_ops;
 }
diff --git a/arch/h8300/include/asm/dma-mapping.h b/arch/h8300/include/asm/dma-mapping.h
index f804bca4c13f..847c7562e046 100644
--- a/arch/h8300/include/asm/dma-mapping.h
+++ b/arch/h8300/include/asm/dma-mapping.h
@@ -3,7 +3,7 @@
 
 extern const struct dma_map_ops h8300_dma_map_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &h8300_dma_map_ops;
 }
diff --git a/arch/hexagon/include/asm/dma-mapping.h b/arch/hexagon/include/asm/dma-mapping.h
index b812e917cd95..d3a87bd9b686 100644
--- a/arch/hexagon/include/asm/dma-mapping.h
+++ b/arch/hexagon/include/asm/dma-mapping.h
@@ -34,11 +34,8 @@ extern int bad_dma_address;
 
 extern const struct dma_map_ops *dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-	if (unlikely(dev == NULL))
-		return NULL;
-
 	return dma_ops;
 }
 
diff --git a/arch/ia64/include/asm/dma-mapping.h b/arch/ia64/include/asm/dma-mapping.h
index 05e467d56d86..73ec3c6f4cfe 100644
--- a/arch/ia64/include/asm/dma-mapping.h
+++ b/arch/ia64/include/asm/dma-mapping.h
@@ -23,7 +23,10 @@ extern void machvec_dma_sync_single(struct device *, dma_addr_t, size_t,
 extern void machvec_dma_sync_sg(struct device *, struct scatterlist *, int,
 				enum dma_data_direction);
 
-#define get_dma_ops(dev) platform_dma_get_ops(dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
+{
+	return platform_dma_get_ops(NULL);
+}
 
 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 {
diff --git a/arch/m68k/include/asm/dma-mapping.h b/arch/m68k/include/asm/dma-mapping.h
index 863509939d5a..9210e470771b 100644
--- a/arch/m68k/include/asm/dma-mapping.h
+++ b/arch/m68k/include/asm/dma-mapping.h
@@ -3,7 +3,7 @@
 
 extern const struct dma_map_ops m68k_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
         return &m68k_dma_ops;
 }
diff --git a/arch/metag/include/asm/dma-mapping.h b/arch/metag/include/asm/dma-mapping.h
index c156a7ac732f..fad3dc3cb210 100644
--- a/arch/metag/include/asm/dma-mapping.h
+++ b/arch/metag/include/asm/dma-mapping.h
@@ -3,7 +3,7 @@
 
 extern const struct dma_map_ops metag_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &metag_dma_ops;
 }
diff --git a/arch/microblaze/include/asm/dma-mapping.h b/arch/microblaze/include/asm/dma-mapping.h
index c7faf2fb51d6..3fad5e722a66 100644
--- a/arch/microblaze/include/asm/dma-mapping.h
+++ b/arch/microblaze/include/asm/dma-mapping.h
@@ -38,7 +38,7 @@
  */
 extern const struct dma_map_ops dma_direct_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &dma_direct_ops;
 }
diff --git a/arch/mips/include/asm/device.h b/arch/mips/include/asm/device.h
index ebc5c1265473..6aa796f1081a 100644
--- a/arch/mips/include/asm/device.h
+++ b/arch/mips/include/asm/device.h
@@ -6,12 +6,7 @@
 #ifndef _ASM_MIPS_DEVICE_H
 #define _ASM_MIPS_DEVICE_H
 
-struct dma_map_ops;
-
 struct dev_archdata {
-	/* DMA operations on that device */
-	const struct dma_map_ops *dma_ops;
-
 #ifdef CONFIG_DMA_PERDEV_COHERENT
 	/* Non-zero if DMA is coherent with CPU caches */
 	bool dma_coherent;
diff --git a/arch/mips/include/asm/dma-mapping.h b/arch/mips/include/asm/dma-mapping.h
index b59b084a7569..aba71385f9d1 100644
--- a/arch/mips/include/asm/dma-mapping.h
+++ b/arch/mips/include/asm/dma-mapping.h
@@ -11,12 +11,9 @@
 
 extern const struct dma_map_ops *mips_dma_map_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-	if (dev && dev->archdata.dma_ops)
-		return dev->archdata.dma_ops;
-	else
-		return mips_dma_map_ops;
+	return mips_dma_map_ops;
 }
 
 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
diff --git a/arch/mips/pci/pci-octeon.c b/arch/mips/pci/pci-octeon.c
index 308d051fc45c..7da99a908229 100644
--- a/arch/mips/pci/pci-octeon.c
+++ b/arch/mips/pci/pci-octeon.c
@@ -167,7 +167,7 @@ int pcibios_plat_dev_init(struct pci_dev *dev)
 		pci_write_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, dconfig);
 	}
 
-	dev->dev.archdata.dma_ops = octeon_pci_dma_map_ops;
+	set_dma_ops(&dev->dev, octeon_pci_dma_map_ops);
 
 	return 0;
 }
diff --git a/arch/mn10300/include/asm/dma-mapping.h b/arch/mn10300/include/asm/dma-mapping.h
index 564e3927e005..737ef574b3ea 100644
--- a/arch/mn10300/include/asm/dma-mapping.h
+++ b/arch/mn10300/include/asm/dma-mapping.h
@@ -16,7 +16,7 @@
 
 extern const struct dma_map_ops mn10300_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &mn10300_dma_ops;
 }
diff --git a/arch/nios2/include/asm/dma-mapping.h b/arch/nios2/include/asm/dma-mapping.h
index aa00d839a64b..7b3c6f280293 100644
--- a/arch/nios2/include/asm/dma-mapping.h
+++ b/arch/nios2/include/asm/dma-mapping.h
@@ -12,7 +12,7 @@
 
 extern const struct dma_map_ops nios2_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &nios2_dma_ops;
 }
diff --git a/arch/openrisc/include/asm/dma-mapping.h b/arch/openrisc/include/asm/dma-mapping.h
index 88acbedb4947..0c0075f17145 100644
--- a/arch/openrisc/include/asm/dma-mapping.h
+++ b/arch/openrisc/include/asm/dma-mapping.h
@@ -30,7 +30,7 @@
 
 extern const struct dma_map_ops or1k_dma_map_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return &or1k_dma_map_ops;
 }
diff --git a/arch/parisc/include/asm/dma-mapping.h b/arch/parisc/include/asm/dma-mapping.h
index 1749073e44fc..5404c6a726b2 100644
--- a/arch/parisc/include/asm/dma-mapping.h
+++ b/arch/parisc/include/asm/dma-mapping.h
@@ -27,7 +27,7 @@ extern const struct dma_map_ops pcx_dma_ops;
 
 extern const struct dma_map_ops *hppa_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return hppa_dma_ops;
 }
diff --git a/arch/powerpc/include/asm/device.h b/arch/powerpc/include/asm/device.h
index 49cbb0fca233..0245bfcaac32 100644
--- a/arch/powerpc/include/asm/device.h
+++ b/arch/powerpc/include/asm/device.h
@@ -6,7 +6,6 @@
 #ifndef _ASM_POWERPC_DEVICE_H
 #define _ASM_POWERPC_DEVICE_H
 
-struct dma_map_ops;
 struct device_node;
 #ifdef CONFIG_PPC64
 struct pci_dn;
@@ -20,9 +19,6 @@ struct iommu_table;
  * drivers/macintosh/macio_asic.c
  */
 struct dev_archdata {
-	/* DMA operations on that device */
-	const struct dma_map_ops	*dma_ops;
-
 	/*
 	 * These two used to be a union. However, with the hybrid ops we need
 	 * both so here we store both a DMA offset for direct mappings and
diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
index 2ec3eadf336f..efdcf87c4c2f 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+++ b/arch/powerpc/include/asm/dma-mapping.h
@@ -78,22 +78,9 @@ extern struct dma_map_ops dma_iommu_ops;
 #endif
 extern const struct dma_map_ops dma_direct_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-	/* We don't handle the NULL dev case for ISA for now. We could
-	 * do it via an out of line call but it is not needed for now. The
-	 * only ISA DMA device we support is the floppy and we have a hack
-	 * in the floppy driver directly to get a device for us.
-	 */
-	if (unlikely(dev == NULL))
-		return NULL;
-
-	return dev->archdata.dma_ops;
-}
-
-static inline void set_dma_ops(struct device *dev, const struct dma_map_ops *ops)
-{
-	dev->archdata.dma_ops = ops;
+	return NULL;
 }
 
 /*
diff --git a/arch/powerpc/include/asm/ps3.h b/arch/powerpc/include/asm/ps3.h
index a19f831a4cc9..17ee719e799f 100644
--- a/arch/powerpc/include/asm/ps3.h
+++ b/arch/powerpc/include/asm/ps3.h
@@ -435,7 +435,7 @@ static inline void *ps3_system_bus_get_drvdata(
 	return dev_get_drvdata(&dev->core);
 }
 
-/* These two need global scope for get_dma_ops(). */
+/* These two need global scope for get_arch_dma_ops(). */
 
 extern struct bus_type ps3_system_bus_type;
 
diff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/kernel/dma.c
index beb18d7a1999..6f4e06b63a80 100644
--- a/arch/powerpc/kernel/dma.c
+++ b/arch/powerpc/kernel/dma.c
@@ -33,7 +33,7 @@ static u64 __maybe_unused get_pfn_limit(struct device *dev)
 	struct dev_archdata __maybe_unused *sd = &dev->archdata;
 
 #ifdef CONFIG_SWIOTLB
-	if (sd->max_direct_dma_addr && sd->dma_ops == &swiotlb_dma_ops)
+	if (sd->max_direct_dma_addr && dev->dma_ops == &swiotlb_dma_ops)
 		pfn = min_t(u64, pfn, sd->max_direct_dma_addr >> PAGE_SHIFT);
 #endif
 
diff --git a/arch/powerpc/kernel/ibmebus.c b/arch/powerpc/kernel/ibmebus.c
index ca2e1f89544e..e2f09f0d1152 100644
--- a/arch/powerpc/kernel/ibmebus.c
+++ b/arch/powerpc/kernel/ibmebus.c
@@ -169,7 +169,7 @@ static int ibmebus_create_device(struct device_node *dn)
 		return -ENOMEM;
 
 	dev->dev.bus = &ibmebus_bus_type;
-	dev->dev.archdata.dma_ops = &ibmebus_dma_ops;
+	set_dma_ops(&dev->dev, &ibmebus_dma_ops);
 
 	ret = of_device_add(dev);
 	if (ret)
diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
index e1413e69e5fe..592a2a0f4860 100644
--- a/arch/powerpc/platforms/cell/iommu.c
+++ b/arch/powerpc/platforms/cell/iommu.c
@@ -692,7 +692,7 @@ static int cell_of_bus_notify(struct notifier_block *nb, unsigned long action,
 		return 0;
 
 	/* We use the PCI DMA ops */
-	dev->archdata.dma_ops = get_pci_dma_ops();
+	set_dma_ops(dev, get_pci_dma_ops());
 
 	cell_dma_dev_setup(dev);
 
diff --git a/arch/powerpc/platforms/pasemi/iommu.c b/arch/powerpc/platforms/pasemi/iommu.c
index e74adc4e7fd8..66bf56788260 100644
--- a/arch/powerpc/platforms/pasemi/iommu.c
+++ b/arch/powerpc/platforms/pasemi/iommu.c
@@ -186,7 +186,7 @@ static void pci_dma_dev_setup_pasemi(struct pci_dev *dev)
 	 */
 	if (dev->vendor == 0x1959 && dev->device == 0xa007 &&
 	    !firmware_has_feature(FW_FEATURE_LPAR)) {
-		dev->dev.archdata.dma_ops = &dma_direct_ops;
+		set_dma_ops(&dev->dev, &dma_direct_ops);
 		/*
 		 * Set the coherent DMA mask to prevent the iommu
 		 * being used unnecessarily
diff --git a/arch/powerpc/platforms/pasemi/setup.c b/arch/powerpc/platforms/pasemi/setup.c
index 3182400cf48f..a00412d369f8 100644
--- a/arch/powerpc/platforms/pasemi/setup.c
+++ b/arch/powerpc/platforms/pasemi/setup.c
@@ -363,7 +363,7 @@ static int pcmcia_notify(struct notifier_block *nb, unsigned long action,
 		return 0;
 
 	/* We use the direct ops for localbus */
-	dev->archdata.dma_ops = &dma_direct_ops;
+	set_dma_ops(dev, &dma_direct_ops);
 
 	return 0;
 }
diff --git a/arch/powerpc/platforms/ps3/system-bus.c b/arch/powerpc/platforms/ps3/system-bus.c
index c81450d98794..b78041049146 100644
--- a/arch/powerpc/platforms/ps3/system-bus.c
+++ b/arch/powerpc/platforms/ps3/system-bus.c
@@ -756,11 +756,11 @@ int ps3_system_bus_device_register(struct ps3_system_bus_device *dev)
 
 	switch (dev->dev_type) {
 	case PS3_DEVICE_TYPE_IOC0:
-		dev->core.archdata.dma_ops = &ps3_ioc0_dma_ops;
+		set_dma_ops(&dev->core, &ps3_ioc0_dma_ops);
 		dev_set_name(&dev->core, "ioc0_%02x", ++dev_ioc0_count);
 		break;
 	case PS3_DEVICE_TYPE_SB:
-		dev->core.archdata.dma_ops = &ps3_sb_dma_ops;
+		set_dma_ops(&dev->core, &ps3_sb_dma_ops);
 		dev_set_name(&dev->core, "sb_%02x", ++dev_sb_count);
 
 		break;
diff --git a/arch/s390/include/asm/device.h b/arch/s390/include/asm/device.h
index 7955a9799466..5203fc87f080 100644
--- a/arch/s390/include/asm/device.h
+++ b/arch/s390/include/asm/device.h
@@ -4,7 +4,6 @@
  * This file is released under the GPLv2
  */
 struct dev_archdata {
-	const struct dma_map_ops *dma_ops;
 };
 
 struct pdev_archdata {
diff --git a/arch/s390/include/asm/dma-mapping.h b/arch/s390/include/asm/dma-mapping.h
index 2776d205b1ff..3108b8dbe266 100644
--- a/arch/s390/include/asm/dma-mapping.h
+++ b/arch/s390/include/asm/dma-mapping.h
@@ -12,10 +12,8 @@
 
 extern const struct dma_map_ops s390_pci_dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-	if (dev && dev->archdata.dma_ops)
-		return dev->archdata.dma_ops;
 	return &dma_noop_ops;
 }
 
diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index 15ffc19c8c0c..1f6b279678c0 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -641,7 +641,7 @@ int pcibios_add_device(struct pci_dev *pdev)
 	int i;
 
 	pdev->dev.groups = zpci_attr_groups;
-	pdev->dev.archdata.dma_ops = &s390_pci_dma_ops;
+	set_dma_ops(&pdev->dev, &s390_pci_dma_ops);
 	zpci_map_resources(pdev);
 
 	for (i = 0; i < PCI_BAR_COUNT; i++) {
diff --git a/arch/sh/include/asm/dma-mapping.h b/arch/sh/include/asm/dma-mapping.h
index a7382c34c241..d99008af5f73 100644
--- a/arch/sh/include/asm/dma-mapping.h
+++ b/arch/sh/include/asm/dma-mapping.h
@@ -4,7 +4,7 @@
 extern const struct dma_map_ops *dma_ops;
 extern void no_iommu_init(void);
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 	return dma_ops;
 }
diff --git a/arch/sparc/include/asm/dma-mapping.h b/arch/sparc/include/asm/dma-mapping.h
index 3d2babc0c4c6..69cc627779f2 100644
--- a/arch/sparc/include/asm/dma-mapping.h
+++ b/arch/sparc/include/asm/dma-mapping.h
@@ -24,14 +24,14 @@ extern const struct dma_map_ops pci32_dma_ops;
 
 extern struct bus_type pci_bus_type;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
 #ifdef CONFIG_SPARC_LEON
 	if (sparc_cpu_model == sparc_leon)
 		return leon_dma_ops;
 #endif
 #if defined(CONFIG_SPARC32) && defined(CONFIG_PCI)
-	if (dev->bus == &pci_bus_type)
+	if (bus == &pci_bus_type)
 		return &pci32_dma_ops;
 #endif
 	return dma_ops;
diff --git a/arch/tile/include/asm/device.h b/arch/tile/include/asm/device.h
index 25f23ac7d361..1cf45422a0df 100644
--- a/arch/tile/include/asm/device.h
+++ b/arch/tile/include/asm/device.h
@@ -17,9 +17,6 @@
 #define _ASM_TILE_DEVICE_H
 
 struct dev_archdata {
-	/* DMA operations on that device */
-        const struct dma_map_ops	*dma_ops;
-
 	/* Offset of the DMA address from the PA. */
 	dma_addr_t		dma_offset;
 
diff --git a/arch/tile/include/asm/dma-mapping.h b/arch/tile/include/asm/dma-mapping.h
index 4a06cc75b856..bbc71a29b2c6 100644
--- a/arch/tile/include/asm/dma-mapping.h
+++ b/arch/tile/include/asm/dma-mapping.h
@@ -29,12 +29,9 @@ extern const struct dma_map_ops *gx_pci_dma_map_ops;
 extern const struct dma_map_ops *gx_legacy_pci_dma_map_ops;
 extern const struct dma_map_ops *gx_hybrid_pci_dma_map_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-	if (dev && dev->archdata.dma_ops)
-		return dev->archdata.dma_ops;
-	else
-		return tile_dma_map_ops;
+	return tile_dma_map_ops;
 }
 
 static inline dma_addr_t get_dma_offset(struct device *dev)
@@ -59,11 +56,6 @@ static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
 
 static inline void dma_mark_clean(void *addr, size_t size) {}
 
-static inline void set_dma_ops(struct device *dev, const struct dma_map_ops *ops)
-{
-	dev->archdata.dma_ops = ops;
-}
-
 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 {
 	if (!dev->dma_mask)
diff --git a/arch/x86/include/asm/device.h b/arch/x86/include/asm/device.h
index b2d0b4ced7e3..1b3ef26e77df 100644
--- a/arch/x86/include/asm/device.h
+++ b/arch/x86/include/asm/device.h
@@ -2,9 +2,6 @@
 #define _ASM_X86_DEVICE_H
 
 struct dev_archdata {
-#ifdef CONFIG_X86_DEV_DMA_OPS
-	const struct dma_map_ops *dma_ops;
-#endif
 #if defined(CONFIG_INTEL_IOMMU) || defined(CONFIG_AMD_IOMMU)
 	void *iommu; /* hook for IOMMU specific extension */
 #endif
diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 5e4772886a1e..08a0838b83fb 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -27,16 +27,9 @@ extern int panic_on_overflow;
 
 extern const struct dma_map_ops *dma_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-#ifndef CONFIG_X86_DEV_DMA_OPS
 	return dma_ops;
-#else
-	if (unlikely(!dev) || !dev->archdata.dma_ops)
-		return dma_ops;
-	else
-		return dev->archdata.dma_ops;
-#endif
 }
 
 bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c
index 17f180148c80..87f9a1ff7cf6 100644
--- a/arch/x86/kernel/pci-calgary_64.c
+++ b/arch/x86/kernel/pci-calgary_64.c
@@ -1177,7 +1177,7 @@ static int __init calgary_init(void)
 		tbl = find_iommu_table(&dev->dev);
 
 		if (translation_enabled(tbl))
-			dev->dev.archdata.dma_ops = &calgary_dma_ops;
+			set_dma_ops(&dev->dev, &calgary_dma_ops);
 	}
 
 	return ret;
@@ -1201,7 +1201,7 @@ static int __init calgary_init(void)
 		calgary_disable_translation(dev);
 		calgary_free_bus(dev);
 		pci_dev_put(dev); /* Undo calgary_init_one()'s pci_dev_get() */
-		dev->dev.archdata.dma_ops = NULL;
+		set_dma_ops(&dev->dev, NULL);
 	} while (1);
 
 	return ret;
diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index a4fdfa7dcc1b..944e13c1a1e4 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -667,7 +667,7 @@ static void set_dma_domain_ops(struct pci_dev *pdev)
 	spin_lock(&dma_domain_list_lock);
 	list_for_each_entry(domain, &dma_domain_list, node) {
 		if (pci_domain_nr(pdev->bus) == domain->domain_nr) {
-			pdev->dev.archdata.dma_ops = domain->dma_ops;
+			set_dma_ops(&pdev->dev, domain->dma_ops);
 			break;
 		}
 	}
diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c
index aa3828823170..21008548f225 100644
--- a/arch/x86/pci/sta2x11-fixup.c
+++ b/arch/x86/pci/sta2x11-fixup.c
@@ -203,7 +203,7 @@ static void sta2x11_setup_pdev(struct pci_dev *pdev)
 		return;
 	pci_set_consistent_dma_mask(pdev, STA2X11_AMBA_SIZE - 1);
 	pci_set_dma_mask(pdev, STA2X11_AMBA_SIZE - 1);
-	pdev->dev.archdata.dma_ops = &sta2x11_dma_ops;
+	set_dma_ops(&pdev->dev, &sta2x11_dma_ops);
 
 	/* We must enable all devices as master, for audio DMA to work */
 	pci_set_master(pdev);
@@ -223,7 +223,7 @@ bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 {
 	struct sta2x11_mapping *map;
 
-	if (dev->archdata.dma_ops != &sta2x11_dma_ops) {
+	if (dev->dma_ops != &sta2x11_dma_ops) {
 		if (!dev->dma_mask)
 			return false;
 		return addr + size - 1 <= *dev->dma_mask;
@@ -247,7 +247,7 @@ bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
  */
 dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
 {
-	if (dev->archdata.dma_ops != &sta2x11_dma_ops)
+	if (dev->dma_ops != &sta2x11_dma_ops)
 		return paddr;
 	return p2a(paddr, to_pci_dev(dev));
 }
@@ -259,7 +259,7 @@ dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
  */
 phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
 {
-	if (dev->archdata.dma_ops != &sta2x11_dma_ops)
+	if (dev->dma_ops != &sta2x11_dma_ops)
 		return daddr;
 	return a2p(daddr, to_pci_dev(dev));
 }
diff --git a/arch/xtensa/include/asm/device.h b/arch/xtensa/include/asm/device.h
index a77d45d39f35..1deeb8ebbb1b 100644
--- a/arch/xtensa/include/asm/device.h
+++ b/arch/xtensa/include/asm/device.h
@@ -6,11 +6,7 @@
 #ifndef _ASM_XTENSA_DEVICE_H
 #define _ASM_XTENSA_DEVICE_H
 
-struct dma_map_ops;
-
 struct dev_archdata {
-	/* DMA operations on that device */
-	const struct dma_map_ops *dma_ops;
 };
 
 struct pdev_archdata {
diff --git a/arch/xtensa/include/asm/dma-mapping.h b/arch/xtensa/include/asm/dma-mapping.h
index 50d23106cce0..c6140fa8c0be 100644
--- a/arch/xtensa/include/asm/dma-mapping.h
+++ b/arch/xtensa/include/asm/dma-mapping.h
@@ -20,12 +20,9 @@
 
 extern const struct dma_map_ops xtensa_dma_map_ops;
 
-static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-	if (dev && dev->archdata.dma_ops)
-		return dev->archdata.dma_ops;
-	else
-		return &xtensa_dma_map_ops;
+	return &xtensa_dma_map_ops;
 }
 
 void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 0951e4fe02c2..6004bcdfc12b 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -2467,7 +2467,7 @@ static void srpt_add_one(struct ib_device *device)
 	int i;
 
 	pr_debug("device = %p, device->dma_ops = %p\n", device,
-		 device->dma_ops);
+		 get_dma_ops(device->dma_device));
 
 	sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
 	if (!sdev)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 03fca30fa39a..4d7287c51752 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -513,7 +513,7 @@ static void iommu_uninit_device(struct device *dev)
 	iommu_group_remove_device(dev);
 
 	/* Remove dma-ops */
-	dev->archdata.dma_ops = NULL;
+	set_dma_ops(dev, NULL);
 
 	/*
 	 * We keep dev_data around for unplugged devices and reuse it when the
@@ -2162,7 +2162,7 @@ static int amd_iommu_add_device(struct device *dev)
 				dev_name(dev));
 
 		iommu_ignore_device(dev);
-		dev->archdata.dma_ops = &nommu_dma_ops;
+		set_dma_ops(dev, &nommu_dma_ops);
 		goto out;
 	}
 	init_iommu_group(dev);
@@ -2179,7 +2179,7 @@ static int amd_iommu_add_device(struct device *dev)
 	if (domain->type == IOMMU_DOMAIN_IDENTITY)
 		dev_data->passthrough = true;
 	else
-		dev->archdata.dma_ops = &amd_iommu_dma_ops;
+		set_dma_ops(dev, &amd_iommu_dma_ops);
 
 out:
 	iommu_completion_wait(iommu);
diff --git a/drivers/misc/mic/bus/mic_bus.c b/drivers/misc/mic/bus/mic_bus.c
index c4b27a25662a..ee6e4ef370ea 100644
--- a/drivers/misc/mic/bus/mic_bus.c
+++ b/drivers/misc/mic/bus/mic_bus.c
@@ -158,7 +158,7 @@ mbus_register_device(struct device *pdev, int id, const struct dma_map_ops *dma_
 	mbdev->dev.parent = pdev;
 	mbdev->id.device = id;
 	mbdev->id.vendor = MBUS_DEV_ANY_ID;
-	mbdev->dev.archdata.dma_ops = dma_ops;
+	set_dma_ops(&mbdev->dev, dma_ops);
 	mbdev->dev.dma_mask = &mbdev->dev.coherent_dma_mask;
 	dma_set_mask(&mbdev->dev, DMA_BIT_MASK(64));
 	mbdev->dev.release = mbus_release_dev;
diff --git a/drivers/misc/mic/bus/scif_bus.c b/drivers/misc/mic/bus/scif_bus.c
index e5d377e97c86..d4d559cad6a1 100644
--- a/drivers/misc/mic/bus/scif_bus.c
+++ b/drivers/misc/mic/bus/scif_bus.c
@@ -154,7 +154,7 @@ scif_register_device(struct device *pdev, int id, const struct dma_map_ops *dma_
 	sdev->dev.parent = pdev;
 	sdev->id.device = id;
 	sdev->id.vendor = SCIF_DEV_ANY_ID;
-	sdev->dev.archdata.dma_ops = dma_ops;
+	set_dma_ops(&sdev->dev, dma_ops);
 	sdev->dev.release = scif_release_dev;
 	sdev->hw_ops = hw_ops;
 	sdev->dnode = dnode;
diff --git a/drivers/misc/mic/bus/vop_bus.c b/drivers/misc/mic/bus/vop_bus.c
index e3caa6c53922..c96a05f811f2 100644
--- a/drivers/misc/mic/bus/vop_bus.c
+++ b/drivers/misc/mic/bus/vop_bus.c
@@ -154,7 +154,7 @@ vop_register_device(struct device *pdev, int id,
 	vdev->dev.parent = pdev;
 	vdev->id.device = id;
 	vdev->id.vendor = VOP_DEV_ANY_ID;
-	vdev->dev.archdata.dma_ops = dma_ops;
+	set_dma_ops(&vdev->dev, dma_ops);
 	vdev->dev.dma_mask = &vdev->dev.coherent_dma_mask;
 	dma_set_mask(&vdev->dev, DMA_BIT_MASK(64));
 	vdev->dev.release = vop_release_dev;
diff --git a/include/linux/device.h b/include/linux/device.h
index bc41e87a969b..e887b22a37e5 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -780,6 +780,8 @@ struct device_dma_parameters {
  * a higher-level representation of the device.
  */
 struct device {
+	const struct dma_map_ops *dma_ops; /* See also get_dma_ops() */
+
 	struct device		*parent;
 
 	struct device_private	*p;
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index c4e4b80d3843..50d059e08720 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -164,6 +164,18 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 
 #ifdef CONFIG_HAS_DMA
 #include <asm/dma-mapping.h>
+static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
+{
+	if (dev && dev->dma_ops)
+		return dev->dma_ops;
+	return get_arch_dma_ops(dev ? dev->bus : NULL);
+}
+
+static inline void set_dma_ops(struct device *dev,
+			       const struct dma_map_ops *dma_ops)
+{
+	dev->dma_ops = dma_ops;
+}
 #else
 /*
  * Define the dma api to allow compilation but not linking of
-- 
2.11.0


* [PATCH 4/5] IB: Switch from struct ib_dma_mapping_ops to struct dma_map_ops
  2016-12-08  1:11   ` [PATCH 3/5] Move dma_ops from archdata into struct device Bart Van Assche
@ 2016-12-08  1:11   ` Bart Van Assche
  2016-12-08  1:12   ` [PATCH 5/5] treewide: Inline ib_dma_map_*() functions Bart Van Assche
  2016-12-08  6:48   ` [PATCH, RFC 0/5] IB: Optimize DMA mapping Or Gerlitz
  5 siblings, 1 reply; 36+ messages in thread
From: Bart Van Assche @ 2016-12-08  1:11 UTC (permalink / raw)
  To: Doug Ledford, Christoph Hellwig, Sagi Grimberg
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Use the generic DMA mapping operations (struct dma_map_ops) instead of
the IB-specific struct ib_dma_mapping_ops, and use the shared
dma_noop_ops instead of duplicating its functionality in each SW driver.

This patch eliminates one branch from every ib_dma_map_*() call.
---
 drivers/infiniband/hw/hfi1/dma.c      | 183 -------------------------------
 drivers/infiniband/hw/qib/qib_dma.c   | 169 -----------------------------
 drivers/infiniband/hw/qib/qib_keys.c  |   2 +-
 drivers/infiniband/sw/rdmavt/Makefile |   2 +-
 drivers/infiniband/sw/rdmavt/dma.c    | 198 ----------------------------------
 drivers/infiniband/sw/rdmavt/dma.h    |  53 ---------
 drivers/infiniband/sw/rdmavt/mr.c     |   4 +-
 drivers/infiniband/sw/rdmavt/vt.c     |   4 +-
 drivers/infiniband/sw/rdmavt/vt.h     |   1 -
 drivers/infiniband/sw/rxe/Makefile    |   1 -
 drivers/infiniband/sw/rxe/rxe_dma.c   | 183 -------------------------------
 drivers/infiniband/sw/rxe/rxe_loc.h   |   2 -
 drivers/infiniband/sw/rxe/rxe_verbs.c |   2 +-
 include/rdma/ib_verbs.h               | 117 +++-----------------
 14 files changed, 22 insertions(+), 899 deletions(-)
 delete mode 100644 drivers/infiniband/hw/hfi1/dma.c
 delete mode 100644 drivers/infiniband/hw/qib/qib_dma.c
 delete mode 100644 drivers/infiniband/sw/rdmavt/dma.c
 delete mode 100644 drivers/infiniband/sw/rdmavt/dma.h
 delete mode 100644 drivers/infiniband/sw/rxe/rxe_dma.c

diff --git a/drivers/infiniband/hw/hfi1/dma.c b/drivers/infiniband/hw/hfi1/dma.c
deleted file mode 100644
index 7e8dab892848..000000000000
--- a/drivers/infiniband/hw/hfi1/dma.c
+++ /dev/null
@@ -1,183 +0,0 @@
-/*
- * Copyright(c) 2015, 2016 Intel Corporation.
- *
- * This file is provided under a dual BSD/GPLv2 license.  When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- *
- * BSD LICENSE
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *  - Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- *  - Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in
- *    the documentation and/or other materials provided with the
- *    distribution.
- *  - Neither the name of Intel Corporation nor the names of its
- *    contributors may be used to endorse or promote products derived
- *    from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- */
-#include <linux/types.h>
-#include <linux/scatterlist.h>
-
-#include "verbs.h"
-
-#define BAD_DMA_ADDRESS ((u64)0)
-
-/*
- * The following functions implement driver specific replacements
- * for the ib_dma_*() functions.
- *
- * These functions return kernel virtual addresses instead of
- * device bus addresses since the driver uses the CPU to copy
- * data instead of using hardware DMA.
- */
-
-static int hfi1_mapping_error(struct ib_device *dev, u64 dma_addr)
-{
-	return dma_addr == BAD_DMA_ADDRESS;
-}
-
-static u64 hfi1_dma_map_single(struct ib_device *dev, void *cpu_addr,
-			       size_t size, enum dma_data_direction direction)
-{
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return BAD_DMA_ADDRESS;
-
-	return (u64)cpu_addr;
-}
-
-static void hfi1_dma_unmap_single(struct ib_device *dev, u64 addr, size_t size,
-				  enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static u64 hfi1_dma_map_page(struct ib_device *dev, struct page *page,
-			     unsigned long offset, size_t size,
-			    enum dma_data_direction direction)
-{
-	u64 addr;
-
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return BAD_DMA_ADDRESS;
-
-	if (offset + size > PAGE_SIZE)
-		return BAD_DMA_ADDRESS;
-
-	addr = (u64)page_address(page);
-	if (addr)
-		addr += offset;
-
-	return addr;
-}
-
-static void hfi1_dma_unmap_page(struct ib_device *dev, u64 addr, size_t size,
-				enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static int hfi1_map_sg(struct ib_device *dev, struct scatterlist *sgl,
-		       int nents, enum dma_data_direction direction)
-{
-	struct scatterlist *sg;
-	u64 addr;
-	int i;
-	int ret = nents;
-
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return BAD_DMA_ADDRESS;
-
-	for_each_sg(sgl, sg, nents, i) {
-		addr = (u64)page_address(sg_page(sg));
-		if (!addr) {
-			ret = 0;
-			break;
-		}
-		sg->dma_address = addr + sg->offset;
-#ifdef CONFIG_NEED_SG_DMA_LENGTH
-		sg->dma_length = sg->length;
-#endif
-	}
-	return ret;
-}
-
-static void hfi1_unmap_sg(struct ib_device *dev,
-			  struct scatterlist *sg, int nents,
-			 enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static void hfi1_sync_single_for_cpu(struct ib_device *dev, u64 addr,
-				     size_t size, enum dma_data_direction dir)
-{
-}
-
-static void hfi1_sync_single_for_device(struct ib_device *dev, u64 addr,
-					size_t size,
-					enum dma_data_direction dir)
-{
-}
-
-static void *hfi1_dma_alloc_coherent(struct ib_device *dev, size_t size,
-				     u64 *dma_handle, gfp_t flag)
-{
-	struct page *p;
-	void *addr = NULL;
-
-	p = alloc_pages(flag, get_order(size));
-	if (p)
-		addr = page_address(p);
-	if (dma_handle)
-		*dma_handle = (u64)addr;
-	return addr;
-}
-
-static void hfi1_dma_free_coherent(struct ib_device *dev, size_t size,
-				   void *cpu_addr, u64 dma_handle)
-{
-	free_pages((unsigned long)cpu_addr, get_order(size));
-}
-
-struct ib_dma_mapping_ops hfi1_dma_mapping_ops = {
-	.mapping_error = hfi1_mapping_error,
-	.map_single = hfi1_dma_map_single,
-	.unmap_single = hfi1_dma_unmap_single,
-	.map_page = hfi1_dma_map_page,
-	.unmap_page = hfi1_dma_unmap_page,
-	.map_sg = hfi1_map_sg,
-	.unmap_sg = hfi1_unmap_sg,
-	.sync_single_for_cpu = hfi1_sync_single_for_cpu,
-	.sync_single_for_device = hfi1_sync_single_for_device,
-	.alloc_coherent = hfi1_dma_alloc_coherent,
-	.free_coherent = hfi1_dma_free_coherent
-};
diff --git a/drivers/infiniband/hw/qib/qib_dma.c b/drivers/infiniband/hw/qib/qib_dma.c
deleted file mode 100644
index 59fe092b4b0f..000000000000
--- a/drivers/infiniband/hw/qib/qib_dma.c
+++ /dev/null
@@ -1,169 +0,0 @@
-/*
- * Copyright (c) 2006, 2009, 2010 QLogic, Corporation. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *      - Redistributions of source code must retain the above
- *        copyright notice, this list of conditions and the following
- *        disclaimer.
- *
- *      - Redistributions in binary form must reproduce the above
- *        copyright notice, this list of conditions and the following
- *        disclaimer in the documentation and/or other materials
- *        provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- */
-#include <linux/types.h>
-#include <linux/scatterlist.h>
-
-#include "qib_verbs.h"
-
-#define BAD_DMA_ADDRESS ((u64) 0)
-
-/*
- * The following functions implement driver specific replacements
- * for the ib_dma_*() functions.
- *
- * These functions return kernel virtual addresses instead of
- * device bus addresses since the driver uses the CPU to copy
- * data instead of using hardware DMA.
- */
-
-static int qib_mapping_error(struct ib_device *dev, u64 dma_addr)
-{
-	return dma_addr == BAD_DMA_ADDRESS;
-}
-
-static u64 qib_dma_map_single(struct ib_device *dev, void *cpu_addr,
-			      size_t size, enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-	return (u64) cpu_addr;
-}
-
-static void qib_dma_unmap_single(struct ib_device *dev, u64 addr, size_t size,
-				 enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-}
-
-static u64 qib_dma_map_page(struct ib_device *dev, struct page *page,
-			    unsigned long offset, size_t size,
-			    enum dma_data_direction direction)
-{
-	u64 addr;
-
-	BUG_ON(!valid_dma_direction(direction));
-
-	if (offset + size > PAGE_SIZE) {
-		addr = BAD_DMA_ADDRESS;
-		goto done;
-	}
-
-	addr = (u64) page_address(page);
-	if (addr)
-		addr += offset;
-	/* TODO: handle highmem pages */
-
-done:
-	return addr;
-}
-
-static void qib_dma_unmap_page(struct ib_device *dev, u64 addr, size_t size,
-			       enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-}
-
-static int qib_map_sg(struct ib_device *dev, struct scatterlist *sgl,
-		      int nents, enum dma_data_direction direction)
-{
-	struct scatterlist *sg;
-	u64 addr;
-	int i;
-	int ret = nents;
-
-	BUG_ON(!valid_dma_direction(direction));
-
-	for_each_sg(sgl, sg, nents, i) {
-		addr = (u64) page_address(sg_page(sg));
-		/* TODO: handle highmem pages */
-		if (!addr) {
-			ret = 0;
-			break;
-		}
-		sg->dma_address = addr + sg->offset;
-#ifdef CONFIG_NEED_SG_DMA_LENGTH
-		sg->dma_length = sg->length;
-#endif
-	}
-	return ret;
-}
-
-static void qib_unmap_sg(struct ib_device *dev,
-			 struct scatterlist *sg, int nents,
-			 enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-}
-
-static void qib_sync_single_for_cpu(struct ib_device *dev, u64 addr,
-				    size_t size, enum dma_data_direction dir)
-{
-}
-
-static void qib_sync_single_for_device(struct ib_device *dev, u64 addr,
-				       size_t size,
-				       enum dma_data_direction dir)
-{
-}
-
-static void *qib_dma_alloc_coherent(struct ib_device *dev, size_t size,
-				    u64 *dma_handle, gfp_t flag)
-{
-	struct page *p;
-	void *addr = NULL;
-
-	p = alloc_pages(flag, get_order(size));
-	if (p)
-		addr = page_address(p);
-	if (dma_handle)
-		*dma_handle = (u64) addr;
-	return addr;
-}
-
-static void qib_dma_free_coherent(struct ib_device *dev, size_t size,
-				  void *cpu_addr, u64 dma_handle)
-{
-	free_pages((unsigned long) cpu_addr, get_order(size));
-}
-
-struct ib_dma_mapping_ops qib_dma_mapping_ops = {
-	.mapping_error = qib_mapping_error,
-	.map_single = qib_dma_map_single,
-	.unmap_single = qib_dma_unmap_single,
-	.map_page = qib_dma_map_page,
-	.unmap_page = qib_dma_unmap_page,
-	.map_sg = qib_map_sg,
-	.unmap_sg = qib_unmap_sg,
-	.sync_single_for_cpu = qib_sync_single_for_cpu,
-	.sync_single_for_device = qib_sync_single_for_device,
-	.alloc_coherent = qib_dma_alloc_coherent,
-	.free_coherent = qib_dma_free_coherent
-};
diff --git a/drivers/infiniband/hw/qib/qib_keys.c b/drivers/infiniband/hw/qib/qib_keys.c
index 2c3c93572c17..d2f64a03733e 100644
--- a/drivers/infiniband/hw/qib/qib_keys.c
+++ b/drivers/infiniband/hw/qib/qib_keys.c
@@ -160,7 +160,7 @@ int qib_rkey_ok(struct rvt_qp *qp, struct rvt_sge *sge,
 
 	/*
 	 * We use RKEY == zero for kernel virtual addresses
-	 * (see qib_get_dma_mr and qib_dma.c).
+	 * (see qib_get_dma_mr).
 	 */
 	rcu_read_lock();
 	if (rkey == 0) {
diff --git a/drivers/infiniband/sw/rdmavt/Makefile b/drivers/infiniband/sw/rdmavt/Makefile
index ccaa7992ac97..2a821d2fb569 100644
--- a/drivers/infiniband/sw/rdmavt/Makefile
+++ b/drivers/infiniband/sw/rdmavt/Makefile
@@ -7,7 +7,7 @@
 #
 obj-$(CONFIG_INFINIBAND_RDMAVT) += rdmavt.o
 
-rdmavt-y := vt.o ah.o cq.o dma.o mad.o mcast.o mmap.o mr.o pd.o qp.o srq.o \
+rdmavt-y := vt.o ah.o cq.o mad.o mcast.o mmap.o mr.o pd.o qp.o srq.o \
 	trace.o
 
 CFLAGS_trace.o = -I$(src)
diff --git a/drivers/infiniband/sw/rdmavt/dma.c b/drivers/infiniband/sw/rdmavt/dma.c
deleted file mode 100644
index f2cefb0d9180..000000000000
--- a/drivers/infiniband/sw/rdmavt/dma.c
+++ /dev/null
@@ -1,198 +0,0 @@
-/*
- * Copyright(c) 2016 Intel Corporation.
- *
- * This file is provided under a dual BSD/GPLv2 license.  When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- *
- * BSD LICENSE
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *  - Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- *  - Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in
- *    the documentation and/or other materials provided with the
- *    distribution.
- *  - Neither the name of Intel Corporation nor the names of its
- *    contributors may be used to endorse or promote products derived
- *    from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- */
-#include <linux/types.h>
-#include <linux/scatterlist.h>
-#include <rdma/ib_verbs.h>
-
-#include "dma.h"
-
-#define BAD_DMA_ADDRESS ((u64)0)
-
-/*
- * The following functions implement driver specific replacements
- * for the ib_dma_*() functions.
- *
- * These functions return kernel virtual addresses instead of
- * device bus addresses since the driver uses the CPU to copy
- * data instead of using hardware DMA.
- */
-
-static int rvt_mapping_error(struct ib_device *dev, u64 dma_addr)
-{
-	return dma_addr == BAD_DMA_ADDRESS;
-}
-
-static u64 rvt_dma_map_single(struct ib_device *dev, void *cpu_addr,
-			      size_t size, enum dma_data_direction direction)
-{
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return BAD_DMA_ADDRESS;
-
-	return (u64)cpu_addr;
-}
-
-static void rvt_dma_unmap_single(struct ib_device *dev, u64 addr, size_t size,
-				 enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static u64 rvt_dma_map_page(struct ib_device *dev, struct page *page,
-			    unsigned long offset, size_t size,
-			    enum dma_data_direction direction)
-{
-	u64 addr;
-
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return BAD_DMA_ADDRESS;
-
-	addr = (u64)page_address(page);
-	if (addr)
-		addr += offset;
-
-	return addr;
-}
-
-static void rvt_dma_unmap_page(struct ib_device *dev, u64 addr, size_t size,
-			       enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static int rvt_map_sg(struct ib_device *dev, struct scatterlist *sgl,
-		      int nents, enum dma_data_direction direction)
-{
-	struct scatterlist *sg;
-	u64 addr;
-	int i;
-	int ret = nents;
-
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return 0;
-
-	for_each_sg(sgl, sg, nents, i) {
-		addr = (u64)page_address(sg_page(sg));
-		if (!addr) {
-			ret = 0;
-			break;
-		}
-		sg->dma_address = addr + sg->offset;
-#ifdef CONFIG_NEED_SG_DMA_LENGTH
-		sg->dma_length = sg->length;
-#endif
-	}
-	return ret;
-}
-
-static void rvt_unmap_sg(struct ib_device *dev,
-			 struct scatterlist *sg, int nents,
-			 enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static int rvt_map_sg_attrs(struct ib_device *dev, struct scatterlist *sgl,
-			    int nents, enum dma_data_direction direction,
-			    unsigned long attrs)
-{
-	return rvt_map_sg(dev, sgl, nents, direction);
-}
-
-static void rvt_unmap_sg_attrs(struct ib_device *dev,
-			       struct scatterlist *sg, int nents,
-			       enum dma_data_direction direction,
-			       unsigned long attrs)
-{
-	return rvt_unmap_sg(dev, sg, nents, direction);
-}
-
-static void rvt_sync_single_for_cpu(struct ib_device *dev, u64 addr,
-				    size_t size, enum dma_data_direction dir)
-{
-}
-
-static void rvt_sync_single_for_device(struct ib_device *dev, u64 addr,
-				       size_t size,
-				       enum dma_data_direction dir)
-{
-}
-
-static void *rvt_dma_alloc_coherent(struct ib_device *dev, size_t size,
-				    u64 *dma_handle, gfp_t flag)
-{
-	struct page *p;
-	void *addr = NULL;
-
-	p = alloc_pages(flag, get_order(size));
-	if (p)
-		addr = page_address(p);
-	if (dma_handle)
-		*dma_handle = (u64)addr;
-	return addr;
-}
-
-static void rvt_dma_free_coherent(struct ib_device *dev, size_t size,
-				  void *cpu_addr, u64 dma_handle)
-{
-	free_pages((unsigned long)cpu_addr, get_order(size));
-}
-
-struct ib_dma_mapping_ops rvt_default_dma_mapping_ops = {
-	.mapping_error = rvt_mapping_error,
-	.map_single = rvt_dma_map_single,
-	.unmap_single = rvt_dma_unmap_single,
-	.map_page = rvt_dma_map_page,
-	.unmap_page = rvt_dma_unmap_page,
-	.map_sg = rvt_map_sg,
-	.unmap_sg = rvt_unmap_sg,
-	.map_sg_attrs = rvt_map_sg_attrs,
-	.unmap_sg_attrs = rvt_unmap_sg_attrs,
-	.sync_single_for_cpu = rvt_sync_single_for_cpu,
-	.sync_single_for_device = rvt_sync_single_for_device,
-	.alloc_coherent = rvt_dma_alloc_coherent,
-	.free_coherent = rvt_dma_free_coherent
-};
diff --git a/drivers/infiniband/sw/rdmavt/dma.h b/drivers/infiniband/sw/rdmavt/dma.h
deleted file mode 100644
index 979f07e09195..000000000000
--- a/drivers/infiniband/sw/rdmavt/dma.h
+++ /dev/null
@@ -1,53 +0,0 @@
-#ifndef DEF_RDMAVTDMA_H
-#define DEF_RDMAVTDMA_H
-
-/*
- * Copyright(c) 2016 Intel Corporation.
- *
- * This file is provided under a dual BSD/GPLv2 license.  When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- *
- * BSD LICENSE
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *  - Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- *  - Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in
- *    the documentation and/or other materials provided with the
- *    distribution.
- *  - Neither the name of Intel Corporation nor the names of its
- *    contributors may be used to endorse or promote products derived
- *    from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- */
-
-extern struct ib_dma_mapping_ops rvt_default_dma_mapping_ops;
-
-#endif          /* DEF_RDMAVTDMA_H */
diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
index 46b64970058e..e12f1345bbab 100644
--- a/drivers/infiniband/sw/rdmavt/mr.c
+++ b/drivers/infiniband/sw/rdmavt/mr.c
@@ -303,8 +303,8 @@ static void __rvt_free_mr(struct rvt_mr *mr)
  * @acc: access flags
  *
  * Return: the memory region on success, otherwise returns an errno.
- * Note that all DMA addresses should be created via the
- * struct ib_dma_mapping_ops functions (see dma.c).
+ * Note that all DMA addresses should be created via the functions in
+ * struct dma_noop_ops.
  */
 struct ib_mr *rvt_get_dma_mr(struct ib_pd *pd, int acc)
 {
diff --git a/drivers/infiniband/sw/rdmavt/vt.c b/drivers/infiniband/sw/rdmavt/vt.c
index d430c2f7cec4..1014a813a942 100644
--- a/drivers/infiniband/sw/rdmavt/vt.c
+++ b/drivers/infiniband/sw/rdmavt/vt.c
@@ -777,8 +777,8 @@ int rvt_register_device(struct rvt_dev_info *rdi)
 	}
 
 	/* DMA Operations */
-	rdi->ibdev.dma_ops =
-		rdi->ibdev.dma_ops ? : &rvt_default_dma_mapping_ops;
+	if (!rdi->ibdev.dma_device->dma_ops)
+		set_dma_ops(rdi->ibdev.dma_device, &dma_noop_ops);
 
 	/* Protection Domain */
 	spin_lock_init(&rdi->n_pds_lock);
diff --git a/drivers/infiniband/sw/rdmavt/vt.h b/drivers/infiniband/sw/rdmavt/vt.h
index 6b01eaa4461b..f363505312be 100644
--- a/drivers/infiniband/sw/rdmavt/vt.h
+++ b/drivers/infiniband/sw/rdmavt/vt.h
@@ -50,7 +50,6 @@
 
 #include <rdma/rdma_vt.h>
 #include <linux/pci.h>
-#include "dma.h"
 #include "pd.h"
 #include "qp.h"
 #include "ah.h"
diff --git a/drivers/infiniband/sw/rxe/Makefile b/drivers/infiniband/sw/rxe/Makefile
index 3b3fb9d1c470..ec35ff022a42 100644
--- a/drivers/infiniband/sw/rxe/Makefile
+++ b/drivers/infiniband/sw/rxe/Makefile
@@ -14,7 +14,6 @@ rdma_rxe-y := \
 	rxe_qp.o \
 	rxe_cq.o \
 	rxe_mr.o \
-	rxe_dma.o \
 	rxe_opcode.o \
 	rxe_mmap.o \
 	rxe_icrc.o \
diff --git a/drivers/infiniband/sw/rxe/rxe_dma.c b/drivers/infiniband/sw/rxe/rxe_dma.c
deleted file mode 100644
index a0f8af5851ae..000000000000
--- a/drivers/infiniband/sw/rxe/rxe_dma.c
+++ /dev/null
@@ -1,183 +0,0 @@
-/*
- * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
- * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- */
-
-#include "rxe.h"
-#include "rxe_loc.h"
-
-#define DMA_BAD_ADDER ((u64)0)
-
-static int rxe_mapping_error(struct ib_device *dev, u64 dma_addr)
-{
-	return dma_addr == DMA_BAD_ADDER;
-}
-
-static u64 rxe_dma_map_single(struct ib_device *dev,
-			      void *cpu_addr, size_t size,
-			      enum dma_data_direction direction)
-{
-	WARN_ON(!valid_dma_direction(direction));
-	return (uintptr_t)cpu_addr;
-}
-
-static void rxe_dma_unmap_single(struct ib_device *dev,
-				 u64 addr, size_t size,
-				 enum dma_data_direction direction)
-{
-	WARN_ON(!valid_dma_direction(direction));
-}
-
-static u64 rxe_dma_map_page(struct ib_device *dev,
-			    struct page *page,
-			    unsigned long offset,
-			    size_t size, enum dma_data_direction direction)
-{
-	u64 addr;
-
-	WARN_ON(!valid_dma_direction(direction));
-
-	if (offset + size > PAGE_SIZE) {
-		addr = DMA_BAD_ADDER;
-		goto done;
-	}
-
-	addr = (uintptr_t)page_address(page);
-	if (addr)
-		addr += offset;
-
-done:
-	return addr;
-}
-
-static void rxe_dma_unmap_page(struct ib_device *dev,
-			       u64 addr, size_t size,
-			       enum dma_data_direction direction)
-{
-	WARN_ON(!valid_dma_direction(direction));
-}
-
-static int rxe_map_sg(struct ib_device *dev, struct scatterlist *sgl,
-		      int nents, enum dma_data_direction direction)
-{
-	struct scatterlist *sg;
-	u64 addr;
-	int i;
-	int ret = nents;
-
-	WARN_ON(!valid_dma_direction(direction));
-
-	for_each_sg(sgl, sg, nents, i) {
-		addr = (uintptr_t)page_address(sg_page(sg));
-		if (!addr) {
-			ret = 0;
-			break;
-		}
-		sg->dma_address = addr + sg->offset;
-#ifdef CONFIG_NEED_SG_DMA_LENGTH
-		sg->dma_length = sg->length;
-#endif
-	}
-
-	return ret;
-}
-
-static void rxe_unmap_sg(struct ib_device *dev,
-			 struct scatterlist *sg, int nents,
-			 enum dma_data_direction direction)
-{
-	WARN_ON(!valid_dma_direction(direction));
-}
-
-static int rxe_map_sg_attrs(struct ib_device *dev, struct scatterlist *sgl,
-			    int nents, enum dma_data_direction direction,
-			    unsigned long attrs)
-{
-	return rxe_map_sg(dev, sgl, nents, direction);
-}
-
-static void rxe_unmap_sg_attrs(struct ib_device *dev,
-			       struct scatterlist *sg, int nents,
-			       enum dma_data_direction direction,
-			       unsigned long attrs)
-{
-	rxe_unmap_sg(dev, sg, nents, direction);
-}
-
-static void rxe_sync_single_for_cpu(struct ib_device *dev,
-				    u64 addr,
-				    size_t size, enum dma_data_direction dir)
-{
-}
-
-static void rxe_sync_single_for_device(struct ib_device *dev,
-				       u64 addr,
-				       size_t size, enum dma_data_direction dir)
-{
-}
-
-static void *rxe_dma_alloc_coherent(struct ib_device *dev, size_t size,
-				    u64 *dma_handle, gfp_t flag)
-{
-	struct page *p;
-	void *addr = NULL;
-
-	p = alloc_pages(flag, get_order(size));
-	if (p)
-		addr = page_address(p);
-
-	if (dma_handle)
-		*dma_handle = (uintptr_t)addr;
-
-	return addr;
-}
-
-static void rxe_dma_free_coherent(struct ib_device *dev, size_t size,
-				  void *cpu_addr, u64 dma_handle)
-{
-	free_pages((unsigned long)cpu_addr, get_order(size));
-}
-
-struct ib_dma_mapping_ops rxe_dma_mapping_ops = {
-	.mapping_error		= rxe_mapping_error,
-	.map_single		= rxe_dma_map_single,
-	.unmap_single		= rxe_dma_unmap_single,
-	.map_page		= rxe_dma_map_page,
-	.unmap_page		= rxe_dma_unmap_page,
-	.map_sg			= rxe_map_sg,
-	.unmap_sg		= rxe_unmap_sg,
-	.map_sg_attrs		= rxe_map_sg_attrs,
-	.unmap_sg_attrs		= rxe_unmap_sg_attrs,
-	.sync_single_for_cpu	= rxe_sync_single_for_cpu,
-	.sync_single_for_device	= rxe_sync_single_for_device,
-	.alloc_coherent		= rxe_dma_alloc_coherent,
-	.free_coherent		= rxe_dma_free_coherent
-};
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 73849a5a91b3..a075023332dc 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -221,8 +221,6 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct ib_udata *udata);
 
-extern struct ib_dma_mapping_ops rxe_dma_mapping_ops;
-
 void rxe_release(struct kref *kref);
 
 int rxe_completer(void *arg);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 19841c863daf..c4a5154928b7 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -1226,7 +1226,7 @@ int rxe_register_device(struct rxe_dev *rxe)
 	dev->dma_device = rxe->ifc_ops->dma_device(rxe);
 	dev->local_dma_lkey = 0;
 	dev->node_guid = rxe->ifc_ops->node_guid(rxe);
-	dev->dma_ops = &rxe_dma_mapping_ops;
+	set_dma_ops(dev->dma_device, &dma_noop_ops);
 
 	dev->uverbs_abi_ver = RXE_UVERBS_ABI_VERSION;
 	dev->uverbs_cmd_mask = BIT_ULL(IB_USER_VERBS_CMD_GET_CONTEXT)
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 5ad43a487745..663a28f37570 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1763,53 +1763,6 @@ struct ib_cache {
 	u8                     *lmc_cache;
 };
 
-struct ib_dma_mapping_ops {
-	int		(*mapping_error)(struct ib_device *dev,
-					 u64 dma_addr);
-	u64		(*map_single)(struct ib_device *dev,
-				      void *ptr, size_t size,
-				      enum dma_data_direction direction);
-	void		(*unmap_single)(struct ib_device *dev,
-					u64 addr, size_t size,
-					enum dma_data_direction direction);
-	u64		(*map_page)(struct ib_device *dev,
-				    struct page *page, unsigned long offset,
-				    size_t size,
-				    enum dma_data_direction direction);
-	void		(*unmap_page)(struct ib_device *dev,
-				      u64 addr, size_t size,
-				      enum dma_data_direction direction);
-	int		(*map_sg)(struct ib_device *dev,
-				  struct scatterlist *sg, int nents,
-				  enum dma_data_direction direction);
-	void		(*unmap_sg)(struct ib_device *dev,
-				    struct scatterlist *sg, int nents,
-				    enum dma_data_direction direction);
-	int		(*map_sg_attrs)(struct ib_device *dev,
-					struct scatterlist *sg, int nents,
-					enum dma_data_direction direction,
-					unsigned long attrs);
-	void		(*unmap_sg_attrs)(struct ib_device *dev,
-					  struct scatterlist *sg, int nents,
-					  enum dma_data_direction direction,
-					  unsigned long attrs);
-	void		(*sync_single_for_cpu)(struct ib_device *dev,
-					       u64 dma_handle,
-					       size_t size,
-					       enum dma_data_direction dir);
-	void		(*sync_single_for_device)(struct ib_device *dev,
-						  u64 dma_handle,
-						  size_t size,
-						  enum dma_data_direction dir);
-	void		*(*alloc_coherent)(struct ib_device *dev,
-					   size_t size,
-					   u64 *dma_handle,
-					   gfp_t flag);
-	void		(*free_coherent)(struct ib_device *dev,
-					 size_t size, void *cpu_addr,
-					 u64 dma_handle);
-};
-
 struct iw_cm_verbs;
 
 struct ib_port_immutable {
@@ -2070,7 +2023,6 @@ struct ib_device {
 							   struct ib_rwq_ind_table_init_attr *init_attr,
 							   struct ib_udata *udata);
 	int                        (*destroy_rwq_ind_table)(struct ib_rwq_ind_table *wq_ind_table);
-	struct ib_dma_mapping_ops   *dma_ops;
 
 	struct module               *owner;
 	struct device                dev;
@@ -2927,8 +2879,6 @@ static inline int ib_req_ncomp_notif(struct ib_cq *cq, int wc_cnt)
  */
 static inline int ib_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->mapping_error(dev, dma_addr);
 	return dma_mapping_error(dev->dma_device, dma_addr);
 }
 
@@ -2943,8 +2893,6 @@ static inline u64 ib_dma_map_single(struct ib_device *dev,
 				    void *cpu_addr, size_t size,
 				    enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->map_single(dev, cpu_addr, size, direction);
 	return dma_map_single(dev->dma_device, cpu_addr, size, direction);
 }
 
@@ -2959,10 +2907,7 @@ static inline void ib_dma_unmap_single(struct ib_device *dev,
 				       u64 addr, size_t size,
 				       enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->unmap_single(dev, addr, size, direction);
-	else
-		dma_unmap_single(dev->dma_device, addr, size, direction);
+	dma_unmap_single(dev->dma_device, addr, size, direction);
 }
 
 static inline u64 ib_dma_map_single_attrs(struct ib_device *dev,
@@ -2997,8 +2942,6 @@ static inline u64 ib_dma_map_page(struct ib_device *dev,
 				  size_t size,
 					 enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->map_page(dev, page, offset, size, direction);
 	return dma_map_page(dev->dma_device, page, offset, size, direction);
 }
 
@@ -3013,10 +2956,7 @@ static inline void ib_dma_unmap_page(struct ib_device *dev,
 				     u64 addr, size_t size,
 				     enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->unmap_page(dev, addr, size, direction);
-	else
-		dma_unmap_page(dev->dma_device, addr, size, direction);
+	dma_unmap_page(dev->dma_device, addr, size, direction);
 }
 
 /**
@@ -3030,8 +2970,6 @@ static inline int ib_dma_map_sg(struct ib_device *dev,
 				struct scatterlist *sg, int nents,
 				enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->map_sg(dev, sg, nents, direction);
 	return dma_map_sg(dev->dma_device, sg, nents, direction);
 }
 
@@ -3046,10 +2984,7 @@ static inline void ib_dma_unmap_sg(struct ib_device *dev,
 				   struct scatterlist *sg, int nents,
 				   enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->unmap_sg(dev, sg, nents, direction);
-	else
-		dma_unmap_sg(dev->dma_device, sg, nents, direction);
+	dma_unmap_sg(dev->dma_device, sg, nents, direction);
 }
 
 static inline int ib_dma_map_sg_attrs(struct ib_device *dev,
@@ -3057,12 +2992,8 @@ static inline int ib_dma_map_sg_attrs(struct ib_device *dev,
 				      enum dma_data_direction direction,
 				      unsigned long dma_attrs)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->map_sg_attrs(dev, sg, nents, direction,
-						  dma_attrs);
-	else
-		return dma_map_sg_attrs(dev->dma_device, sg, nents, direction,
-					dma_attrs);
+	return dma_map_sg_attrs(dev->dma_device, sg, nents, direction,
+				dma_attrs);
 }
 
 static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
@@ -3070,12 +3001,7 @@ static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
 					 enum dma_data_direction direction,
 					 unsigned long dma_attrs)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->unmap_sg_attrs(dev, sg, nents, direction,
-						  dma_attrs);
-	else
-		dma_unmap_sg_attrs(dev->dma_device, sg, nents, direction,
-				   dma_attrs);
+	dma_unmap_sg_attrs(dev->dma_device, sg, nents, direction, dma_attrs);
 }
 /**
  * ib_sg_dma_address - Return the DMA address from a scatter/gather entry
@@ -3117,10 +3043,7 @@ static inline void ib_dma_sync_single_for_cpu(struct ib_device *dev,
 					      size_t size,
 					      enum dma_data_direction dir)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->sync_single_for_cpu(dev, addr, size, dir);
-	else
-		dma_sync_single_for_cpu(dev->dma_device, addr, size, dir);
+	dma_sync_single_for_cpu(dev->dma_device, addr, size, dir);
 }
 
 /**
@@ -3135,10 +3058,7 @@ static inline void ib_dma_sync_single_for_device(struct ib_device *dev,
 						 size_t size,
 						 enum dma_data_direction dir)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->sync_single_for_device(dev, addr, size, dir);
-	else
-		dma_sync_single_for_device(dev->dma_device, addr, size, dir);
+	dma_sync_single_for_device(dev->dma_device, addr, size, dir);
 }
 
 /**
@@ -3153,16 +3073,12 @@ static inline void *ib_dma_alloc_coherent(struct ib_device *dev,
 					   u64 *dma_handle,
 					   gfp_t flag)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->alloc_coherent(dev, size, dma_handle, flag);
-	else {
-		dma_addr_t handle;
-		void *ret;
-
-		ret = dma_alloc_coherent(dev->dma_device, size, &handle, flag);
-		*dma_handle = handle;
-		return ret;
-	}
+	dma_addr_t handle;
+	void *ret;
+
+	ret = dma_alloc_coherent(dev->dma_device, size, &handle, flag);
+	*dma_handle = handle;
+	return ret;
 }
 
 /**
@@ -3176,10 +3092,7 @@ static inline void ib_dma_free_coherent(struct ib_device *dev,
 					size_t size, void *cpu_addr,
 					u64 dma_handle)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->free_coherent(dev, size, cpu_addr, dma_handle);
-	else
-		dma_free_coherent(dev->dma_device, size, cpu_addr, dma_handle);
+	dma_free_coherent(dev->dma_device, size, cpu_addr, dma_handle);
 }
 
 /**
-- 
2.11.0



* [PATCH 5/5] treewide: Inline ib_dma_map_*() functions
       [not found] ` <07c07529-4636-fafb-2598-7358d8a1460d-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
                     ` (3 preceding siblings ...)
  2016-12-08  1:11   ` [PATCH 4/5] IB: Switch from struct ib_dma_mapping_ops to struct dma_mapping_ops Bart Van Assche
@ 2016-12-08  1:12   ` Bart Van Assche
       [not found]     ` <9bdf696e-ec64-d60e-3d7e-7ad5b3000d60-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
  2016-12-08  6:48   ` [PATCH, RFC 0/5] IB: Optimize DMA mapping Or Gerlitz
  5 siblings, 1 reply; 36+ messages in thread
From: Bart Van Assche @ 2016-12-08  1:12 UTC (permalink / raw)
  To: Doug Ledford, Christoph Hellwig, Sagi Grimberg
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Almost all changes in this patch have been generated as follows:

git grep -lE 'ib_(sg_|)dma_' |
  xargs -d\\n \
    sed -i -e 's/\([^[:alnum:]_]\)ib_dma_\([^(]*\)(\([^,]\+\)/\1dma_\2(\3->dma_device/g' \
	   -e 's/ib_sg_dma_\(len\|address\)(\([^,]\+\), /sg_dma_\1(/g'
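To illustrate what the script produces (the sample input lines below are illustrative, but the sed expressions are taken verbatim from above): the first expression rewrites ib_dma_*() calls to the plain dma_*() equivalents and appends ->dma_device to the device argument, while the second drops the now-unused device argument from ib_sg_dma_len()/ib_sg_dma_address() calls:

```shell
# Run both conversion expressions over two representative source lines.
printf '%s\n' \
  'x = ib_dma_map_single(dev, p, n, d);' \
  'len = ib_sg_dma_len(dev, sg) - offset;' |
sed -e 's/\([^[:alnum:]_]\)ib_dma_\([^(]*\)(\([^,]\+\)/\1dma_\2(\3->dma_device/g' \
    -e 's/ib_sg_dma_\(len\|address\)(\([^,]\+\), /sg_dma_\1(/g'
# Output:
#   x = dma_map_single(dev->dma_device, p, n, d);
#   len = sg_dma_len(sg) - offset;
```

Note that both expressions rely on GNU sed BRE extensions (\+ and \|), which is fine for a one-off conversion on a Linux development box.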
---
 drivers/infiniband/core/mad.c                      |  28 +--
 drivers/infiniband/core/rw.c                       |  30 ++-
 drivers/infiniband/core/umem.c                     |   4 +-
 drivers/infiniband/core/umem_odp.c                 |   6 +-
 drivers/infiniband/hw/mlx4/cq.c                    |   2 +-
 drivers/infiniband/hw/mlx4/mad.c                   |  28 +--
 drivers/infiniband/hw/mlx4/mr.c                    |   4 +-
 drivers/infiniband/hw/mlx4/qp.c                    |  10 +-
 drivers/infiniband/hw/mlx5/mr.c                    |   4 +-
 drivers/infiniband/ulp/ipoib/ipoib_cm.c            |  20 +-
 drivers/infiniband/ulp/ipoib/ipoib_ib.c            |  22 +-
 drivers/infiniband/ulp/iser/iscsi_iser.c           |   6 +-
 drivers/infiniband/ulp/iser/iser_initiator.c       |  38 ++--
 drivers/infiniband/ulp/iser/iser_memory.c          |  12 +-
 drivers/infiniband/ulp/iser/iser_verbs.c           |   2 +-
 drivers/infiniband/ulp/isert/ib_isert.c            |  54 ++---
 drivers/infiniband/ulp/srp/ib_srp.c                |  50 +++--
 drivers/infiniband/ulp/srpt/ib_srpt.c              |  10 +-
 drivers/nvme/host/rdma.c                           |  22 +-
 drivers/nvme/target/rdma.c                         |  20 +-
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h    |  14 +-
 include/rdma/ib_verbs.h                            | 223 ---------------------
 net/9p/trans_rdma.c                                |  12 +-
 net/rds/ib.h                                       |  14 +-
 net/rds/ib_cm.c                                    |  12 +-
 net/rds/ib_fmr.c                                   |  10 +-
 net/rds/ib_frmr.c                                  |   8 +-
 net/rds/ib_rdma.c                                  |   6 +-
 net/rds/ib_recv.c                                  |  14 +-
 net/rds/ib_send.c                                  |  26 +--
 net/sunrpc/xprtrdma/fmr_ops.c                      |   6 +-
 net/sunrpc/xprtrdma/frwr_ops.c                     |   6 +-
 net/sunrpc/xprtrdma/rpc_rdma.c                     |  14 +-
 net/sunrpc/xprtrdma/svc_rdma_backchannel.c         |   4 +-
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c            |   8 +-
 net/sunrpc/xprtrdma/svc_rdma_sendto.c              |  14 +-
 net/sunrpc/xprtrdma/svc_rdma_transport.c           |   8 +-
 net/sunrpc/xprtrdma/verbs.c                        |   8 +-
 38 files changed, 275 insertions(+), 504 deletions(-)

diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index 2395fe2021c9..d69f8461337c 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -1157,21 +1157,21 @@ int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr)
 
 	mad_agent = mad_send_wr->send_buf.mad_agent;
 	sge = mad_send_wr->sg_list;
-	sge[0].addr = ib_dma_map_single(mad_agent->device,
+	sge[0].addr = dma_map_single(mad_agent->device->dma_device,
 					mad_send_wr->send_buf.mad,
 					sge[0].length,
 					DMA_TO_DEVICE);
-	if (unlikely(ib_dma_mapping_error(mad_agent->device, sge[0].addr)))
+	if (unlikely(dma_mapping_error(mad_agent->device->dma_device, sge[0].addr)))
 		return -ENOMEM;
 
 	mad_send_wr->header_mapping = sge[0].addr;
 
-	sge[1].addr = ib_dma_map_single(mad_agent->device,
+	sge[1].addr = dma_map_single(mad_agent->device->dma_device,
 					ib_get_payload(mad_send_wr),
 					sge[1].length,
 					DMA_TO_DEVICE);
-	if (unlikely(ib_dma_mapping_error(mad_agent->device, sge[1].addr))) {
-		ib_dma_unmap_single(mad_agent->device,
+	if (unlikely(dma_mapping_error(mad_agent->device->dma_device, sge[1].addr))) {
+		dma_unmap_single(mad_agent->device->dma_device,
 				    mad_send_wr->header_mapping,
 				    sge[0].length, DMA_TO_DEVICE);
 		return -ENOMEM;
@@ -1194,10 +1194,10 @@ int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr)
 	}
 	spin_unlock_irqrestore(&qp_info->send_queue.lock, flags);
 	if (ret) {
-		ib_dma_unmap_single(mad_agent->device,
+		dma_unmap_single(mad_agent->device->dma_device,
 				    mad_send_wr->header_mapping,
 				    sge[0].length, DMA_TO_DEVICE);
-		ib_dma_unmap_single(mad_agent->device,
+		dma_unmap_single(mad_agent->device->dma_device,
 				    mad_send_wr->payload_mapping,
 				    sge[1].length, DMA_TO_DEVICE);
 	}
@@ -2209,7 +2209,7 @@ static void ib_mad_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	mad_priv_hdr = container_of(mad_list, struct ib_mad_private_header,
 				    mad_list);
 	recv = container_of(mad_priv_hdr, struct ib_mad_private, header);
-	ib_dma_unmap_single(port_priv->device,
+	dma_unmap_single(port_priv->device->dma_device,
 			    recv->header.mapping,
 			    mad_priv_dma_size(recv),
 			    DMA_FROM_DEVICE);
@@ -2453,10 +2453,10 @@ static void ib_mad_send_done(struct ib_cq *cq, struct ib_wc *wc)
 	qp_info = send_queue->qp_info;
 
 retry:
-	ib_dma_unmap_single(mad_send_wr->send_buf.mad_agent->device,
+	dma_unmap_single(mad_send_wr->send_buf.mad_agent->device->dma_device,
 			    mad_send_wr->header_mapping,
 			    mad_send_wr->sg_list[0].length, DMA_TO_DEVICE);
-	ib_dma_unmap_single(mad_send_wr->send_buf.mad_agent->device,
+	dma_unmap_single(mad_send_wr->send_buf.mad_agent->device->dma_device,
 			    mad_send_wr->payload_mapping,
 			    mad_send_wr->sg_list[1].length, DMA_TO_DEVICE);
 	queued_send_wr = NULL;
@@ -2876,11 +2876,11 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
 			}
 		}
 		sg_list.length = mad_priv_dma_size(mad_priv);
-		sg_list.addr = ib_dma_map_single(qp_info->port_priv->device,
+		sg_list.addr = dma_map_single(qp_info->port_priv->device->dma_device,
 						 &mad_priv->grh,
 						 mad_priv_dma_size(mad_priv),
 						 DMA_FROM_DEVICE);
-		if (unlikely(ib_dma_mapping_error(qp_info->port_priv->device,
+		if (unlikely(dma_mapping_error(qp_info->port_priv->device->dma_device,
 						  sg_list.addr))) {
 			ret = -ENOMEM;
 			break;
@@ -2901,7 +2901,7 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
 			list_del(&mad_priv->header.mad_list.list);
 			recv_queue->count--;
 			spin_unlock_irqrestore(&recv_queue->lock, flags);
-			ib_dma_unmap_single(qp_info->port_priv->device,
+			dma_unmap_single(qp_info->port_priv->device->dma_device,
 					    mad_priv->header.mapping,
 					    mad_priv_dma_size(mad_priv),
 					    DMA_FROM_DEVICE);
@@ -2940,7 +2940,7 @@ static void cleanup_recv_queue(struct ib_mad_qp_info *qp_info)
 		/* Remove from posted receive MAD list */
 		list_del(&mad_list->list);
 
-		ib_dma_unmap_single(qp_info->port_priv->device,
+		dma_unmap_single(qp_info->port_priv->device->dma_device,
 				    recv->header.mapping,
 				    mad_priv_dma_size(recv),
 				    DMA_FROM_DEVICE);
diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
index dbfd854c32c9..f8aef874f636 100644
--- a/drivers/infiniband/core/rw.c
+++ b/drivers/infiniband/core/rw.c
@@ -178,7 +178,6 @@ static int rdma_rw_init_map_wrs(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		struct scatterlist *sg, u32 sg_cnt, u32 offset,
 		u64 remote_addr, u32 rkey, enum dma_data_direction dir)
 {
-	struct ib_device *dev = qp->pd->device;
 	u32 max_sge = dir == DMA_TO_DEVICE ? qp->max_write_sge :
 		      qp->max_read_sge;
 	struct ib_sge *sge;
@@ -208,8 +207,8 @@ static int rdma_rw_init_map_wrs(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		rdma_wr->wr.sg_list = sge;
 
 		for (j = 0; j < nr_sge; j++, sg = sg_next(sg)) {
-			sge->addr = ib_sg_dma_address(dev, sg) + offset;
-			sge->length = ib_sg_dma_len(dev, sg) - offset;
+			sge->addr = sg_dma_address(sg) + offset;
+			sge->length = sg_dma_len(sg) - offset;
 			sge->lkey = qp->pd->local_dma_lkey;
 
 			total_len += sge->length;
@@ -235,14 +234,13 @@ static int rdma_rw_init_single_wr(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		struct scatterlist *sg, u32 offset, u64 remote_addr, u32 rkey,
 		enum dma_data_direction dir)
 {
-	struct ib_device *dev = qp->pd->device;
 	struct ib_rdma_wr *rdma_wr = &ctx->single.wr;
 
 	ctx->nr_ops = 1;
 
 	ctx->single.sge.lkey = qp->pd->local_dma_lkey;
-	ctx->single.sge.addr = ib_sg_dma_address(dev, sg) + offset;
-	ctx->single.sge.length = ib_sg_dma_len(dev, sg) - offset;
+	ctx->single.sge.addr = sg_dma_address(sg) + offset;
+	ctx->single.sge.length = sg_dma_len(sg) - offset;
 
 	memset(rdma_wr, 0, sizeof(*rdma_wr));
 	if (dir == DMA_TO_DEVICE)
@@ -280,7 +278,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 	struct ib_device *dev = qp->pd->device;
 	int ret;
 
-	ret = ib_dma_map_sg(dev, sg, sg_cnt, dir);
+	ret = dma_map_sg(dev->dma_device, sg, sg_cnt, dir);
 	if (!ret)
 		return -ENOMEM;
 	sg_cnt = ret;
@@ -289,7 +287,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 	 * Skip to the S/G entry that sg_offset falls into:
 	 */
 	for (;;) {
-		u32 len = ib_sg_dma_len(dev, sg);
+		u32 len = sg_dma_len(sg);
 
 		if (sg_offset < len)
 			break;
@@ -319,7 +317,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 	return ret;
 
 out_unmap_sg:
-	ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
+	dma_unmap_sg(dev->dma_device, sg, sg_cnt, dir);
 	return ret;
 }
 EXPORT_SYMBOL(rdma_rw_ctx_init);
@@ -358,12 +356,12 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		return -EINVAL;
 	}
 
-	ret = ib_dma_map_sg(dev, sg, sg_cnt, dir);
+	ret = dma_map_sg(dev->dma_device, sg, sg_cnt, dir);
 	if (!ret)
 		return -ENOMEM;
 	sg_cnt = ret;
 
-	ret = ib_dma_map_sg(dev, prot_sg, prot_sg_cnt, dir);
+	ret = dma_map_sg(dev->dma_device, prot_sg, prot_sg_cnt, dir);
 	if (!ret) {
 		ret = -ENOMEM;
 		goto out_unmap_sg;
@@ -457,9 +455,9 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 out_free_ctx:
 	kfree(ctx->sig);
 out_unmap_prot_sg:
-	ib_dma_unmap_sg(dev, prot_sg, prot_sg_cnt, dir);
+	dma_unmap_sg(dev->dma_device, prot_sg, prot_sg_cnt, dir);
 out_unmap_sg:
-	ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
+	dma_unmap_sg(dev->dma_device, sg, sg_cnt, dir);
 	return ret;
 }
 EXPORT_SYMBOL(rdma_rw_ctx_signature_init);
@@ -606,7 +604,7 @@ void rdma_rw_ctx_destroy(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 		break;
 	}
 
-	ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+	dma_unmap_sg(qp->pd->device->dma_device, sg, sg_cnt, dir);
 }
 EXPORT_SYMBOL(rdma_rw_ctx_destroy);
 
@@ -631,11 +629,11 @@ void rdma_rw_ctx_destroy_signature(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		return;
 
 	ib_mr_pool_put(qp, &qp->rdma_mrs, ctx->sig->data.mr);
-	ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+	dma_unmap_sg(qp->pd->device->dma_device, sg, sg_cnt, dir);
 
 	if (ctx->sig->prot.mr) {
 		ib_mr_pool_put(qp, &qp->rdma_mrs, ctx->sig->prot.mr);
-		ib_dma_unmap_sg(qp->pd->device, prot_sg, prot_sg_cnt, dir);
+		dma_unmap_sg(qp->pd->device->dma_device, prot_sg, prot_sg_cnt, dir);
 	}
 
 	ib_mr_pool_put(qp, &qp->sig_mrs, ctx->sig->sig_mr);
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 84b4eff90395..dee42f5b7671 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -50,7 +50,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	int i;
 
 	if (umem->nmap > 0)
-		ib_dma_unmap_sg(dev, umem->sg_head.sgl,
+		dma_unmap_sg(dev->dma_device, umem->sg_head.sgl,
 				umem->nmap,
 				DMA_BIDIRECTIONAL);
 
@@ -214,7 +214,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 		sg_list_start = sg;
 	}
 
-	umem->nmap = ib_dma_map_sg_attrs(context->device,
+	umem->nmap = dma_map_sg_attrs(context->device->dma_device,
 				  umem->sg_head.sgl,
 				  umem->npages,
 				  DMA_BIDIRECTIONAL,
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 1f0fe3217f23..e560f29e6bf9 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -456,11 +456,11 @@ static int ib_umem_odp_map_dma_single_page(
 		goto out;
 	}
 	if (!(umem->odp_data->dma_list[page_index])) {
-		dma_addr = ib_dma_map_page(dev,
+		dma_addr = dma_map_page(dev->dma_device,
 					   page,
 					   0, PAGE_SIZE,
 					   DMA_BIDIRECTIONAL);
-		if (ib_dma_mapping_error(dev, dma_addr)) {
+		if (dma_mapping_error(dev->dma_device, dma_addr)) {
 			ret = -EFAULT;
 			goto out;
 		}
@@ -645,7 +645,7 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem *umem, u64 virt,
 
 			WARN_ON(!dma_addr);
 
-			ib_dma_unmap_page(dev, dma_addr, PAGE_SIZE,
+			dma_unmap_page(dev->dma_device, dma_addr, PAGE_SIZE,
 					  DMA_BIDIRECTIONAL);
 			if (dma & ODP_WRITE_ALLOWED_BIT) {
 				struct page *head_page = compound_head(page);
diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c
index 6a0fec357dae..138ac59655ab 100644
--- a/drivers/infiniband/hw/mlx4/cq.c
+++ b/drivers/infiniband/hw/mlx4/cq.c
@@ -584,7 +584,7 @@ static void use_tunnel_data(struct mlx4_ib_qp *qp, struct mlx4_ib_cq *cq, struct
 {
 	struct mlx4_ib_proxy_sqp_hdr *hdr;
 
-	ib_dma_sync_single_for_cpu(qp->ibqp.device,
+	dma_sync_single_for_cpu(qp->ibqp.device->dma_device,
 				   qp->sqp_proxy_rcv[tail].map,
 				   sizeof (struct mlx4_ib_proxy_sqp_hdr),
 				   DMA_FROM_DEVICE);
diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
index 1672907ff219..fb3c565a5915 100644
--- a/drivers/infiniband/hw/mlx4/mad.c
+++ b/drivers/infiniband/hw/mlx4/mad.c
@@ -560,7 +560,7 @@ int mlx4_ib_send_to_slave(struct mlx4_ib_dev *dev, int slave, u8 port,
 	if (tun_qp->tx_ring[tun_tx_ix].ah)
 		ib_destroy_ah(tun_qp->tx_ring[tun_tx_ix].ah);
 	tun_qp->tx_ring[tun_tx_ix].ah = ah;
-	ib_dma_sync_single_for_cpu(&dev->ib_dev,
+	dma_sync_single_for_cpu(dev->ib_dev.dma_device,
 				   tun_qp->tx_ring[tun_tx_ix].buf.map,
 				   sizeof (struct mlx4_rcv_tunnel_mad),
 				   DMA_TO_DEVICE);
@@ -602,7 +602,7 @@ int mlx4_ib_send_to_slave(struct mlx4_ib_dev *dev, int slave, u8 port,
 		tun_mad->hdr.slid_mac_47_32 = cpu_to_be16(wc->slid);
 	}
 
-	ib_dma_sync_single_for_device(&dev->ib_dev,
+	dma_sync_single_for_device(dev->ib_dev.dma_device,
 				      tun_qp->tx_ring[tun_tx_ix].buf.map,
 				      sizeof (struct mlx4_rcv_tunnel_mad),
 				      DMA_TO_DEVICE);
@@ -1288,7 +1288,7 @@ static int mlx4_ib_post_pv_qp_buf(struct mlx4_ib_demux_pv_ctx *ctx,
 	recv_wr.num_sge = 1;
 	recv_wr.wr_id = (u64) index | MLX4_TUN_WRID_RECV |
 		MLX4_TUN_SET_WRID_QPN(tun_qp->proxy_qpt);
-	ib_dma_sync_single_for_device(ctx->ib_dev, tun_qp->ring[index].map,
+	dma_sync_single_for_device(ctx->ib_dev->dma_device, tun_qp->ring[index].map,
 				      size, DMA_FROM_DEVICE);
 	return ib_post_recv(tun_qp->qp, &recv_wr, &bad_recv_wr);
 }
@@ -1379,14 +1379,14 @@ int mlx4_ib_send_to_wire(struct mlx4_ib_dev *dev, int slave, u8 port,
 	if (sqp->tx_ring[wire_tx_ix].ah)
 		ib_destroy_ah(sqp->tx_ring[wire_tx_ix].ah);
 	sqp->tx_ring[wire_tx_ix].ah = ah;
-	ib_dma_sync_single_for_cpu(&dev->ib_dev,
+	dma_sync_single_for_cpu(dev->ib_dev.dma_device,
 				   sqp->tx_ring[wire_tx_ix].buf.map,
 				   sizeof (struct mlx4_mad_snd_buf),
 				   DMA_TO_DEVICE);
 
 	memcpy(&sqp_mad->payload, mad, sizeof *mad);
 
-	ib_dma_sync_single_for_device(&dev->ib_dev,
+	dma_sync_single_for_device(dev->ib_dev.dma_device,
 				      sqp->tx_ring[wire_tx_ix].buf.map,
 				      sizeof (struct mlx4_mad_snd_buf),
 				      DMA_TO_DEVICE);
@@ -1471,7 +1471,7 @@ static void mlx4_ib_multiplex_mad(struct mlx4_ib_demux_pv_ctx *ctx, struct ib_wc
 	}
 
 	/* Map transaction ID */
-	ib_dma_sync_single_for_cpu(ctx->ib_dev, tun_qp->ring[wr_ix].map,
+	dma_sync_single_for_cpu(ctx->ib_dev->dma_device, tun_qp->ring[wr_ix].map,
 				   sizeof (struct mlx4_tunnel_mad),
 				   DMA_FROM_DEVICE);
 	switch (tunnel->mad.mad_hdr.method) {
@@ -1594,11 +1594,11 @@ static int mlx4_ib_alloc_pv_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
 		tun_qp->ring[i].addr = kmalloc(rx_buf_size, GFP_KERNEL);
 		if (!tun_qp->ring[i].addr)
 			goto err;
-		tun_qp->ring[i].map = ib_dma_map_single(ctx->ib_dev,
+		tun_qp->ring[i].map = dma_map_single(ctx->ib_dev->dma_device,
 							tun_qp->ring[i].addr,
 							rx_buf_size,
 							DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(ctx->ib_dev, tun_qp->ring[i].map)) {
+		if (dma_mapping_error(ctx->ib_dev->dma_device, tun_qp->ring[i].map)) {
 			kfree(tun_qp->ring[i].addr);
 			goto err;
 		}
@@ -1610,11 +1610,11 @@ static int mlx4_ib_alloc_pv_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
 		if (!tun_qp->tx_ring[i].buf.addr)
 			goto tx_err;
 		tun_qp->tx_ring[i].buf.map =
-			ib_dma_map_single(ctx->ib_dev,
+			dma_map_single(ctx->ib_dev->dma_device,
 					  tun_qp->tx_ring[i].buf.addr,
 					  tx_buf_size,
 					  DMA_TO_DEVICE);
-		if (ib_dma_mapping_error(ctx->ib_dev,
+		if (dma_mapping_error(ctx->ib_dev->dma_device,
 					 tun_qp->tx_ring[i].buf.map)) {
 			kfree(tun_qp->tx_ring[i].buf.addr);
 			goto tx_err;
@@ -1631,7 +1631,7 @@ static int mlx4_ib_alloc_pv_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
 tx_err:
 	while (i > 0) {
 		--i;
-		ib_dma_unmap_single(ctx->ib_dev, tun_qp->tx_ring[i].buf.map,
+		dma_unmap_single(ctx->ib_dev->dma_device, tun_qp->tx_ring[i].buf.map,
 				    tx_buf_size, DMA_TO_DEVICE);
 		kfree(tun_qp->tx_ring[i].buf.addr);
 	}
@@ -1641,7 +1641,7 @@ static int mlx4_ib_alloc_pv_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
 err:
 	while (i > 0) {
 		--i;
-		ib_dma_unmap_single(ctx->ib_dev, tun_qp->ring[i].map,
+		dma_unmap_single(ctx->ib_dev->dma_device, tun_qp->ring[i].map,
 				    rx_buf_size, DMA_FROM_DEVICE);
 		kfree(tun_qp->ring[i].addr);
 	}
@@ -1671,13 +1671,13 @@ static void mlx4_ib_free_pv_qp_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
 
 
 	for (i = 0; i < MLX4_NUM_TUNNEL_BUFS; i++) {
-		ib_dma_unmap_single(ctx->ib_dev, tun_qp->ring[i].map,
+		dma_unmap_single(ctx->ib_dev->dma_device, tun_qp->ring[i].map,
 				    rx_buf_size, DMA_FROM_DEVICE);
 		kfree(tun_qp->ring[i].addr);
 	}
 
 	for (i = 0; i < MLX4_NUM_TUNNEL_BUFS; i++) {
-		ib_dma_unmap_single(ctx->ib_dev, tun_qp->tx_ring[i].buf.map,
+		dma_unmap_single(ctx->ib_dev->dma_device, tun_qp->tx_ring[i].buf.map,
 				    tx_buf_size, DMA_TO_DEVICE);
 		kfree(tun_qp->tx_ring[i].buf.addr);
 		if (tun_qp->tx_ring[i].ah)
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index 5d73989d9771..89297b832a9f 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -538,12 +538,12 @@ int mlx4_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
 
 	mr->npages = 0;
 
-	ib_dma_sync_single_for_cpu(ibmr->device, mr->page_map,
+	dma_sync_single_for_cpu(ibmr->device->dma_device, mr->page_map,
 				   mr->page_map_size, DMA_TO_DEVICE);
 
 	rc = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, mlx4_set_page);
 
-	ib_dma_sync_single_for_device(ibmr->device, mr->page_map,
+	dma_sync_single_for_device(ibmr->device->dma_device, mr->page_map,
 				      mr->page_map_size, DMA_TO_DEVICE);
 
 	return rc;
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index ad3bb048df4d..64f283817ed6 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -570,10 +570,10 @@ static int alloc_proxy_bufs(struct ib_device *dev, struct mlx4_ib_qp *qp)
 		if (!qp->sqp_proxy_rcv[i].addr)
 			goto err;
 		qp->sqp_proxy_rcv[i].map =
-			ib_dma_map_single(dev, qp->sqp_proxy_rcv[i].addr,
+			dma_map_single(dev->dma_device, qp->sqp_proxy_rcv[i].addr,
 					  sizeof (struct mlx4_ib_proxy_sqp_hdr),
 					  DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(dev, qp->sqp_proxy_rcv[i].map)) {
+		if (dma_mapping_error(dev->dma_device, qp->sqp_proxy_rcv[i].map)) {
 			kfree(qp->sqp_proxy_rcv[i].addr);
 			goto err;
 		}
@@ -583,7 +583,7 @@ static int alloc_proxy_bufs(struct ib_device *dev, struct mlx4_ib_qp *qp)
 err:
 	while (i > 0) {
 		--i;
-		ib_dma_unmap_single(dev, qp->sqp_proxy_rcv[i].map,
+		dma_unmap_single(dev->dma_device, qp->sqp_proxy_rcv[i].map,
 				    sizeof (struct mlx4_ib_proxy_sqp_hdr),
 				    DMA_FROM_DEVICE);
 		kfree(qp->sqp_proxy_rcv[i].addr);
@@ -598,7 +598,7 @@ static void free_proxy_bufs(struct ib_device *dev, struct mlx4_ib_qp *qp)
 	int i;
 
 	for (i = 0; i < qp->rq.wqe_cnt; i++) {
-		ib_dma_unmap_single(dev, qp->sqp_proxy_rcv[i].map,
+		dma_unmap_single(dev->dma_device, qp->sqp_proxy_rcv[i].map,
 				    sizeof (struct mlx4_ib_proxy_sqp_hdr),
 				    DMA_FROM_DEVICE);
 		kfree(qp->sqp_proxy_rcv[i].addr);
@@ -3309,7 +3309,7 @@ int mlx4_ib_post_recv(struct ib_qp *ibqp, struct ib_recv_wr *wr,
 
 		if (qp->mlx4_ib_qp_type & (MLX4_IB_QPT_PROXY_SMI_OWNER |
 		    MLX4_IB_QPT_PROXY_SMI | MLX4_IB_QPT_PROXY_GSI)) {
-			ib_dma_sync_single_for_device(ibqp->device,
+			dma_sync_single_for_device(ibqp->device->dma_device,
 						      qp->sqp_proxy_rcv[ind].map,
 						      sizeof (struct mlx4_ib_proxy_sqp_hdr),
 						      DMA_FROM_DEVICE);
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 3a5707e0d82a..968b62af57fa 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1832,7 +1832,7 @@ int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
 
 	mr->ndescs = 0;
 
-	ib_dma_sync_single_for_cpu(ibmr->device, mr->desc_map,
+	dma_sync_single_for_cpu(ibmr->device->dma_device, mr->desc_map,
 				   mr->desc_size * mr->max_descs,
 				   DMA_TO_DEVICE);
 
@@ -1842,7 +1842,7 @@ int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
 		n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset,
 				mlx5_set_page);
 
-	ib_dma_sync_single_for_device(ibmr->device, mr->desc_map,
+	dma_sync_single_for_device(ibmr->device->dma_device, mr->desc_map,
 				      mr->desc_size * mr->max_descs,
 				      DMA_TO_DEVICE);
 
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index 339a1eecdfe3..b2224cc17267 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -83,10 +83,10 @@ static void ipoib_cm_dma_unmap_rx(struct ipoib_dev_priv *priv, int frags,
 {
 	int i;
 
-	ib_dma_unmap_single(priv->ca, mapping[0], IPOIB_CM_HEAD_SIZE, DMA_FROM_DEVICE);
+	dma_unmap_single(priv->ca->dma_device, mapping[0], IPOIB_CM_HEAD_SIZE, DMA_FROM_DEVICE);
 
 	for (i = 0; i < frags; ++i)
-		ib_dma_unmap_page(priv->ca, mapping[i + 1], PAGE_SIZE, DMA_FROM_DEVICE);
+		dma_unmap_page(priv->ca->dma_device, mapping[i + 1], PAGE_SIZE, DMA_FROM_DEVICE);
 }
 
 static int ipoib_cm_post_receive_srq(struct net_device *dev, int id)
@@ -158,9 +158,9 @@ static struct sk_buff *ipoib_cm_alloc_rx_skb(struct net_device *dev,
 	 */
 	skb_reserve(skb, IPOIB_CM_RX_RESERVE);
 
-	mapping[0] = ib_dma_map_single(priv->ca, skb->data, IPOIB_CM_HEAD_SIZE,
+	mapping[0] = dma_map_single(priv->ca->dma_device, skb->data, IPOIB_CM_HEAD_SIZE,
 				       DMA_FROM_DEVICE);
-	if (unlikely(ib_dma_mapping_error(priv->ca, mapping[0]))) {
+	if (unlikely(dma_mapping_error(priv->ca->dma_device, mapping[0]))) {
 		dev_kfree_skb_any(skb);
 		return NULL;
 	}
@@ -172,9 +172,9 @@ static struct sk_buff *ipoib_cm_alloc_rx_skb(struct net_device *dev,
 			goto partial_error;
 		skb_fill_page_desc(skb, i, page, 0, PAGE_SIZE);
 
-		mapping[i + 1] = ib_dma_map_page(priv->ca, page,
+		mapping[i + 1] = dma_map_page(priv->ca->dma_device, page,
 						 0, PAGE_SIZE, DMA_FROM_DEVICE);
-		if (unlikely(ib_dma_mapping_error(priv->ca, mapping[i + 1])))
+		if (unlikely(dma_mapping_error(priv->ca->dma_device, mapping[i + 1])))
 			goto partial_error;
 	}
 
@@ -183,10 +183,10 @@ static struct sk_buff *ipoib_cm_alloc_rx_skb(struct net_device *dev,
 
 partial_error:
 
-	ib_dma_unmap_single(priv->ca, mapping[0], IPOIB_CM_HEAD_SIZE, DMA_FROM_DEVICE);
+	dma_unmap_single(priv->ca->dma_device, mapping[0], IPOIB_CM_HEAD_SIZE, DMA_FROM_DEVICE);
 
 	for (; i > 0; --i)
-		ib_dma_unmap_page(priv->ca, mapping[i], PAGE_SIZE, DMA_FROM_DEVICE);
+		dma_unmap_page(priv->ca->dma_device, mapping[i], PAGE_SIZE, DMA_FROM_DEVICE);
 
 	dev_kfree_skb_any(skb);
 	return NULL;
@@ -629,10 +629,10 @@ void ipoib_cm_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
 		small_skb = dev_alloc_skb(dlen + IPOIB_CM_RX_RESERVE);
 		if (small_skb) {
 			skb_reserve(small_skb, IPOIB_CM_RX_RESERVE);
-			ib_dma_sync_single_for_cpu(priv->ca, rx_ring[wr_id].mapping[0],
+			dma_sync_single_for_cpu(priv->ca->dma_device, rx_ring[wr_id].mapping[0],
 						   dlen, DMA_FROM_DEVICE);
 			skb_copy_from_linear_data(skb, small_skb->data, dlen);
-			ib_dma_sync_single_for_device(priv->ca, rx_ring[wr_id].mapping[0],
+			dma_sync_single_for_device(priv->ca->dma_device, rx_ring[wr_id].mapping[0],
 						      dlen, DMA_FROM_DEVICE);
 			skb_put(small_skb, dlen);
 			skb = small_skb;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index 830fecb6934c..34832c385d13 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -92,7 +92,7 @@ void ipoib_free_ah(struct kref *kref)
 static void ipoib_ud_dma_unmap_rx(struct ipoib_dev_priv *priv,
 				  u64 mapping[IPOIB_UD_RX_SG])
 {
-	ib_dma_unmap_single(priv->ca, mapping[0],
+	dma_unmap_single(priv->ca->dma_device, mapping[0],
 			    IPOIB_UD_BUF_SIZE(priv->max_ib_mtu),
 			    DMA_FROM_DEVICE);
 }
@@ -139,9 +139,9 @@ static struct sk_buff *ipoib_alloc_rx_skb(struct net_device *dev, int id)
 	skb_reserve(skb, sizeof(struct ipoib_pseudo_header));
 
 	mapping = priv->rx_ring[id].mapping;
-	mapping[0] = ib_dma_map_single(priv->ca, skb->data, buf_size,
+	mapping[0] = dma_map_single(priv->ca->dma_device, skb->data, buf_size,
 				       DMA_FROM_DEVICE);
-	if (unlikely(ib_dma_mapping_error(priv->ca, mapping[0])))
+	if (unlikely(dma_mapping_error(priv->ca->dma_device, mapping[0])))
 		goto error;
 
 	priv->rx_ring[id].skb = skb;
@@ -278,9 +278,9 @@ int ipoib_dma_map_tx(struct ib_device *ca, struct ipoib_tx_buf *tx_req)
 	int off;
 
 	if (skb_headlen(skb)) {
-		mapping[0] = ib_dma_map_single(ca, skb->data, skb_headlen(skb),
+		mapping[0] = dma_map_single(ca->dma_device, skb->data, skb_headlen(skb),
 					       DMA_TO_DEVICE);
-		if (unlikely(ib_dma_mapping_error(ca, mapping[0])))
+		if (unlikely(dma_mapping_error(ca->dma_device, mapping[0])))
 			return -EIO;
 
 		off = 1;
@@ -289,11 +289,11 @@ int ipoib_dma_map_tx(struct ib_device *ca, struct ipoib_tx_buf *tx_req)
 
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; ++i) {
 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
-		mapping[i + off] = ib_dma_map_page(ca,
+		mapping[i + off] = dma_map_page(ca->dma_device,
 						 skb_frag_page(frag),
 						 frag->page_offset, skb_frag_size(frag),
 						 DMA_TO_DEVICE);
-		if (unlikely(ib_dma_mapping_error(ca, mapping[i + off])))
+		if (unlikely(dma_mapping_error(ca->dma_device, mapping[i + off])))
 			goto partial_error;
 	}
 	return 0;
@@ -302,11 +302,11 @@ int ipoib_dma_map_tx(struct ib_device *ca, struct ipoib_tx_buf *tx_req)
 	for (; i > 0; --i) {
 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i - 1];
 
-		ib_dma_unmap_page(ca, mapping[i - !off], skb_frag_size(frag), DMA_TO_DEVICE);
+		dma_unmap_page(ca->dma_device, mapping[i - !off], skb_frag_size(frag), DMA_TO_DEVICE);
 	}
 
 	if (off)
-		ib_dma_unmap_single(ca, mapping[0], skb_headlen(skb), DMA_TO_DEVICE);
+		dma_unmap_single(ca->dma_device, mapping[0], skb_headlen(skb), DMA_TO_DEVICE);
 
 	return -EIO;
 }
@@ -320,7 +320,7 @@ void ipoib_dma_unmap_tx(struct ipoib_dev_priv *priv,
 	int off;
 
 	if (skb_headlen(skb)) {
-		ib_dma_unmap_single(priv->ca, mapping[0], skb_headlen(skb),
+		dma_unmap_single(priv->ca->dma_device, mapping[0], skb_headlen(skb),
 				    DMA_TO_DEVICE);
 		off = 1;
 	} else
@@ -329,7 +329,7 @@ void ipoib_dma_unmap_tx(struct ipoib_dev_priv *priv,
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; ++i) {
 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 
-		ib_dma_unmap_page(priv->ca, mapping[i + off],
+		dma_unmap_page(priv->ca->dma_device, mapping[i + off],
 				  skb_frag_size(frag), DMA_TO_DEVICE);
 	}
 }
diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
index 64b3d11dcf1e..ef7634ccf368 100644
--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
+++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
@@ -198,9 +198,9 @@ iser_initialize_task_headers(struct iscsi_task *task,
 		goto out;
 	}
 
-	dma_addr = ib_dma_map_single(device->ib_device, (void *)tx_desc,
+	dma_addr = dma_map_single(device->ib_device->dma_device, (void *)tx_desc,
 				ISER_HEADERS_LEN, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(device->ib_device, dma_addr)) {
+	if (dma_mapping_error(device->ib_device->dma_device, dma_addr)) {
 		ret = -ENOMEM;
 		goto out;
 	}
@@ -375,7 +375,7 @@ static void iscsi_iser_cleanup_task(struct iscsi_task *task)
 		return;
 
 	if (likely(tx_desc->mapped)) {
-		ib_dma_unmap_single(device->ib_device, tx_desc->dma_addr,
+		dma_unmap_single(device->ib_device->dma_device, tx_desc->dma_addr,
 				    ISER_HEADERS_LEN, DMA_TO_DEVICE);
 		tx_desc->mapped = false;
 	}
diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
index 81ae2e30dd12..2e65513833bd 100644
--- a/drivers/infiniband/ulp/iser/iser_initiator.c
+++ b/drivers/infiniband/ulp/iser/iser_initiator.c
@@ -164,7 +164,7 @@ static void iser_create_send_desc(struct iser_conn	*iser_conn,
 {
 	struct iser_device *device = iser_conn->ib_conn.device;
 
-	ib_dma_sync_single_for_cpu(device->ib_device,
+	dma_sync_single_for_cpu(device->ib_device->dma_device,
 		tx_desc->dma_addr, ISER_HEADERS_LEN, DMA_TO_DEVICE);
 
 	memset(&tx_desc->iser_header, 0, sizeof(struct iser_ctrl));
@@ -180,10 +180,10 @@ static void iser_free_login_buf(struct iser_conn *iser_conn)
 	if (!desc->req)
 		return;
 
-	ib_dma_unmap_single(device->ib_device, desc->req_dma,
+	dma_unmap_single(device->ib_device->dma_device, desc->req_dma,
 			    ISCSI_DEF_MAX_RECV_SEG_LEN, DMA_TO_DEVICE);
 
-	ib_dma_unmap_single(device->ib_device, desc->rsp_dma,
+	dma_unmap_single(device->ib_device->dma_device, desc->rsp_dma,
 			    ISER_RX_LOGIN_SIZE, DMA_FROM_DEVICE);
 
 	kfree(desc->req);
@@ -203,10 +203,10 @@ static int iser_alloc_login_buf(struct iser_conn *iser_conn)
 	if (!desc->req)
 		return -ENOMEM;
 
-	desc->req_dma = ib_dma_map_single(device->ib_device, desc->req,
+	desc->req_dma = dma_map_single(device->ib_device->dma_device, desc->req,
 					  ISCSI_DEF_MAX_RECV_SEG_LEN,
 					  DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(device->ib_device,
+	if (dma_mapping_error(device->ib_device->dma_device,
 				desc->req_dma))
 		goto free_req;
 
@@ -214,10 +214,10 @@ static int iser_alloc_login_buf(struct iser_conn *iser_conn)
 	if (!desc->rsp)
 		goto unmap_req;
 
-	desc->rsp_dma = ib_dma_map_single(device->ib_device, desc->rsp,
+	desc->rsp_dma = dma_map_single(device->ib_device->dma_device, desc->rsp,
 					   ISER_RX_LOGIN_SIZE,
 					   DMA_FROM_DEVICE);
-	if (ib_dma_mapping_error(device->ib_device,
+	if (dma_mapping_error(device->ib_device->dma_device,
 				desc->rsp_dma))
 		goto free_rsp;
 
@@ -226,7 +226,7 @@ static int iser_alloc_login_buf(struct iser_conn *iser_conn)
 free_rsp:
 	kfree(desc->rsp);
 unmap_req:
-	ib_dma_unmap_single(device->ib_device, desc->req_dma,
+	dma_unmap_single(device->ib_device->dma_device, desc->req_dma,
 			    ISCSI_DEF_MAX_RECV_SEG_LEN,
 			    DMA_TO_DEVICE);
 free_req:
@@ -265,9 +265,9 @@ int iser_alloc_rx_descriptors(struct iser_conn *iser_conn,
 	rx_desc = iser_conn->rx_descs;
 
 	for (i = 0; i < iser_conn->qp_max_recv_dtos; i++, rx_desc++)  {
-		dma_addr = ib_dma_map_single(device->ib_device, (void *)rx_desc,
+		dma_addr = dma_map_single(device->ib_device->dma_device, (void *)rx_desc,
 					ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(device->ib_device, dma_addr))
+		if (dma_mapping_error(device->ib_device->dma_device, dma_addr))
 			goto rx_desc_dma_map_failed;
 
 		rx_desc->dma_addr = dma_addr;
@@ -284,7 +284,7 @@ int iser_alloc_rx_descriptors(struct iser_conn *iser_conn,
 rx_desc_dma_map_failed:
 	rx_desc = iser_conn->rx_descs;
 	for (j = 0; j < i; j++, rx_desc++)
-		ib_dma_unmap_single(device->ib_device, rx_desc->dma_addr,
+		dma_unmap_single(device->ib_device->dma_device, rx_desc->dma_addr,
 				    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 	kfree(iser_conn->rx_descs);
 	iser_conn->rx_descs = NULL;
@@ -309,7 +309,7 @@ void iser_free_rx_descriptors(struct iser_conn *iser_conn)
 
 	rx_desc = iser_conn->rx_descs;
 	for (i = 0; i < iser_conn->qp_max_recv_dtos; i++, rx_desc++)
-		ib_dma_unmap_single(device->ib_device, rx_desc->dma_addr,
+		dma_unmap_single(device->ib_device->dma_device, rx_desc->dma_addr,
 				    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 	kfree(iser_conn->rx_descs);
 	/* make sure we never redo any unmapping */
@@ -522,12 +522,12 @@ int iser_send_control(struct iscsi_conn *conn,
 			goto send_control_error;
 		}
 
-		ib_dma_sync_single_for_cpu(device->ib_device, desc->req_dma,
+		dma_sync_single_for_cpu(device->ib_device->dma_device, desc->req_dma,
 					   task->data_count, DMA_TO_DEVICE);
 
 		memcpy(desc->req, task->data, task->data_count);
 
-		ib_dma_sync_single_for_device(device->ib_device, desc->req_dma,
+		dma_sync_single_for_device(device->ib_device->dma_device, desc->req_dma,
 					      task->data_count, DMA_TO_DEVICE);
 
 		tx_dsg->addr = desc->req_dma;
@@ -570,7 +570,7 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(ib_conn->device->ib_device,
+	dma_sync_single_for_cpu(ib_conn->device->ib_device->dma_device,
 				   desc->rsp_dma, ISER_RX_LOGIN_SIZE,
 				   DMA_FROM_DEVICE);
 
@@ -583,7 +583,7 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc)
 
 	iscsi_iser_recv(iser_conn->iscsi_conn, hdr, data, length);
 
-	ib_dma_sync_single_for_device(ib_conn->device->ib_device,
+	dma_sync_single_for_device(ib_conn->device->ib_device->dma_device,
 				      desc->rsp_dma, ISER_RX_LOGIN_SIZE,
 				      DMA_FROM_DEVICE);
 
@@ -655,7 +655,7 @@ void iser_task_rsp(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(ib_conn->device->ib_device,
+	dma_sync_single_for_cpu(ib_conn->device->ib_device->dma_device,
 				   desc->dma_addr, ISER_RX_PAYLOAD_SIZE,
 				   DMA_FROM_DEVICE);
 
@@ -673,7 +673,7 @@ void iser_task_rsp(struct ib_cq *cq, struct ib_wc *wc)
 
 	iscsi_iser_recv(iser_conn->iscsi_conn, hdr, desc->data, length);
 
-	ib_dma_sync_single_for_device(ib_conn->device->ib_device,
+	dma_sync_single_for_device(ib_conn->device->ib_device->dma_device,
 				      desc->dma_addr, ISER_RX_PAYLOAD_SIZE,
 				      DMA_FROM_DEVICE);
 
@@ -724,7 +724,7 @@ void iser_dataout_comp(struct ib_cq *cq, struct ib_wc *wc)
 	if (unlikely(wc->status != IB_WC_SUCCESS))
 		iser_err_comp(wc, "dataout");
 
-	ib_dma_unmap_single(device->ib_device, desc->dma_addr,
+	dma_unmap_single(device->ib_device->dma_device, desc->dma_addr,
 			    ISER_HEADERS_LEN, DMA_TO_DEVICE);
 	kmem_cache_free(ig.desc_cache, desc);
 }
diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c
index 9c3e9ab53a41..ae89be7d69a9 100644
--- a/drivers/infiniband/ulp/iser/iser_memory.c
+++ b/drivers/infiniband/ulp/iser/iser_memory.c
@@ -145,9 +145,9 @@ static void iser_data_buf_dump(struct iser_data_buf *data,
 	for_each_sg(data->sg, sg, data->dma_nents, i)
 		iser_dbg("sg[%d] dma_addr:0x%lX page:0x%p "
 			 "off:0x%x sz:0x%x dma_len:0x%x\n",
-			 i, (unsigned long)ib_sg_dma_address(ibdev, sg),
+			 i, (unsigned long)sg_dma_address(sg),
 			 sg_page(sg), sg->offset,
-			 sg->length, ib_sg_dma_len(ibdev, sg));
+			 sg->length, sg_dma_len(sg));
 }
 
 static void iser_dump_page_vec(struct iser_page_vec *page_vec)
@@ -170,7 +170,7 @@ int iser_dma_map_task_data(struct iscsi_iser_task *iser_task,
 	iser_task->dir[iser_dir] = 1;
 	dev = iser_task->iser_conn->ib_conn.device->ib_device;
 
-	data->dma_nents = ib_dma_map_sg(dev, data->sg, data->size, dma_dir);
+	data->dma_nents = dma_map_sg(dev->dma_device, data->sg, data->size, dma_dir);
 	if (data->dma_nents == 0) {
 		iser_err("dma_map_sg failed!!!\n");
 		return -EINVAL;
@@ -185,7 +185,7 @@ void iser_dma_unmap_task_data(struct iscsi_iser_task *iser_task,
 	struct ib_device *dev;
 
 	dev = iser_task->iser_conn->ib_conn.device->ib_device;
-	ib_dma_unmap_sg(dev, data->sg, data->size, dir);
+	dma_unmap_sg(dev->dma_device, data->sg, data->size, dir);
 }
 
 static int
@@ -204,8 +204,8 @@ iser_reg_dma(struct iser_device *device, struct iser_data_buf *mem,
 		reg->rkey = device->pd->unsafe_global_rkey;
 	else
 		reg->rkey = 0;
-	reg->sge.addr = ib_sg_dma_address(device->ib_device, &sg[0]);
-	reg->sge.length = ib_sg_dma_len(device->ib_device, &sg[0]);
+	reg->sge.addr = sg_dma_address(&sg[0]);
+	reg->sge.length = sg_dma_len(&sg[0]);
 
 	iser_dbg("Single DMA entry: lkey=0x%x, rkey=0x%x, addr=0x%llx,"
 		 " length=0x%x\n", reg->sge.lkey, reg->rkey,
diff --git a/drivers/infiniband/ulp/iser/iser_verbs.c b/drivers/infiniband/ulp/iser/iser_verbs.c
index a4b791dfaa1d..7e0af444cff8 100644
--- a/drivers/infiniband/ulp/iser/iser_verbs.c
+++ b/drivers/infiniband/ulp/iser/iser_verbs.c
@@ -1074,7 +1074,7 @@ int iser_post_send(struct ib_conn *ib_conn, struct iser_tx_desc *tx_desc,
 	struct ib_send_wr *bad_wr, *wr = iser_tx_next_wr(tx_desc);
 	int ib_ret;
 
-	ib_dma_sync_single_for_device(ib_conn->device->ib_device,
+	dma_sync_single_for_device(ib_conn->device->ib_device->dma_device,
 				      tx_desc->dma_addr, ISER_HEADERS_LEN,
 				      DMA_TO_DEVICE);
 
diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index 6dd43f63238e..9a50910cf367 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -189,9 +189,9 @@ isert_alloc_rx_descriptors(struct isert_conn *isert_conn)
 	rx_desc = isert_conn->rx_descs;
 
 	for (i = 0; i < ISERT_QP_MAX_RECV_DTOS; i++, rx_desc++)  {
-		dma_addr = ib_dma_map_single(ib_dev, (void *)rx_desc,
+		dma_addr = dma_map_single(ib_dev->dma_device, (void *)rx_desc,
 					ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(ib_dev, dma_addr))
+		if (dma_mapping_error(ib_dev->dma_device, dma_addr))
 			goto dma_map_fail;
 
 		rx_desc->dma_addr = dma_addr;
@@ -208,7 +208,7 @@ isert_alloc_rx_descriptors(struct isert_conn *isert_conn)
 dma_map_fail:
 	rx_desc = isert_conn->rx_descs;
 	for (j = 0; j < i; j++, rx_desc++) {
-		ib_dma_unmap_single(ib_dev, rx_desc->dma_addr,
+		dma_unmap_single(ib_dev->dma_device, rx_desc->dma_addr,
 				    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 	}
 	kfree(isert_conn->rx_descs);
@@ -231,7 +231,7 @@ isert_free_rx_descriptors(struct isert_conn *isert_conn)
 
 	rx_desc = isert_conn->rx_descs;
 	for (i = 0; i < ISERT_QP_MAX_RECV_DTOS; i++, rx_desc++)  {
-		ib_dma_unmap_single(ib_dev, rx_desc->dma_addr,
+		dma_unmap_single(ib_dev->dma_device, rx_desc->dma_addr,
 				    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 	}
 
@@ -414,11 +414,11 @@ isert_free_login_buf(struct isert_conn *isert_conn)
 {
 	struct ib_device *ib_dev = isert_conn->device->ib_device;
 
-	ib_dma_unmap_single(ib_dev, isert_conn->login_rsp_dma,
+	dma_unmap_single(ib_dev->dma_device, isert_conn->login_rsp_dma,
 			    ISER_RX_PAYLOAD_SIZE, DMA_TO_DEVICE);
 	kfree(isert_conn->login_rsp_buf);
 
-	ib_dma_unmap_single(ib_dev, isert_conn->login_req_dma,
+	dma_unmap_single(ib_dev->dma_device, isert_conn->login_req_dma,
 			    ISER_RX_PAYLOAD_SIZE,
 			    DMA_FROM_DEVICE);
 	kfree(isert_conn->login_req_buf);
@@ -437,10 +437,10 @@ isert_alloc_login_buf(struct isert_conn *isert_conn,
 		return -ENOMEM;
 	}
 
-	isert_conn->login_req_dma = ib_dma_map_single(ib_dev,
+	isert_conn->login_req_dma = dma_map_single(ib_dev->dma_device,
 				isert_conn->login_req_buf,
 				ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
-	ret = ib_dma_mapping_error(ib_dev, isert_conn->login_req_dma);
+	ret = dma_mapping_error(ib_dev->dma_device, isert_conn->login_req_dma);
 	if (ret) {
 		isert_err("login_req_dma mapping error: %d\n", ret);
 		isert_conn->login_req_dma = 0;
@@ -453,10 +453,10 @@ isert_alloc_login_buf(struct isert_conn *isert_conn,
 		goto out_unmap_login_req_buf;
 	}
 
-	isert_conn->login_rsp_dma = ib_dma_map_single(ib_dev,
+	isert_conn->login_rsp_dma = dma_map_single(ib_dev->dma_device,
 					isert_conn->login_rsp_buf,
 					ISER_RX_PAYLOAD_SIZE, DMA_TO_DEVICE);
-	ret = ib_dma_mapping_error(ib_dev, isert_conn->login_rsp_dma);
+	ret = dma_mapping_error(ib_dev->dma_device, isert_conn->login_rsp_dma);
 	if (ret) {
 		isert_err("login_rsp_dma mapping error: %d\n", ret);
 		isert_conn->login_rsp_dma = 0;
@@ -468,7 +468,7 @@ isert_alloc_login_buf(struct isert_conn *isert_conn,
 out_free_login_rsp_buf:
 	kfree(isert_conn->login_rsp_buf);
 out_unmap_login_req_buf:
-	ib_dma_unmap_single(ib_dev, isert_conn->login_req_dma,
+	dma_unmap_single(ib_dev->dma_device, isert_conn->login_req_dma,
 			    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 out_free_login_req_buf:
 	kfree(isert_conn->login_req_buf);
@@ -858,7 +858,7 @@ isert_login_post_send(struct isert_conn *isert_conn, struct iser_tx_desc *tx_des
 	struct ib_send_wr send_wr, *send_wr_failed;
 	int ret;
 
-	ib_dma_sync_single_for_device(ib_dev, tx_desc->dma_addr,
+	dma_sync_single_for_device(ib_dev->dma_device, tx_desc->dma_addr,
 				      ISER_HEADERS_LEN, DMA_TO_DEVICE);
 
 	tx_desc->tx_cqe.done = isert_login_send_done;
@@ -885,7 +885,7 @@ isert_create_send_desc(struct isert_conn *isert_conn,
 	struct isert_device *device = isert_conn->device;
 	struct ib_device *ib_dev = device->ib_device;
 
-	ib_dma_sync_single_for_cpu(ib_dev, tx_desc->dma_addr,
+	dma_sync_single_for_cpu(ib_dev->dma_device, tx_desc->dma_addr,
 				   ISER_HEADERS_LEN, DMA_TO_DEVICE);
 
 	memset(&tx_desc->iser_header, 0, sizeof(struct iser_ctrl));
@@ -907,10 +907,10 @@ isert_init_tx_hdrs(struct isert_conn *isert_conn,
 	struct ib_device *ib_dev = device->ib_device;
 	u64 dma_addr;
 
-	dma_addr = ib_dma_map_single(ib_dev, (void *)tx_desc,
+	dma_addr = dma_map_single(ib_dev->dma_device, (void *)tx_desc,
 			ISER_HEADERS_LEN, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(ib_dev, dma_addr)) {
-		isert_err("ib_dma_mapping_error() failed\n");
+	if (dma_mapping_error(ib_dev->dma_device, dma_addr)) {
+		isert_err("dma_mapping_error() failed\n");
 		return -ENOMEM;
 	}
 
@@ -996,12 +996,12 @@ isert_put_login_tx(struct iscsi_conn *conn, struct iscsi_login *login,
 	if (length > 0) {
 		struct ib_sge *tx_dsg = &tx_desc->tx_sg[1];
 
-		ib_dma_sync_single_for_cpu(ib_dev, isert_conn->login_rsp_dma,
+		dma_sync_single_for_cpu(ib_dev->dma_device, isert_conn->login_rsp_dma,
 					   length, DMA_TO_DEVICE);
 
 		memcpy(isert_conn->login_rsp_buf, login->rsp_buf, length);
 
-		ib_dma_sync_single_for_device(ib_dev, isert_conn->login_rsp_dma,
+		dma_sync_single_for_device(ib_dev->dma_device, isert_conn->login_rsp_dma,
 					      length, DMA_TO_DEVICE);
 
 		tx_dsg->addr	= isert_conn->login_rsp_dma;
@@ -1404,7 +1404,7 @@ isert_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(ib_dev, rx_desc->dma_addr,
+	dma_sync_single_for_cpu(ib_dev->dma_device, rx_desc->dma_addr,
 			ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 
 	isert_dbg("DMA: 0x%llx, iSCSI opcode: 0x%02x, ITT: 0x%08x, flags: 0x%02x dlen: %d\n",
@@ -1439,7 +1439,7 @@ isert_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	isert_rx_opcode(isert_conn, rx_desc,
 			read_stag, read_va, write_stag, write_va);
 
-	ib_dma_sync_single_for_device(ib_dev, rx_desc->dma_addr,
+	dma_sync_single_for_device(ib_dev->dma_device, rx_desc->dma_addr,
 			ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 }
 
@@ -1454,7 +1454,7 @@ isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(ib_dev, isert_conn->login_req_dma,
+	dma_sync_single_for_cpu(ib_dev->dma_device, isert_conn->login_req_dma,
 			ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 
 	isert_conn->login_req_len = wc->byte_len - ISER_HEADERS_LEN;
@@ -1470,7 +1470,7 @@ isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	complete(&isert_conn->login_req_comp);
 	mutex_unlock(&isert_conn->mutex);
 
-	ib_dma_sync_single_for_device(ib_dev, isert_conn->login_req_dma,
+	dma_sync_single_for_device(ib_dev->dma_device, isert_conn->login_req_dma,
 				ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 }
 
@@ -1578,7 +1578,7 @@ isert_unmap_tx_desc(struct iser_tx_desc *tx_desc, struct ib_device *ib_dev)
 {
 	if (tx_desc->dma_addr != 0) {
 		isert_dbg("unmap single for tx_desc->dma_addr\n");
-		ib_dma_unmap_single(ib_dev, tx_desc->dma_addr,
+		dma_unmap_single(ib_dev->dma_device, tx_desc->dma_addr,
 				    ISER_HEADERS_LEN, DMA_TO_DEVICE);
 		tx_desc->dma_addr = 0;
 	}
@@ -1590,7 +1590,7 @@ isert_completion_put(struct iser_tx_desc *tx_desc, struct isert_cmd *isert_cmd,
 {
 	if (isert_cmd->pdu_buf_dma != 0) {
 		isert_dbg("unmap single for isert_cmd->pdu_buf_dma\n");
-		ib_dma_unmap_single(ib_dev, isert_cmd->pdu_buf_dma,
+		dma_unmap_single(ib_dev->dma_device, isert_cmd->pdu_buf_dma,
 				    isert_cmd->pdu_buf_len, DMA_TO_DEVICE);
 		isert_cmd->pdu_buf_dma = 0;
 	}
@@ -1848,7 +1848,7 @@ isert_put_response(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
 		hton24(hdr->dlength, (u32)cmd->se_cmd.scsi_sense_length);
 		pdu_len = cmd->se_cmd.scsi_sense_length + padding;
 
-		isert_cmd->pdu_buf_dma = ib_dma_map_single(ib_dev,
+		isert_cmd->pdu_buf_dma = dma_map_single(ib_dev->dma_device,
 				(void *)cmd->sense_buffer, pdu_len,
 				DMA_TO_DEVICE);
 
@@ -1975,7 +1975,7 @@ isert_put_reject(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
 	isert_init_tx_hdrs(isert_conn, &isert_cmd->tx_desc);
 
 	hton24(hdr->dlength, ISCSI_HDR_LEN);
-	isert_cmd->pdu_buf_dma = ib_dma_map_single(ib_dev,
+	isert_cmd->pdu_buf_dma = dma_map_single(ib_dev->dma_device,
 			(void *)cmd->buf_ptr, ISCSI_HDR_LEN,
 			DMA_TO_DEVICE);
 	isert_cmd->pdu_buf_len = ISCSI_HDR_LEN;
@@ -2016,7 +2016,7 @@ isert_put_text_rsp(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
 		struct ib_sge *tx_dsg = &isert_cmd->tx_desc.tx_sg[1];
 		void *txt_rsp_buf = cmd->buf_ptr;
 
-		isert_cmd->pdu_buf_dma = ib_dma_map_single(ib_dev,
+		isert_cmd->pdu_buf_dma = dma_map_single(ib_dev->dma_device,
 				txt_rsp_buf, txt_rsp_len, DMA_TO_DEVICE);
 
 		isert_cmd->pdu_buf_len = txt_rsp_len;
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index e4cb86cdca34..6c8d6847f920 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -232,9 +232,9 @@ static struct srp_iu *srp_alloc_iu(struct srp_host *host, size_t size,
 	if (!iu->buf)
 		goto out_free_iu;
 
-	iu->dma = ib_dma_map_single(host->srp_dev->dev, iu->buf, size,
+	iu->dma = dma_map_single(host->srp_dev->dev->dma_device, iu->buf, size,
 				    direction);
-	if (ib_dma_mapping_error(host->srp_dev->dev, iu->dma))
+	if (dma_mapping_error(host->srp_dev->dev->dma_device, iu->dma))
 		goto out_free_buf;
 
 	iu->size      = size;
@@ -255,7 +255,7 @@ static void srp_free_iu(struct srp_host *host, struct srp_iu *iu)
 	if (!iu)
 		return;
 
-	ib_dma_unmap_single(host->srp_dev->dev, iu->dma, iu->size,
+	dma_unmap_single(host->srp_dev->dev->dma_device, iu->dma, iu->size,
 			    iu->direction);
 	kfree(iu->buf);
 	kfree(iu);
@@ -838,7 +838,7 @@ static void srp_free_req_data(struct srp_target_port *target,
 			kfree(req->map_page);
 		}
 		if (req->indirect_dma_addr) {
-			ib_dma_unmap_single(ibdev, req->indirect_dma_addr,
+			dma_unmap_single(ibdev->dma_device, req->indirect_dma_addr,
 					    target->indirect_size,
 					    DMA_TO_DEVICE);
 		}
@@ -883,10 +883,10 @@ static int srp_alloc_req_data(struct srp_rdma_ch *ch)
 		if (!req->indirect_desc)
 			goto out;
 
-		dma_addr = ib_dma_map_single(ibdev, req->indirect_desc,
+		dma_addr = dma_map_single(ibdev->dma_device, req->indirect_desc,
 					     target->indirect_size,
 					     DMA_TO_DEVICE);
-		if (ib_dma_mapping_error(ibdev, dma_addr))
+		if (dma_mapping_error(ibdev->dma_device, dma_addr))
 			goto out;
 
 		req->indirect_dma_addr = dma_addr;
@@ -1091,7 +1091,7 @@ static void srp_unmap_data(struct scsi_cmnd *scmnd,
 			ib_fmr_pool_unmap(*pfmr);
 	}
 
-	ib_dma_unmap_sg(ibdev, scsi_sglist(scmnd), scsi_sg_count(scmnd),
+	dma_unmap_sg(ibdev->dma_device, scsi_sglist(scmnd), scsi_sg_count(scmnd),
 			scmnd->sc_data_direction);
 }
 
@@ -1424,9 +1424,8 @@ static int srp_map_sg_entry(struct srp_map_state *state,
 {
 	struct srp_target_port *target = ch->target;
 	struct srp_device *dev = target->srp_host->srp_dev;
-	struct ib_device *ibdev = dev->dev;
-	dma_addr_t dma_addr = ib_sg_dma_address(ibdev, sg);
-	unsigned int dma_len = ib_sg_dma_len(ibdev, sg);
+	dma_addr_t dma_addr = sg_dma_address(sg);
+	unsigned int dma_len = sg_dma_len(sg);
 	unsigned int len = 0;
 	int ret;
 
@@ -1520,13 +1519,12 @@ static int srp_map_sg_dma(struct srp_map_state *state, struct srp_rdma_ch *ch,
 			  int count)
 {
 	struct srp_target_port *target = ch->target;
-	struct srp_device *dev = target->srp_host->srp_dev;
 	struct scatterlist *sg;
 	int i;
 
 	for_each_sg(scat, sg, count, i) {
-		srp_map_desc(state, ib_sg_dma_address(dev->dev, sg),
-			     ib_sg_dma_len(dev->dev, sg),
+		srp_map_desc(state, sg_dma_address(sg),
+			     sg_dma_len(sg),
 			     target->pd->unsafe_global_rkey);
 	}
 
@@ -1654,7 +1652,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 	dev = target->srp_host->srp_dev;
 	ibdev = dev->dev;
 
-	count = ib_dma_map_sg(ibdev, scat, nents, scmnd->sc_data_direction);
+	count = dma_map_sg(ibdev->dma_device, scat, nents, scmnd->sc_data_direction);
 	if (unlikely(count == 0))
 		return -EIO;
 
@@ -1686,9 +1684,9 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 		 */
 		struct srp_direct_buf *buf = (void *) cmd->add_data;
 
-		buf->va  = cpu_to_be64(ib_sg_dma_address(ibdev, scat));
+		buf->va  = cpu_to_be64(sg_dma_address(scat));
 		buf->key = cpu_to_be32(pd->unsafe_global_rkey);
-		buf->len = cpu_to_be32(ib_sg_dma_len(ibdev, scat));
+		buf->len = cpu_to_be32(sg_dma_len(scat));
 
 		req->nmdesc = 0;
 		/* Debugging help. */
@@ -1702,7 +1700,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 	 */
 	indirect_hdr = (void *) cmd->add_data;
 
-	ib_dma_sync_single_for_cpu(ibdev, req->indirect_dma_addr,
+	dma_sync_single_for_cpu(ibdev->dma_device, req->indirect_dma_addr,
 				   target->indirect_size, DMA_TO_DEVICE);
 
 	memset(&state, 0, sizeof(state));
@@ -1784,7 +1782,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 	else
 		cmd->data_in_desc_cnt = count;
 
-	ib_dma_sync_single_for_device(ibdev, req->indirect_dma_addr, table_len,
+	dma_sync_single_for_device(ibdev->dma_device, req->indirect_dma_addr, table_len,
 				      DMA_TO_DEVICE);
 
 map_complete:
@@ -2070,9 +2068,9 @@ static int srp_response_common(struct srp_rdma_ch *ch, s32 req_delta,
 		return 1;
 	}
 
-	ib_dma_sync_single_for_cpu(dev, iu->dma, len, DMA_TO_DEVICE);
+	dma_sync_single_for_cpu(dev->dma_device, iu->dma, len, DMA_TO_DEVICE);
 	memcpy(iu->buf, rsp, len);
-	ib_dma_sync_single_for_device(dev, iu->dma, len, DMA_TO_DEVICE);
+	dma_sync_single_for_device(dev->dma_device, iu->dma, len, DMA_TO_DEVICE);
 
 	err = srp_post_send(ch, iu, len);
 	if (err) {
@@ -2130,7 +2128,7 @@ static void srp_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(dev, iu->dma, ch->max_ti_iu_len,
+	dma_sync_single_for_cpu(dev->dma_device, iu->dma, ch->max_ti_iu_len,
 				   DMA_FROM_DEVICE);
 
 	opcode = *(u8 *) iu->buf;
@@ -2167,7 +2165,7 @@ static void srp_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		break;
 	}
 
-	ib_dma_sync_single_for_device(dev, iu->dma, ch->max_ti_iu_len,
+	dma_sync_single_for_device(dev->dma_device, iu->dma, ch->max_ti_iu_len,
 				      DMA_FROM_DEVICE);
 
 	res = srp_post_recv(ch, iu);
@@ -2253,7 +2251,7 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
 
 	req = &ch->req_ring[idx];
 	dev = target->srp_host->srp_dev->dev;
-	ib_dma_sync_single_for_cpu(dev, iu->dma, target->max_iu_len,
+	dma_sync_single_for_cpu(dev->dma_device, iu->dma, target->max_iu_len,
 				   DMA_TO_DEVICE);
 
 	scmnd->host_scribble = (void *) req;
@@ -2288,7 +2286,7 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
 		goto err_iu;
 	}
 
-	ib_dma_sync_single_for_device(dev, iu->dma, target->max_iu_len,
+	dma_sync_single_for_device(dev->dma_device, iu->dma, target->max_iu_len,
 				      DMA_TO_DEVICE);
 
 	if (srp_post_send(ch, iu, len)) {
@@ -2675,7 +2673,7 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
 		return -1;
 	}
 
-	ib_dma_sync_single_for_cpu(dev, iu->dma, sizeof *tsk_mgmt,
+	dma_sync_single_for_cpu(dev->dma_device, iu->dma, sizeof *tsk_mgmt,
 				   DMA_TO_DEVICE);
 	tsk_mgmt = iu->buf;
 	memset(tsk_mgmt, 0, sizeof *tsk_mgmt);
@@ -2686,7 +2684,7 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
 	tsk_mgmt->tsk_mgmt_func = func;
 	tsk_mgmt->task_tag	= req_tag;
 
-	ib_dma_sync_single_for_device(dev, iu->dma, sizeof *tsk_mgmt,
+	dma_sync_single_for_device(dev->dma_device, iu->dma, sizeof *tsk_mgmt,
 				      DMA_TO_DEVICE);
 	if (srp_post_send(ch, iu, sizeof(*tsk_mgmt))) {
 		srp_put_tx_iu(ch, iu, SRP_IU_TSK_MGMT);
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 6004bcdfc12b..63bcd6be4633 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -603,8 +603,8 @@ static struct srpt_ioctx *srpt_alloc_ioctx(struct srpt_device *sdev,
 	if (!ioctx->buf)
 		goto err_free_ioctx;
 
-	ioctx->dma = ib_dma_map_single(sdev->device, ioctx->buf, dma_size, dir);
-	if (ib_dma_mapping_error(sdev->device, ioctx->dma))
+	ioctx->dma = dma_map_single(sdev->device->dma_device, ioctx->buf, dma_size, dir);
+	if (dma_mapping_error(sdev->device->dma_device, ioctx->dma))
 		goto err_free_buf;
 
 	if (dir == DMA_FROM_DEVICE)
@@ -629,7 +629,7 @@ static void srpt_free_ioctx(struct srpt_device *sdev, struct srpt_ioctx *ioctx,
 	if (!ioctx)
 		return;
 
-	ib_dma_unmap_single(sdev->device, ioctx->dma, dma_size, dir);
+	dma_unmap_single(sdev->device->dma_device, ioctx->dma, dma_size, dir);
 	kfree(ioctx->buf);
 	kfree(ioctx);
 }
@@ -1465,7 +1465,7 @@ static void srpt_handle_new_iu(struct srpt_rdma_ch *ch,
 	BUG_ON(!ch);
 	BUG_ON(!recv_ioctx);
 
-	ib_dma_sync_single_for_cpu(ch->sport->sdev->device,
+	dma_sync_single_for_cpu(ch->sport->sdev->device->dma_device,
 				   recv_ioctx->ioctx.dma, srp_max_req_size,
 				   DMA_FROM_DEVICE);
 
@@ -2330,7 +2330,7 @@ static void srpt_queue_response(struct se_cmd *cmd)
 		goto out;
 	}
 
-	ib_dma_sync_single_for_device(sdev->device, ioctx->ioctx.dma, resp_len,
+	dma_sync_single_for_device(sdev->device->dma_device, ioctx->ioctx.dma, resp_len,
 				      DMA_TO_DEVICE);
 
 	sge.addr = ioctx->ioctx.dma;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 694b3ce8a4f5..b0511ecf4851 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -185,7 +185,7 @@ static inline size_t nvme_rdma_inline_data_size(struct nvme_rdma_queue *queue)
 static void nvme_rdma_free_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
 		size_t capsule_size, enum dma_data_direction dir)
 {
-	ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
+	dma_unmap_single(ibdev->dma_device, qe->dma, capsule_size, dir);
 	kfree(qe->data);
 }
 
@@ -196,8 +196,8 @@ static int nvme_rdma_alloc_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
 	if (!qe->data)
 		return -ENOMEM;
 
-	qe->dma = ib_dma_map_single(ibdev, qe->data, capsule_size, dir);
-	if (ib_dma_mapping_error(ibdev, qe->dma)) {
+	qe->dma = dma_map_single(ibdev->dma_device, qe->data, capsule_size, dir);
+	if (dma_mapping_error(ibdev->dma_device, qe->dma)) {
 		kfree(qe->data);
 		return -ENOMEM;
 	}
@@ -871,7 +871,7 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 		}
 	}
 
-	ib_dma_unmap_sg(ibdev, req->sg_table.sgl,
+	dma_unmap_sg(ibdev->dma_device, req->sg_table.sgl,
 			req->nents, rq_data_dir(rq) ==
 				    WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
 
@@ -987,7 +987,7 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 	BUG_ON(nents > rq->nr_phys_segments);
 	req->nents = nents;
 
-	count = ib_dma_map_sg(ibdev, req->sg_table.sgl, nents,
+	count = dma_map_sg(ibdev->dma_device, req->sg_table.sgl, nents,
 		    rq_data_dir(rq) == WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
 	if (unlikely(count <= 0)) {
 		sg_free_table_chained(&req->sg_table, true);
@@ -1114,7 +1114,7 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg, int aer_idx)
 	if (WARN_ON_ONCE(aer_idx != 0))
 		return;
 
-	ib_dma_sync_single_for_cpu(dev, sqe->dma, sizeof(*cmd), DMA_TO_DEVICE);
+	dma_sync_single_for_cpu(dev->dma_device, sqe->dma, sizeof(*cmd), DMA_TO_DEVICE);
 
 	memset(cmd, 0, sizeof(*cmd));
 	cmd->common.opcode = nvme_admin_async_event;
@@ -1122,7 +1122,7 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg, int aer_idx)
 	cmd->common.flags |= NVME_CMD_SGL_METABUF;
 	nvme_rdma_set_sg_null(cmd);
 
-	ib_dma_sync_single_for_device(dev, sqe->dma, sizeof(*cmd),
+	dma_sync_single_for_device(dev->dma_device, sqe->dma, sizeof(*cmd),
 			DMA_TO_DEVICE);
 
 	ret = nvme_rdma_post_send(queue, sqe, &sge, 1, NULL, false);
@@ -1179,7 +1179,7 @@ static int __nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc, int tag)
 		return 0;
 	}
 
-	ib_dma_sync_single_for_cpu(ibdev, qe->dma, len, DMA_FROM_DEVICE);
+	dma_sync_single_for_cpu(ibdev->dma_device, qe->dma, len, DMA_FROM_DEVICE);
 	/*
 	 * AEN requests are special as they don't time out and can
 	 * survive any kind of queue freeze and often don't respond to
@@ -1191,7 +1191,7 @@ static int __nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc, int tag)
 		nvme_complete_async_event(&queue->ctrl->ctrl, cqe);
 	else
 		ret = nvme_rdma_process_nvme_rsp(queue, cqe, wc, tag);
-	ib_dma_sync_single_for_device(ibdev, qe->dma, len, DMA_FROM_DEVICE);
+	dma_sync_single_for_device(ibdev->dma_device, qe->dma, len, DMA_FROM_DEVICE);
 
 	nvme_rdma_post_recv(queue, qe);
 	return ret;
@@ -1490,7 +1490,7 @@ static int nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 		return BLK_MQ_RQ_QUEUE_BUSY;
 
 	dev = queue->device->dev;
-	ib_dma_sync_single_for_cpu(dev, sqe->dma,
+	dma_sync_single_for_cpu(dev->dma_device, sqe->dma,
 			sizeof(struct nvme_command), DMA_TO_DEVICE);
 
 	ret = nvme_setup_cmd(ns, rq, c);
@@ -1509,7 +1509,7 @@ static int nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 		goto err;
 	}
 
-	ib_dma_sync_single_for_device(dev, sqe->dma,
+	dma_sync_single_for_device(dev->dma_device, sqe->dma,
 			sizeof(struct nvme_command), DMA_TO_DEVICE);
 
 	if (rq->cmd_type == REQ_TYPE_FS && req_op(rq) == REQ_OP_FLUSH)
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index e619a6f57736..3124ef67c55a 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -246,9 +246,9 @@ static int nvmet_rdma_alloc_cmd(struct nvmet_rdma_device *ndev,
 	if (!c->nvme_cmd)
 		goto out;
 
-	c->sge[0].addr = ib_dma_map_single(ndev->device, c->nvme_cmd,
+	c->sge[0].addr = dma_map_single(ndev->device->dma_device, c->nvme_cmd,
 			sizeof(*c->nvme_cmd), DMA_FROM_DEVICE);
-	if (ib_dma_mapping_error(ndev->device, c->sge[0].addr))
+	if (dma_mapping_error(ndev->device->dma_device, c->sge[0].addr))
 		goto out_free_cmd;
 
 	c->sge[0].length = sizeof(*c->nvme_cmd);
@@ -259,10 +259,10 @@ static int nvmet_rdma_alloc_cmd(struct nvmet_rdma_device *ndev,
 				get_order(NVMET_RDMA_INLINE_DATA_SIZE));
 		if (!c->inline_page)
 			goto out_unmap_cmd;
-		c->sge[1].addr = ib_dma_map_page(ndev->device,
+		c->sge[1].addr = dma_map_page(ndev->device->dma_device,
 				c->inline_page, 0, NVMET_RDMA_INLINE_DATA_SIZE,
 				DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(ndev->device, c->sge[1].addr))
+		if (dma_mapping_error(ndev->device->dma_device, c->sge[1].addr))
 			goto out_free_inline_page;
 		c->sge[1].length = NVMET_RDMA_INLINE_DATA_SIZE;
 		c->sge[1].lkey = ndev->pd->local_dma_lkey;
@@ -282,7 +282,7 @@ static int nvmet_rdma_alloc_cmd(struct nvmet_rdma_device *ndev,
 				get_order(NVMET_RDMA_INLINE_DATA_SIZE));
 	}
 out_unmap_cmd:
-	ib_dma_unmap_single(ndev->device, c->sge[0].addr,
+	dma_unmap_single(ndev->device->dma_device, c->sge[0].addr,
 			sizeof(*c->nvme_cmd), DMA_FROM_DEVICE);
 out_free_cmd:
 	kfree(c->nvme_cmd);
@@ -295,12 +295,12 @@ static void nvmet_rdma_free_cmd(struct nvmet_rdma_device *ndev,
 		struct nvmet_rdma_cmd *c, bool admin)
 {
 	if (!admin) {
-		ib_dma_unmap_page(ndev->device, c->sge[1].addr,
+		dma_unmap_page(ndev->device->dma_device, c->sge[1].addr,
 				NVMET_RDMA_INLINE_DATA_SIZE, DMA_FROM_DEVICE);
 		__free_pages(c->inline_page,
 				get_order(NVMET_RDMA_INLINE_DATA_SIZE));
 	}
-	ib_dma_unmap_single(ndev->device, c->sge[0].addr,
+	dma_unmap_single(ndev->device->dma_device, c->sge[0].addr,
 				sizeof(*c->nvme_cmd), DMA_FROM_DEVICE);
 	kfree(c->nvme_cmd);
 }
@@ -350,9 +350,9 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
 	if (!r->req.rsp)
 		goto out;
 
-	r->send_sge.addr = ib_dma_map_single(ndev->device, r->req.rsp,
+	r->send_sge.addr = dma_map_single(ndev->device->dma_device, r->req.rsp,
 			sizeof(*r->req.rsp), DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(ndev->device, r->send_sge.addr))
+	if (dma_mapping_error(ndev->device->dma_device, r->send_sge.addr))
 		goto out_free_rsp;
 
 	r->send_sge.length = sizeof(*r->req.rsp);
@@ -378,7 +378,7 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
 static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev,
 		struct nvmet_rdma_rsp *r)
 {
-	ib_dma_unmap_single(ndev->device, r->send_sge.addr,
+	dma_unmap_single(ndev->device->dma_device, r->send_sge.addr,
 				sizeof(*r->req.rsp), DMA_TO_DEVICE);
 	kfree(r->req.rsp);
 }
diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
index 14576977200f..f2c0c60eee1d 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
@@ -925,21 +925,21 @@ kiblnd_rd_msg_size(struct kib_rdma_desc *rd, int msgtype, int n)
 static inline __u64
 kiblnd_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
 {
-	return ib_dma_mapping_error(dev, dma_addr);
+	return dma_mapping_error(dev->dma_device, dma_addr);
 }
 
 static inline __u64 kiblnd_dma_map_single(struct ib_device *dev,
 					  void *msg, size_t size,
 					  enum dma_data_direction direction)
 {
-	return ib_dma_map_single(dev, msg, size, direction);
+	return dma_map_single(dev->dma_device, msg, size, direction);
 }
 
 static inline void kiblnd_dma_unmap_single(struct ib_device *dev,
 					   __u64 addr, size_t size,
 					  enum dma_data_direction direction)
 {
-	ib_dma_unmap_single(dev, addr, size, direction);
+	dma_unmap_single(dev->dma_device, addr, size, direction);
 }
 
 #define KIBLND_UNMAP_ADDR_SET(p, m, a)  do {} while (0)
@@ -949,26 +949,26 @@ static inline int kiblnd_dma_map_sg(struct ib_device *dev,
 				    struct scatterlist *sg, int nents,
 				    enum dma_data_direction direction)
 {
-	return ib_dma_map_sg(dev, sg, nents, direction);
+	return dma_map_sg(dev->dma_device, sg, nents, direction);
 }
 
 static inline void kiblnd_dma_unmap_sg(struct ib_device *dev,
 				       struct scatterlist *sg, int nents,
 				       enum dma_data_direction direction)
 {
-	ib_dma_unmap_sg(dev, sg, nents, direction);
+	dma_unmap_sg(dev->dma_device, sg, nents, direction);
 }
 
 static inline __u64 kiblnd_sg_dma_address(struct ib_device *dev,
 					  struct scatterlist *sg)
 {
-	return ib_sg_dma_address(dev, sg);
+	return sg_dma_address(sg);
 }
 
 static inline unsigned int kiblnd_sg_dma_len(struct ib_device *dev,
 					     struct scatterlist *sg)
 {
-	return ib_sg_dma_len(dev, sg);
+	return sg_dma_len(sg);
 }
 
 /* XXX We use KIBLND_CONN_PARAM(e) as writable buffer, it's not strictly */
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 663a28f37570..de7e13e31b57 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2873,229 +2873,6 @@ static inline int ib_req_ncomp_notif(struct ib_cq *cq, int wc_cnt)
 }
 
 /**
- * ib_dma_mapping_error - check a DMA addr for error
- * @dev: The device for which the dma_addr was created
- * @dma_addr: The DMA address to check
- */
-static inline int ib_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
-{
-	return dma_mapping_error(dev->dma_device, dma_addr);
-}
-
-/**
- * ib_dma_map_single - Map a kernel virtual address to DMA address
- * @dev: The device for which the dma_addr is to be created
- * @cpu_addr: The kernel virtual address
- * @size: The size of the region in bytes
- * @direction: The direction of the DMA
- */
-static inline u64 ib_dma_map_single(struct ib_device *dev,
-				    void *cpu_addr, size_t size,
-				    enum dma_data_direction direction)
-{
-	return dma_map_single(dev->dma_device, cpu_addr, size, direction);
-}
-
-/**
- * ib_dma_unmap_single - Destroy a mapping created by ib_dma_map_single()
- * @dev: The device for which the DMA address was created
- * @addr: The DMA address
- * @size: The size of the region in bytes
- * @direction: The direction of the DMA
- */
-static inline void ib_dma_unmap_single(struct ib_device *dev,
-				       u64 addr, size_t size,
-				       enum dma_data_direction direction)
-{
-	dma_unmap_single(dev->dma_device, addr, size, direction);
-}
-
-static inline u64 ib_dma_map_single_attrs(struct ib_device *dev,
-					  void *cpu_addr, size_t size,
-					  enum dma_data_direction direction,
-					  unsigned long dma_attrs)
-{
-	return dma_map_single_attrs(dev->dma_device, cpu_addr, size,
-				    direction, dma_attrs);
-}
-
-static inline void ib_dma_unmap_single_attrs(struct ib_device *dev,
-					     u64 addr, size_t size,
-					     enum dma_data_direction direction,
-					     unsigned long dma_attrs)
-{
-	return dma_unmap_single_attrs(dev->dma_device, addr, size,
-				      direction, dma_attrs);
-}
-
-/**
- * ib_dma_map_page - Map a physical page to DMA address
- * @dev: The device for which the dma_addr is to be created
- * @page: The page to be mapped
- * @offset: The offset within the page
- * @size: The size of the region in bytes
- * @direction: The direction of the DMA
- */
-static inline u64 ib_dma_map_page(struct ib_device *dev,
-				  struct page *page,
-				  unsigned long offset,
-				  size_t size,
-					 enum dma_data_direction direction)
-{
-	return dma_map_page(dev->dma_device, page, offset, size, direction);
-}
-
-/**
- * ib_dma_unmap_page - Destroy a mapping created by ib_dma_map_page()
- * @dev: The device for which the DMA address was created
- * @addr: The DMA address
- * @size: The size of the region in bytes
- * @direction: The direction of the DMA
- */
-static inline void ib_dma_unmap_page(struct ib_device *dev,
-				     u64 addr, size_t size,
-				     enum dma_data_direction direction)
-{
-	dma_unmap_page(dev->dma_device, addr, size, direction);
-}
-
-/**
- * ib_dma_map_sg - Map a scatter/gather list to DMA addresses
- * @dev: The device for which the DMA addresses are to be created
- * @sg: The array of scatter/gather entries
- * @nents: The number of scatter/gather entries
- * @direction: The direction of the DMA
- */
-static inline int ib_dma_map_sg(struct ib_device *dev,
-				struct scatterlist *sg, int nents,
-				enum dma_data_direction direction)
-{
-	return dma_map_sg(dev->dma_device, sg, nents, direction);
-}
-
-/**
- * ib_dma_unmap_sg - Unmap a scatter/gather list of DMA addresses
- * @dev: The device for which the DMA addresses were created
- * @sg: The array of scatter/gather entries
- * @nents: The number of scatter/gather entries
- * @direction: The direction of the DMA
- */
-static inline void ib_dma_unmap_sg(struct ib_device *dev,
-				   struct scatterlist *sg, int nents,
-				   enum dma_data_direction direction)
-{
-	dma_unmap_sg(dev->dma_device, sg, nents, direction);
-}
-
-static inline int ib_dma_map_sg_attrs(struct ib_device *dev,
-				      struct scatterlist *sg, int nents,
-				      enum dma_data_direction direction,
-				      unsigned long dma_attrs)
-{
-	return dma_map_sg_attrs(dev->dma_device, sg, nents, direction,
-				dma_attrs);
-}
-
-static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
-					 struct scatterlist *sg, int nents,
-					 enum dma_data_direction direction,
-					 unsigned long dma_attrs)
-{
-	dma_unmap_sg_attrs(dev->dma_device, sg, nents, direction, dma_attrs);
-}
-/**
- * ib_sg_dma_address - Return the DMA address from a scatter/gather entry
- * @dev: The device for which the DMA addresses were created
- * @sg: The scatter/gather entry
- *
- * Note: this function is obsolete. To do: change all occurrences of
- * ib_sg_dma_address() into sg_dma_address().
- */
-static inline u64 ib_sg_dma_address(struct ib_device *dev,
-				    struct scatterlist *sg)
-{
-	return sg_dma_address(sg);
-}
-
-/**
- * ib_sg_dma_len - Return the DMA length from a scatter/gather entry
- * @dev: The device for which the DMA addresses were created
- * @sg: The scatter/gather entry
- *
- * Note: this function is obsolete. To do: change all occurrences of
- * ib_sg_dma_len() into sg_dma_len().
- */
-static inline unsigned int ib_sg_dma_len(struct ib_device *dev,
-					 struct scatterlist *sg)
-{
-	return sg_dma_len(sg);
-}
-
-/**
- * ib_dma_sync_single_for_cpu - Prepare DMA region to be accessed by CPU
- * @dev: The device for which the DMA address was created
- * @addr: The DMA address
- * @size: The size of the region in bytes
- * @dir: The direction of the DMA
- */
-static inline void ib_dma_sync_single_for_cpu(struct ib_device *dev,
-					      u64 addr,
-					      size_t size,
-					      enum dma_data_direction dir)
-{
-	dma_sync_single_for_cpu(dev->dma_device, addr, size, dir);
-}
-
-/**
- * ib_dma_sync_single_for_device - Prepare DMA region to be accessed by device
- * @dev: The device for which the DMA address was created
- * @addr: The DMA address
- * @size: The size of the region in bytes
- * @dir: The direction of the DMA
- */
-static inline void ib_dma_sync_single_for_device(struct ib_device *dev,
-						 u64 addr,
-						 size_t size,
-						 enum dma_data_direction dir)
-{
-	dma_sync_single_for_device(dev->dma_device, addr, size, dir);
-}
-
-/**
- * ib_dma_alloc_coherent - Allocate memory and map it for DMA
- * @dev: The device for which the DMA address is requested
- * @size: The size of the region to allocate in bytes
- * @dma_handle: A pointer for returning the DMA address of the region
- * @flag: memory allocator flags
- */
-static inline void *ib_dma_alloc_coherent(struct ib_device *dev,
-					   size_t size,
-					   u64 *dma_handle,
-					   gfp_t flag)
-{
-	dma_addr_t handle;
-	void *ret;
-
-	ret = dma_alloc_coherent(dev->dma_device, size, &handle, flag);
-	*dma_handle = handle;
-	return ret;
-}
-
-/**
- * ib_dma_free_coherent - Free memory allocated by ib_dma_alloc_coherent()
- * @dev: The device for which the DMA addresses were allocated
- * @size: The size of the region
- * @cpu_addr: the address returned by ib_dma_alloc_coherent()
- * @dma_handle: the DMA address returned by ib_dma_alloc_coherent()
- */
-static inline void ib_dma_free_coherent(struct ib_device *dev,
-					size_t size, void *cpu_addr,
-					u64 dma_handle)
-{
-	dma_free_coherent(dev->dma_device, size, cpu_addr, dma_handle);
-}
-
-/**
  * ib_dereg_mr - Deregisters a memory region and removes it from the
  *   HCA translation table.
  * @mr: The memory region to deregister.
diff --git a/net/9p/trans_rdma.c b/net/9p/trans_rdma.c
index 553ed4ecb6a0..38b45a37ed18 100644
--- a/net/9p/trans_rdma.c
+++ b/net/9p/trans_rdma.c
@@ -294,7 +294,7 @@ recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	int16_t tag;
 
 	req = NULL;
-	ib_dma_unmap_single(rdma->cm_id->device, c->busa, client->msize,
+	dma_unmap_single(rdma->cm_id->device->dma_device, c->busa, client->msize,
 							 DMA_FROM_DEVICE);
 
 	if (wc->status != IB_WC_SUCCESS)
@@ -339,7 +339,7 @@ send_done(struct ib_cq *cq, struct ib_wc *wc)
 	struct p9_rdma_context *c =
 		container_of(wc->wr_cqe, struct p9_rdma_context, cqe);
 
-	ib_dma_unmap_single(rdma->cm_id->device,
+	dma_unmap_single(rdma->cm_id->device->dma_device,
 			    c->busa, c->req->tc->size,
 			    DMA_TO_DEVICE);
 	up(&rdma->sq_sem);
@@ -379,10 +379,10 @@ post_recv(struct p9_client *client, struct p9_rdma_context *c)
 	struct ib_recv_wr wr, *bad_wr;
 	struct ib_sge sge;
 
-	c->busa = ib_dma_map_single(rdma->cm_id->device,
+	c->busa = dma_map_single(rdma->cm_id->device->dma_device,
 				    c->rc->sdata, client->msize,
 				    DMA_FROM_DEVICE);
-	if (ib_dma_mapping_error(rdma->cm_id->device, c->busa))
+	if (dma_mapping_error(rdma->cm_id->device->dma_device, c->busa))
 		goto error;
 
 	c->cqe.done = recv_done;
@@ -469,10 +469,10 @@ static int rdma_request(struct p9_client *client, struct p9_req_t *req)
 	}
 	c->req = req;
 
-	c->busa = ib_dma_map_single(rdma->cm_id->device,
+	c->busa = dma_map_single(rdma->cm_id->device->dma_device,
 				    c->req->tc->sdata, c->req->tc->size,
 				    DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(rdma->cm_id->device, c->busa)) {
+	if (dma_mapping_error(rdma->cm_id->device->dma_device, c->busa)) {
 		err = -EIO;
 		goto send_error;
 	}
diff --git a/net/rds/ib.h b/net/rds/ib.h
index 45ac8e8e58f4..b8d815bb5e90 100644
--- a/net/rds/ib.h
+++ b/net/rds/ib.h
@@ -134,7 +134,7 @@ struct rds_ib_connection {
 	struct rds_ib_work_ring	i_send_ring;
 	struct rm_data_op	*i_data_op;
 	struct rds_header	*i_send_hdrs;
-	u64			i_send_hdrs_dma;
+	dma_addr_t		i_send_hdrs_dma;
 	struct rds_ib_send_work *i_sends;
 	atomic_t		i_signaled_sends;
 
@@ -288,9 +288,9 @@ static inline void rds_ib_dma_sync_sg_for_cpu(struct ib_device *dev,
 	unsigned int i;
 
 	for_each_sg(sglist, sg, sg_dma_len, i) {
-		ib_dma_sync_single_for_cpu(dev,
-				ib_sg_dma_address(dev, sg),
-				ib_sg_dma_len(dev, sg),
+		dma_sync_single_for_cpu(dev->dma_device,
+				sg_dma_address(sg),
+				sg_dma_len(sg),
 				direction);
 	}
 }
@@ -305,9 +305,9 @@ static inline void rds_ib_dma_sync_sg_for_device(struct ib_device *dev,
 	unsigned int i;
 
 	for_each_sg(sglist, sg, sg_dma_len, i) {
-		ib_dma_sync_single_for_device(dev,
-				ib_sg_dma_address(dev, sg),
-				ib_sg_dma_len(dev, sg),
+		dma_sync_single_for_device(dev->dma_device,
+				sg_dma_address(sg),
+				sg_dma_len(sg),
 				direction);
 	}
 }
diff --git a/net/rds/ib_cm.c b/net/rds/ib_cm.c
index 5b2ab95afa07..00ed8b678c61 100644
--- a/net/rds/ib_cm.c
+++ b/net/rds/ib_cm.c
@@ -456,7 +456,7 @@ static int rds_ib_setup_qp(struct rds_connection *conn)
 		goto out;
 	}
 
-	ic->i_send_hdrs = ib_dma_alloc_coherent(dev,
+	ic->i_send_hdrs = dma_alloc_coherent(dev->dma_device,
 					   ic->i_send_ring.w_nr *
 						sizeof(struct rds_header),
 					   &ic->i_send_hdrs_dma, GFP_KERNEL);
@@ -466,7 +466,7 @@ static int rds_ib_setup_qp(struct rds_connection *conn)
 		goto out;
 	}
 
-	ic->i_recv_hdrs = ib_dma_alloc_coherent(dev,
+	ic->i_recv_hdrs = dma_alloc_coherent(dev->dma_device,
 					   ic->i_recv_ring.w_nr *
 						sizeof(struct rds_header),
 					   &ic->i_recv_hdrs_dma, GFP_KERNEL);
@@ -476,7 +476,7 @@ static int rds_ib_setup_qp(struct rds_connection *conn)
 		goto out;
 	}
 
-	ic->i_ack = ib_dma_alloc_coherent(dev, sizeof(struct rds_header),
+	ic->i_ack = dma_alloc_coherent(dev->dma_device, sizeof(struct rds_header),
 				       &ic->i_ack_dma, GFP_KERNEL);
 	if (!ic->i_ack) {
 		ret = -ENOMEM;
@@ -781,21 +781,21 @@ void rds_ib_conn_path_shutdown(struct rds_conn_path *cp)
 
 		/* then free the resources that ib callbacks use */
 		if (ic->i_send_hdrs)
-			ib_dma_free_coherent(dev,
+			dma_free_coherent(dev->dma_device,
 					   ic->i_send_ring.w_nr *
 						sizeof(struct rds_header),
 					   ic->i_send_hdrs,
 					   ic->i_send_hdrs_dma);
 
 		if (ic->i_recv_hdrs)
-			ib_dma_free_coherent(dev,
+			dma_free_coherent(dev->dma_device,
 					   ic->i_recv_ring.w_nr *
 						sizeof(struct rds_header),
 					   ic->i_recv_hdrs,
 					   ic->i_recv_hdrs_dma);
 
 		if (ic->i_ack)
-			ib_dma_free_coherent(dev, sizeof(struct rds_header),
+			dma_free_coherent(dev->dma_device, sizeof(struct rds_header),
 					     ic->i_ack, ic->i_ack_dma);
 
 		if (ic->i_sends)
diff --git a/net/rds/ib_fmr.c b/net/rds/ib_fmr.c
index 4fe8f4fec4ee..150e8f756bd9 100644
--- a/net/rds/ib_fmr.c
+++ b/net/rds/ib_fmr.c
@@ -100,7 +100,7 @@ int rds_ib_map_fmr(struct rds_ib_device *rds_ibdev, struct rds_ib_mr *ibmr,
 	int i, j;
 	int ret;
 
-	sg_dma_len = ib_dma_map_sg(dev, sg, nents, DMA_BIDIRECTIONAL);
+	sg_dma_len = dma_map_sg(dev->dma_device, sg, nents, DMA_BIDIRECTIONAL);
 	if (unlikely(!sg_dma_len)) {
 		pr_warn("RDS/IB: %s failed!\n", __func__);
 		return -EBUSY;
@@ -110,8 +110,8 @@ int rds_ib_map_fmr(struct rds_ib_device *rds_ibdev, struct rds_ib_mr *ibmr,
 	page_cnt = 0;
 
 	for (i = 0; i < sg_dma_len; ++i) {
-		unsigned int dma_len = ib_sg_dma_len(dev, &scat[i]);
-		u64 dma_addr = ib_sg_dma_address(dev, &scat[i]);
+		unsigned int dma_len = sg_dma_len(&scat[i]);
+		u64 dma_addr = sg_dma_address(&scat[i]);
 
 		if (dma_addr & ~PAGE_MASK) {
 			if (i > 0)
@@ -140,8 +140,8 @@ int rds_ib_map_fmr(struct rds_ib_device *rds_ibdev, struct rds_ib_mr *ibmr,
 
 	page_cnt = 0;
 	for (i = 0; i < sg_dma_len; ++i) {
-		unsigned int dma_len = ib_sg_dma_len(dev, &scat[i]);
-		u64 dma_addr = ib_sg_dma_address(dev, &scat[i]);
+		unsigned int dma_len = sg_dma_len(&scat[i]);
+		u64 dma_addr = sg_dma_address(&scat[i]);
 
 		for (j = 0; j < dma_len; j += PAGE_SIZE)
 			dma_pages[page_cnt++] =
diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
index d921adc62765..e30a8c6f003f 100644
--- a/net/rds/ib_frmr.c
+++ b/net/rds/ib_frmr.c
@@ -169,7 +169,7 @@ static int rds_ib_map_frmr(struct rds_ib_device *rds_ibdev,
 	ibmr->sg_dma_len = 0;
 	frmr->sg_byte_len = 0;
 	WARN_ON(ibmr->sg_dma_len);
-	ibmr->sg_dma_len = ib_dma_map_sg(dev, ibmr->sg, ibmr->sg_len,
+	ibmr->sg_dma_len = dma_map_sg(dev->dma_device, ibmr->sg, ibmr->sg_len,
 					 DMA_BIDIRECTIONAL);
 	if (unlikely(!ibmr->sg_dma_len)) {
 		pr_warn("RDS/IB: %s failed!\n", __func__);
@@ -182,8 +182,8 @@ static int rds_ib_map_frmr(struct rds_ib_device *rds_ibdev,
 
 	ret = -EINVAL;
 	for (i = 0; i < ibmr->sg_dma_len; ++i) {
-		unsigned int dma_len = ib_sg_dma_len(dev, &ibmr->sg[i]);
-		u64 dma_addr = ib_sg_dma_address(dev, &ibmr->sg[i]);
+		unsigned int dma_len = sg_dma_len(&ibmr->sg[i]);
+		u64 dma_addr = sg_dma_address(&ibmr->sg[i]);
 
 		frmr->sg_byte_len += dma_len;
 		if (dma_addr & ~PAGE_MASK) {
@@ -221,7 +221,7 @@ static int rds_ib_map_frmr(struct rds_ib_device *rds_ibdev,
 	return ret;
 
 out_unmap:
-	ib_dma_unmap_sg(rds_ibdev->dev, ibmr->sg, ibmr->sg_len,
+	dma_unmap_sg(rds_ibdev->dev->dma_device, ibmr->sg, ibmr->sg_len,
 			DMA_BIDIRECTIONAL);
 	ibmr->sg_dma_len = 0;
 	return ret;
diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
index 977f69886c00..b8276915ab3d 100644
--- a/net/rds/ib_rdma.c
+++ b/net/rds/ib_rdma.c
@@ -221,11 +221,11 @@ void rds_ib_sync_mr(void *trans_private, int direction)
 
 	switch (direction) {
 	case DMA_FROM_DEVICE:
-		ib_dma_sync_sg_for_cpu(rds_ibdev->dev, ibmr->sg,
+		dma_sync_sg_for_cpu(rds_ibdev->dev->dma_device, ibmr->sg,
 			ibmr->sg_dma_len, DMA_BIDIRECTIONAL);
 		break;
 	case DMA_TO_DEVICE:
-		ib_dma_sync_sg_for_device(rds_ibdev->dev, ibmr->sg,
+		dma_sync_sg_for_device(rds_ibdev->dev->dma_device, ibmr->sg,
 			ibmr->sg_dma_len, DMA_BIDIRECTIONAL);
 		break;
 	}
@@ -236,7 +236,7 @@ void __rds_ib_teardown_mr(struct rds_ib_mr *ibmr)
 	struct rds_ib_device *rds_ibdev = ibmr->device;
 
 	if (ibmr->sg_dma_len) {
-		ib_dma_unmap_sg(rds_ibdev->dev,
+		dma_unmap_sg(rds_ibdev->dev->dma_device,
 				ibmr->sg, ibmr->sg_len,
 				DMA_BIDIRECTIONAL);
 		ibmr->sg_dma_len = 0;
diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
index 606a11f681d2..374a4e038b24 100644
--- a/net/rds/ib_recv.c
+++ b/net/rds/ib_recv.c
@@ -225,7 +225,7 @@ static void rds_ib_recv_clear_one(struct rds_ib_connection *ic,
 		recv->r_ibinc = NULL;
 	}
 	if (recv->r_frag) {
-		ib_dma_unmap_sg(ic->i_cm_id->device, &recv->r_frag->f_sg, 1, DMA_FROM_DEVICE);
+		dma_unmap_sg(ic->i_cm_id->device->dma_device, &recv->r_frag->f_sg, 1, DMA_FROM_DEVICE);
 		rds_ib_frag_free(ic, recv->r_frag);
 		recv->r_frag = NULL;
 	}
@@ -331,7 +331,7 @@ static int rds_ib_recv_refill_one(struct rds_connection *conn,
 	if (!recv->r_frag)
 		goto out;
 
-	ret = ib_dma_map_sg(ic->i_cm_id->device, &recv->r_frag->f_sg,
+	ret = dma_map_sg(ic->i_cm_id->device->dma_device, &recv->r_frag->f_sg,
 			    1, DMA_FROM_DEVICE);
 	WARN_ON(ret != 1);
 
@@ -340,8 +340,8 @@ static int rds_ib_recv_refill_one(struct rds_connection *conn,
 	sge->length = sizeof(struct rds_header);
 
 	sge = &recv->r_sge[1];
-	sge->addr = ib_sg_dma_address(ic->i_cm_id->device, &recv->r_frag->f_sg);
-	sge->length = ib_sg_dma_len(ic->i_cm_id->device, &recv->r_frag->f_sg);
+	sge->addr = sg_dma_address(&recv->r_frag->f_sg);
+	sge->length = sg_dma_len(&recv->r_frag->f_sg);
 
 	ret = 0;
 out:
@@ -408,9 +408,7 @@ void rds_ib_recv_refill(struct rds_connection *conn, int prefill, gfp_t gfp)
 		ret = ib_post_recv(ic->i_cm_id->qp, &recv->r_wr, &failed_wr);
 		rdsdebug("recv %p ibinc %p page %p addr %lu ret %d\n", recv,
 			 recv->r_ibinc, sg_page(&recv->r_frag->f_sg),
-			 (long) ib_sg_dma_address(
-				ic->i_cm_id->device,
-				&recv->r_frag->f_sg),
+			 (long) sg_dma_address(&recv->r_frag->f_sg),
 			ret);
 		if (ret) {
 			rds_ib_conn_error(conn, "recv post on "
@@ -968,7 +966,7 @@ void rds_ib_recv_cqe_handler(struct rds_ib_connection *ic,
 
 	rds_ib_stats_inc(s_ib_rx_cq_event);
 	recv = &ic->i_recvs[rds_ib_ring_oldest(&ic->i_recv_ring)];
-	ib_dma_unmap_sg(ic->i_cm_id->device, &recv->r_frag->f_sg, 1,
+	dma_unmap_sg(ic->i_cm_id->device->dma_device, &recv->r_frag->f_sg, 1,
 			DMA_FROM_DEVICE);
 
 	/* Also process recvs in connecting state because it is possible
diff --git a/net/rds/ib_send.c b/net/rds/ib_send.c
index 84d90c97332f..fd972b32bbd4 100644
--- a/net/rds/ib_send.c
+++ b/net/rds/ib_send.c
@@ -74,7 +74,7 @@ static void rds_ib_send_unmap_data(struct rds_ib_connection *ic,
 				   int wc_status)
 {
 	if (op->op_nents)
-		ib_dma_unmap_sg(ic->i_cm_id->device,
+		dma_unmap_sg(ic->i_cm_id->device->dma_device,
 				op->op_sg, op->op_nents,
 				DMA_TO_DEVICE);
 }
@@ -84,7 +84,7 @@ static void rds_ib_send_unmap_rdma(struct rds_ib_connection *ic,
 				   int wc_status)
 {
 	if (op->op_mapped) {
-		ib_dma_unmap_sg(ic->i_cm_id->device,
+		dma_unmap_sg(ic->i_cm_id->device->dma_device,
 				op->op_sg, op->op_nents,
 				op->op_write ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
 		op->op_mapped = 0;
@@ -125,7 +125,7 @@ static void rds_ib_send_unmap_atomic(struct rds_ib_connection *ic,
 {
 	/* unmap atomic recvbuf */
 	if (op->op_mapped) {
-		ib_dma_unmap_sg(ic->i_cm_id->device, op->op_sg, 1,
+		dma_unmap_sg(ic->i_cm_id->device->dma_device, op->op_sg, 1,
 				DMA_FROM_DEVICE);
 		op->op_mapped = 0;
 	}
@@ -546,7 +546,7 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
 	/* map the message the first time we see it */
 	if (!ic->i_data_op) {
 		if (rm->data.op_nents) {
-			rm->data.op_count = ib_dma_map_sg(dev,
+			rm->data.op_count = dma_map_sg(dev->dma_device,
 							  rm->data.op_sg,
 							  rm->data.op_nents,
 							  DMA_TO_DEVICE);
@@ -640,16 +640,16 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
 		if (i < work_alloc
 		    && scat != &rm->data.op_sg[rm->data.op_count]) {
 			len = min(RDS_FRAG_SIZE,
-				ib_sg_dma_len(dev, scat) - rm->data.op_dmaoff);
+				sg_dma_len(scat) - rm->data.op_dmaoff);
 			send->s_wr.num_sge = 2;
 
-			send->s_sge[1].addr = ib_sg_dma_address(dev, scat);
+			send->s_sge[1].addr = sg_dma_address(scat);
 			send->s_sge[1].addr += rm->data.op_dmaoff;
 			send->s_sge[1].length = len;
 
 			bytes_sent += len;
 			rm->data.op_dmaoff += len;
-			if (rm->data.op_dmaoff == ib_sg_dma_len(dev, scat)) {
+			if (rm->data.op_dmaoff == sg_dma_len(scat)) {
 				scat++;
 				rm->data.op_dmasg++;
 				rm->data.op_dmaoff = 0;
@@ -797,7 +797,7 @@ int rds_ib_xmit_atomic(struct rds_connection *conn, struct rm_atomic_op *op)
 	rds_message_addref(container_of(send->s_op, struct rds_message, atomic));
 
 	/* map 8 byte retval buffer to the device */
-	ret = ib_dma_map_sg(ic->i_cm_id->device, op->op_sg, 1, DMA_FROM_DEVICE);
+	ret = dma_map_sg(ic->i_cm_id->device->dma_device, op->op_sg, 1, DMA_FROM_DEVICE);
 	rdsdebug("ic %p mapping atomic op %p. mapped %d pg\n", ic, op, ret);
 	if (ret != 1) {
 		rds_ib_ring_unalloc(&ic->i_send_ring, work_alloc);
@@ -807,8 +807,8 @@ int rds_ib_xmit_atomic(struct rds_connection *conn, struct rm_atomic_op *op)
 	}
 
 	/* Convert our struct scatterlist to struct ib_sge */
-	send->s_sge[0].addr = ib_sg_dma_address(ic->i_cm_id->device, op->op_sg);
-	send->s_sge[0].length = ib_sg_dma_len(ic->i_cm_id->device, op->op_sg);
+	send->s_sge[0].addr = sg_dma_address(op->op_sg);
+	send->s_sge[0].length = sg_dma_len(op->op_sg);
 	send->s_sge[0].lkey = ic->i_pd->local_dma_lkey;
 
 	rdsdebug("rva %Lx rpa %Lx len %u\n", op->op_remote_addr,
@@ -861,7 +861,7 @@ int rds_ib_xmit_rdma(struct rds_connection *conn, struct rm_rdma_op *op)
 
 	/* map the op the first time we see it */
 	if (!op->op_mapped) {
-		op->op_count = ib_dma_map_sg(ic->i_cm_id->device,
+		op->op_count = dma_map_sg(ic->i_cm_id->device->dma_device,
 					     op->op_sg, op->op_nents, (op->op_write) ?
 					     DMA_TO_DEVICE : DMA_FROM_DEVICE);
 		rdsdebug("ic %p mapping op %p: %d\n", ic, op, op->op_count);
@@ -920,9 +920,9 @@ int rds_ib_xmit_rdma(struct rds_connection *conn, struct rm_rdma_op *op)
 
 		for (j = 0; j < send->s_rdma_wr.wr.num_sge &&
 		     scat != &op->op_sg[op->op_count]; j++) {
-			len = ib_sg_dma_len(ic->i_cm_id->device, scat);
+			len = sg_dma_len(scat);
 			send->s_sge[j].addr =
-				 ib_sg_dma_address(ic->i_cm_id->device, scat);
+				 sg_dma_address(scat);
 			send->s_sge[j].length = len;
 			send->s_sge[j].lkey = ic->i_pd->local_dma_lkey;
 
diff --git a/net/sunrpc/xprtrdma/fmr_ops.c b/net/sunrpc/xprtrdma/fmr_ops.c
index 1ebb09e1ac4f..e47bf78ec478 100644
--- a/net/sunrpc/xprtrdma/fmr_ops.c
+++ b/net/sunrpc/xprtrdma/fmr_ops.c
@@ -136,7 +136,7 @@ fmr_op_recover_mr(struct rpcrdma_mw *mw)
 	rc = __fmr_unmap(mw);
 
 	/* ORDER: then DMA unmap */
-	ib_dma_unmap_sg(r_xprt->rx_ia.ri_device,
+	dma_unmap_sg(r_xprt->rx_ia.ri_device->dma_device,
 			mw->mw_sg, mw->mw_nents, mw->mw_dir);
 	if (rc)
 		goto out_release;
@@ -218,7 +218,7 @@ fmr_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
 	if (i == 0)
 		goto out_dmamap_err;
 
-	if (!ib_dma_map_sg(r_xprt->rx_ia.ri_device,
+	if (!dma_map_sg(r_xprt->rx_ia.ri_device->dma_device,
 			   mw->mw_sg, mw->mw_nents, mw->mw_dir))
 		goto out_dmamap_err;
 
@@ -284,7 +284,7 @@ fmr_op_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
 	list_for_each_entry_safe(mw, tmp, &req->rl_registered, mw_list) {
 		list_del_init(&mw->mw_list);
 		list_del_init(&mw->fmr.fm_mr->list);
-		ib_dma_unmap_sg(r_xprt->rx_ia.ri_device,
+		dma_unmap_sg(r_xprt->rx_ia.ri_device->dma_device,
 				mw->mw_sg, mw->mw_nents, mw->mw_dir);
 		rpcrdma_put_mw(r_xprt, mw);
 	}
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 26b26beef2d4..48c3e3bcf60c 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -186,7 +186,7 @@ frwr_op_recover_mr(struct rpcrdma_mw *mw)
 
 	rc = __frwr_reset_mr(ia, mw);
 	if (state != FRMR_FLUSHED_LI)
-		ib_dma_unmap_sg(ia->ri_device,
+		dma_unmap_sg(ia->ri_device->dma_device,
 				mw->mw_sg, mw->mw_nents, mw->mw_dir);
 	if (rc)
 		goto out_release;
@@ -394,7 +394,7 @@ frwr_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
 	if (i == 0)
 		goto out_dmamap_err;
 
-	dma_nents = ib_dma_map_sg(ia->ri_device,
+	dma_nents = dma_map_sg(ia->ri_device->dma_device,
 				  mw->mw_sg, mw->mw_nents, mw->mw_dir);
 	if (!dma_nents)
 		goto out_dmamap_err;
@@ -544,7 +544,7 @@ frwr_op_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
 		dprintk("RPC:       %s: unmapping frmr %p\n",
 			__func__, &mw->frmr);
 		list_del_init(&mw->mw_list);
-		ib_dma_unmap_sg(ia->ri_device,
+		dma_unmap_sg(ia->ri_device->dma_device,
 				mw->mw_sg, mw->mw_nents, mw->mw_dir);
 		rpcrdma_put_mw(r_xprt, mw);
 	}
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index d987c2d3dd6e..33decc4cbff8 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -476,7 +476,7 @@ rpcrdma_prepare_hdr_sge(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
 	}
 	sge->length = len;
 
-	ib_dma_sync_single_for_device(ia->ri_device, sge->addr,
+	dma_sync_single_for_device(ia->ri_device->dma_device, sge->addr,
 				      sge->length, DMA_TO_DEVICE);
 	req->rl_send_wr.num_sge++;
 	return true;
@@ -505,7 +505,7 @@ rpcrdma_prepare_msg_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
 	sge[sge_no].addr = rdmab_addr(rb);
 	sge[sge_no].length = xdr->head[0].iov_len;
 	sge[sge_no].lkey = rdmab_lkey(rb);
-	ib_dma_sync_single_for_device(device, sge[sge_no].addr,
+	dma_sync_single_for_device(device->dma_device, sge[sge_no].addr,
 				      sge[sge_no].length, DMA_TO_DEVICE);
 
 	/* If there is a Read chunk, the page list is being handled
@@ -547,10 +547,10 @@ rpcrdma_prepare_msg_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
 				goto out_mapping_overflow;
 
 			len = min_t(u32, PAGE_SIZE - page_base, remaining);
-			sge[sge_no].addr = ib_dma_map_page(device, *ppages,
+			sge[sge_no].addr = dma_map_page(device->dma_device, *ppages,
 							   page_base, len,
 							   DMA_TO_DEVICE);
-			if (ib_dma_mapping_error(device, sge[sge_no].addr))
+			if (dma_mapping_error(device->dma_device, sge[sge_no].addr))
 				goto out_mapping_err;
 			sge[sge_no].length = len;
 			sge[sge_no].lkey = lkey;
@@ -574,10 +574,10 @@ rpcrdma_prepare_msg_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
 
 map_tail:
 		sge_no++;
-		sge[sge_no].addr = ib_dma_map_page(device, page,
+		sge[sge_no].addr = dma_map_page(device->dma_device, page,
 						   page_base, len,
 						   DMA_TO_DEVICE);
-		if (ib_dma_mapping_error(device, sge[sge_no].addr))
+		if (dma_mapping_error(device->dma_device, sge[sge_no].addr))
 			goto out_mapping_err;
 		sge[sge_no].length = len;
 		sge[sge_no].lkey = lkey;
@@ -628,7 +628,7 @@ rpcrdma_unmap_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req)
 
 	sge = &req->rl_send_sge[2];
 	for (count = req->rl_mapped_sges; count--; sge++)
-		ib_dma_unmap_page(device, sge->addr, sge->length,
+		dma_unmap_page(device->dma_device, sge->addr, sge->length,
 				  DMA_TO_DEVICE);
 	req->rl_mapped_sges = 0;
 }
diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
index 20027f8de129..19f54b90a5af 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
@@ -123,9 +123,9 @@ static int svc_rdma_bc_sendto(struct svcxprt_rdma *rdma,
 	ctxt->sge[0].lkey = rdma->sc_pd->local_dma_lkey;
 	ctxt->sge[0].length = sndbuf->len;
 	ctxt->sge[0].addr =
-	    ib_dma_map_page(rdma->sc_cm_id->device, ctxt->pages[0], 0,
+	    dma_map_page(rdma->sc_cm_id->device->dma_device, ctxt->pages[0], 0,
 			    sndbuf->len, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(rdma->sc_cm_id->device, ctxt->sge[0].addr)) {
+	if (dma_mapping_error(rdma->sc_cm_id->device->dma_device, ctxt->sge[0].addr)) {
 		ret = -EIO;
 		goto out_unmap;
 	}
diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index ad1df979b3f0..f1695f66b759 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -151,11 +151,11 @@ int rdma_read_chunk_lcl(struct svcxprt_rdma *xprt,
 		rqstp->rq_respages = &rqstp->rq_arg.pages[pg_no+1];
 		rqstp->rq_next_page = rqstp->rq_respages + 1;
 		ctxt->sge[pno].addr =
-			ib_dma_map_page(xprt->sc_cm_id->device,
+			dma_map_page(xprt->sc_cm_id->device->dma_device,
 					head->arg.pages[pg_no], pg_off,
 					PAGE_SIZE - pg_off,
 					DMA_FROM_DEVICE);
-		ret = ib_dma_mapping_error(xprt->sc_cm_id->device,
+		ret = dma_mapping_error(xprt->sc_cm_id->device->dma_device,
 					   ctxt->sge[pno].addr);
 		if (ret)
 			goto err;
@@ -271,7 +271,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 	else
 		clear_bit(RDMACTXT_F_LAST_CTXT, &ctxt->flags);
 
-	dma_nents = ib_dma_map_sg(xprt->sc_cm_id->device,
+	dma_nents = dma_map_sg(xprt->sc_cm_id->device->dma_device,
 				  frmr->sg, frmr->sg_nents,
 				  frmr->direction);
 	if (!dma_nents) {
@@ -348,7 +348,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 	atomic_inc(&rdma_stat_read);
 	return ret;
  err:
-	ib_dma_unmap_sg(xprt->sc_cm_id->device,
+	dma_unmap_sg(xprt->sc_cm_id->device->dma_device,
 			frmr->sg, frmr->sg_nents, frmr->direction);
 	svc_rdma_put_context(ctxt, 0);
 	svc_rdma_put_frmr(xprt, frmr);
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index f5a91edcd233..1784a4e749b0 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -148,7 +148,7 @@ static dma_addr_t dma_map_xdr(struct svcxprt_rdma *xprt,
 			page = virt_to_page(xdr->tail[0].iov_base);
 		}
 	}
-	dma_addr = ib_dma_map_page(xprt->sc_cm_id->device, page, xdr_off,
+	dma_addr = dma_map_page(xprt->sc_cm_id->device->dma_device, page, xdr_off,
 				   min_t(size_t, PAGE_SIZE, len), dir);
 	return dma_addr;
 }
@@ -319,7 +319,7 @@ static int send_write(struct svcxprt_rdma *xprt, struct svc_rqst *rqstp,
 			dma_map_xdr(xprt, &rqstp->rq_res, xdr_off,
 				    sge_bytes, DMA_TO_DEVICE);
 		xdr_off += sge_bytes;
-		if (ib_dma_mapping_error(xprt->sc_cm_id->device,
+		if (dma_mapping_error(xprt->sc_cm_id->device->dma_device,
 					 sge[sge_no].addr))
 			goto err;
 		svc_rdma_count_mappings(xprt, ctxt);
@@ -528,9 +528,9 @@ static int send_reply(struct svcxprt_rdma *rdma,
 	ctxt->sge[0].lkey = rdma->sc_pd->local_dma_lkey;
 	ctxt->sge[0].length = svc_rdma_xdr_get_reply_hdr_len(rdma_resp);
 	ctxt->sge[0].addr =
-	    ib_dma_map_page(rdma->sc_cm_id->device, page, 0,
+	    dma_map_page(rdma->sc_cm_id->device->dma_device, page, 0,
 			    ctxt->sge[0].length, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(rdma->sc_cm_id->device, ctxt->sge[0].addr))
+	if (dma_mapping_error(rdma->sc_cm_id->device->dma_device, ctxt->sge[0].addr))
 		goto err;
 	svc_rdma_count_mappings(rdma, ctxt);
 
@@ -545,7 +545,7 @@ static int send_reply(struct svcxprt_rdma *rdma,
 			dma_map_xdr(rdma, &rqstp->rq_res, xdr_off,
 				    sge_bytes, DMA_TO_DEVICE);
 		xdr_off += sge_bytes;
-		if (ib_dma_mapping_error(rdma->sc_cm_id->device,
+		if (dma_mapping_error(rdma->sc_cm_id->device->dma_device,
 					 ctxt->sge[sge_no].addr))
 			goto err;
 		svc_rdma_count_mappings(rdma, ctxt);
@@ -723,9 +723,9 @@ void svc_rdma_send_error(struct svcxprt_rdma *xprt, struct rpcrdma_msg *rmsgp,
 	/* Prepare SGE for local address */
 	ctxt->sge[0].lkey = xprt->sc_pd->local_dma_lkey;
 	ctxt->sge[0].length = length;
-	ctxt->sge[0].addr = ib_dma_map_page(xprt->sc_cm_id->device,
+	ctxt->sge[0].addr = dma_map_page(xprt->sc_cm_id->device->dma_device,
 					    p, 0, length, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(xprt->sc_cm_id->device, ctxt->sge[0].addr)) {
+	if (dma_mapping_error(xprt->sc_cm_id->device->dma_device, ctxt->sge[0].addr)) {
 		dprintk("svcrdma: Error mapping buffer for protocol error\n");
 		svc_rdma_put_context(ctxt, 1);
 		return;
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 1334de2715c2..9f2cbf77ac9c 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -237,7 +237,7 @@ void svc_rdma_unmap_dma(struct svc_rdma_op_ctxt *ctxt)
 		 */
 		if (ctxt->sge[i].lkey == lkey) {
 			count++;
-			ib_dma_unmap_page(device,
+			dma_unmap_page(device->dma_device,
 					    ctxt->sge[i].addr,
 					    ctxt->sge[i].length,
 					    ctxt->direction);
@@ -603,10 +603,10 @@ int svc_rdma_post_recv(struct svcxprt_rdma *xprt, gfp_t flags)
 		if (!page)
 			goto err_put_ctxt;
 		ctxt->pages[sge_no] = page;
-		pa = ib_dma_map_page(xprt->sc_cm_id->device,
+		pa = dma_map_page(xprt->sc_cm_id->device->dma_device,
 				     page, 0, PAGE_SIZE,
 				     DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(xprt->sc_cm_id->device, pa))
+		if (dma_mapping_error(xprt->sc_cm_id->device->dma_device, pa))
 			goto err_put_ctxt;
 		svc_rdma_count_mappings(xprt, ctxt);
 		ctxt->sge[sge_no].addr = pa;
@@ -944,7 +944,7 @@ void svc_rdma_put_frmr(struct svcxprt_rdma *rdma,
 		       struct svc_rdma_fastreg_mr *frmr)
 {
 	if (frmr) {
-		ib_dma_unmap_sg(rdma->sc_cm_id->device,
+		dma_unmap_sg(rdma->sc_cm_id->device->dma_device,
 				frmr->sg, frmr->sg_nents, frmr->direction);
 		atomic_dec(&rdma->sc_dma_used);
 		spin_lock_bh(&rdma->sc_frmr_q_lock);
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index ec74289af7ec..2bd0e04b3ded 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -179,7 +179,7 @@ rpcrdma_wc_receive(struct ib_cq *cq, struct ib_wc *wc)
 	rep->rr_wc_flags = wc->wc_flags;
 	rep->rr_inv_rkey = wc->ex.invalidate_rkey;
 
-	ib_dma_sync_single_for_cpu(rep->rr_device,
+	dma_sync_single_for_cpu(rep->rr_device->dma_device,
 				   rdmab_addr(rep->rr_rdmabuf),
 				   rep->rr_len, DMA_FROM_DEVICE);
 
@@ -1250,11 +1250,11 @@ __rpcrdma_dma_map_regbuf(struct rpcrdma_ia *ia, struct rpcrdma_regbuf *rb)
 	if (rb->rg_direction == DMA_NONE)
 		return false;
 
-	rb->rg_iov.addr = ib_dma_map_single(ia->ri_device,
+	rb->rg_iov.addr = dma_map_single(ia->ri_device->dma_device,
 					    (void *)rb->rg_base,
 					    rdmab_length(rb),
 					    rb->rg_direction);
-	if (ib_dma_mapping_error(ia->ri_device, rdmab_addr(rb)))
+	if (dma_mapping_error(ia->ri_device->dma_device, rdmab_addr(rb)))
 		return false;
 
 	rb->rg_device = ia->ri_device;
@@ -1268,7 +1268,7 @@ rpcrdma_dma_unmap_regbuf(struct rpcrdma_regbuf *rb)
 	if (!rpcrdma_regbuf_is_mapped(rb))
 		return;
 
-	ib_dma_unmap_single(rb->rg_device, rdmab_addr(rb),
+	dma_unmap_single(rb->rg_device->dma_device, rdmab_addr(rb),
 			    rdmab_length(rb), rb->rg_direction);
 	rb->rg_device = NULL;
 }
-- 
2.11.0

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH, RFC 0/5] IB: Optimize DMA mapping
       [not found] ` <07c07529-4636-fafb-2598-7358d8a1460d-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
                     ` (4 preceding siblings ...)
  2016-12-08  1:12   ` [PATCH 5/5] treewide: Inline ib_dma_map_*() functions Bart Van Assche
@ 2016-12-08  6:48   ` Or Gerlitz
       [not found]     ` <CAJ3xEMi4HY9Ehp-V4rP5UieAk=GjAu0X4uEnP-yMDomcFDpHkA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  5 siblings, 1 reply; 36+ messages in thread
From: Or Gerlitz @ 2016-12-08  6:48 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, Christoph Hellwig, Sagi Grimberg,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Thu, Dec 8, 2016 at 3:10 AM, Bart Van Assche
<bart.vanassche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org> wrote:
> The DMA mapping operations are in the hot path so it is important that the
> overhead of these operations is as low as possible. There has been a reason
> in the past to have DMA mapping operations that are specific to the IB
> subsystem but that reason no longer exists today.

Can you elaborate on that a little further? I recall it was something
with the ipath and co. drivers and memory extensions on 32-bit systems?

> An additional benefit is that the size of HW and SW
> drivers that do not use DMA is reduced by switching to dma_noop_ops.

few words on that?

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH, RFC 0/5] IB: Optimize DMA mapping
       [not found]     ` <CAJ3xEMi4HY9Ehp-V4rP5UieAk=GjAu0X4uEnP-yMDomcFDpHkA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2016-12-08 16:51       ` Bart Van Assche
  0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2016-12-08 16:51 UTC (permalink / raw)
  To: Or Gerlitz
  Cc: Doug Ledford, Christoph Hellwig, Sagi Grimberg,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 12/07/2016 10:48 PM, Or Gerlitz wrote:
> On Thu, Dec 8, 2016 at 3:10 AM, Bart Van Assche
> <bart.vanassche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org> wrote:
>> The DMA mapping operations are in the hot path so it is important that the
>> overhead of these operations is as low as possible. There has been a reason
>> in the past to have DMA mapping operations that are specific to the IB
>> subsystem but that reason no longer exists today.
> 
> Can you elaborate on that a little further? I recall it was something
> with the ipath and co. drivers and memory extensions on 32-bit systems?
> 
>> An additional benefit is that the size of HW and SW
>> drivers that do not use DMA is reduced by switching to dma_noop_ops.
> 
> few words on that?

Hello Or,

The following two patches removed the dma_address and dma_len callback
functions from struct ib_dma_mapping_ops:

commit 446bf432a9b084d9f3471eca309cc53fa434ccc7
Author: Mike Marciniszyn <mike.marciniszyn-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Date:   Fri Mar 28 13:26:42 2014 -0400

    IB/qib: Remove ib_sg_dma_address() and ib_sg_dma_len() overloads

commit ea58a595657db88f55b5159442fdf0e34e1b4d95
Author: Mike Marciniszyn <mike.marciniszyn-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
Date:   Fri Mar 28 13:26:59 2014 -0400

    IB/core: Remove overload in ib_sg_dma*

Regarding your second question: all the struct ib_dma_mapping_ops
instances that exist today have the same functionality as the functions
in dma_noop_ops but with slightly different calling conventions. Hence
the switch to dma_noop_ops.
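
To make the hot-path point concrete, here is a compilable userspace model
of the dispatch the series removes (all struct names and signatures below
are simplified stand-ins, not the real kernel definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace model (not kernel code) of the IB DMA-mapping dispatch. */
struct ib_dma_mapping_ops {
	uint64_t (*map_single)(void *hca, void *cpu_addr, size_t size);
};

struct ib_device_model {
	struct ib_dma_mapping_ops *dma_ops;	/* non-NULL only for SW drivers */
};

/* Stand-in for dma_map_single() on the underlying struct device. */
static uint64_t generic_map_single(void *cpu_addr, size_t size)
{
	(void)size;
	return (uint64_t)(uintptr_t)cpu_addr;
}

/* Old hot path: every ib_dma_map_single() call pays for this branch. */
static uint64_t old_ib_dma_map_single(struct ib_device_model *dev,
				      void *cpu_addr, size_t size)
{
	if (dev->dma_ops)
		return dev->dma_ops->map_single(dev, cpu_addr, size);
	return generic_map_single(cpu_addr, size);
}

/* After the series: callers go straight to the generic DMA API, and
 * drivers needing special behavior install ops on the struct device. */
static uint64_t new_ib_dma_map_single(struct ib_device_model *dev,
				      void *cpu_addr, size_t size)
{
	(void)dev;
	return generic_map_single(cpu_addr, size);
}
```

For HW drivers (dma_ops == NULL) the two paths return the same mapping;
the only difference is the per-call branch, which is what the series
eliminates.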

In case anyone would like to see the diffstat for the entire patch
series, it is as follows:

 b/arch/alpha/include/asm/dma-mapping.h                     |    4 
 b/arch/alpha/kernel/pci-noop.c                             |    4 
 b/arch/alpha/kernel/pci_iommu.c                            |    4 
 b/arch/arc/include/asm/dma-mapping.h                       |    4 
 b/arch/arc/mm/dma.c                                        |    2 
 b/arch/arm/common/dmabounce.c                              |    2 
 b/arch/arm/include/asm/device.h                            |    1 
 b/arch/arm/include/asm/dma-mapping.h                       |   23 
 b/arch/arm/include/asm/xen/hypervisor.h                    |    2 
 b/arch/arm/mm/dma-mapping.c                                |   22 
 b/arch/arm/xen/mm.c                                        |    4 
 b/arch/arm64/include/asm/device.h                          |    1 
 b/arch/arm64/include/asm/dma-mapping.h                     |   14 
 b/arch/arm64/mm/dma-mapping.c                              |   14 
 b/arch/avr32/include/asm/dma-mapping.h                     |    4 
 b/arch/avr32/mm/dma-coherent.c                             |    2 
 b/arch/blackfin/include/asm/dma-mapping.h                  |    4 
 b/arch/blackfin/kernel/dma-mapping.c                       |    2 
 b/arch/c6x/include/asm/dma-mapping.h                       |    4 
 b/arch/c6x/kernel/dma.c                                    |    2 
 b/arch/cris/arch-v32/drivers/pci/dma.c                     |    2 
 b/arch/cris/include/asm/dma-mapping.h                      |    6 
 b/arch/frv/include/asm/dma-mapping.h                       |    4 
 b/arch/frv/mb93090-mb00/pci-dma-nommu.c                    |    2 
 b/arch/frv/mb93090-mb00/pci-dma.c                          |    2 
 b/arch/h8300/include/asm/dma-mapping.h                     |    4 
 b/arch/h8300/kernel/dma.c                                  |    2 
 b/arch/hexagon/include/asm/dma-mapping.h                   |    7 
 b/arch/hexagon/kernel/dma.c                                |    4 
 b/arch/ia64/hp/common/hwsw_iommu.c                         |    4 
 b/arch/ia64/hp/common/sba_iommu.c                          |    4 
 b/arch/ia64/include/asm/dma-mapping.h                      |    7 
 b/arch/ia64/include/asm/machvec.h                          |    4 
 b/arch/ia64/kernel/dma-mapping.c                           |    4 
 b/arch/ia64/kernel/pci-dma.c                               |   10 
 b/arch/ia64/kernel/pci-swiotlb.c                           |    2 
 b/arch/m68k/include/asm/dma-mapping.h                      |    4 
 b/arch/m68k/kernel/dma.c                                   |    2 
 b/arch/metag/include/asm/dma-mapping.h                     |    4 
 b/arch/metag/kernel/dma.c                                  |    2 
 b/arch/microblaze/include/asm/dma-mapping.h                |    4 
 b/arch/microblaze/kernel/dma.c                             |    2 
 b/arch/mips/cavium-octeon/dma-octeon.c                     |    4 
 b/arch/mips/include/asm/device.h                           |    5 
 b/arch/mips/include/asm/dma-mapping.h                      |    9 
 b/arch/mips/include/asm/mach-cavium-octeon/dma-coherence.h |    2 
 b/arch/mips/include/asm/netlogic/common.h                  |    2 
 b/arch/mips/loongson64/common/dma-swiotlb.c                |    2 
 b/arch/mips/mm/dma-default.c                               |    4 
 b/arch/mips/netlogic/common/nlm-dma.c                      |    2 
 b/arch/mips/pci/pci-octeon.c                               |    2 
 b/arch/mn10300/include/asm/dma-mapping.h                   |    4 
 b/arch/mn10300/mm/dma-alloc.c                              |    2 
 b/arch/nios2/include/asm/dma-mapping.h                     |    4 
 b/arch/nios2/mm/dma-mapping.c                              |    2 
 b/arch/openrisc/include/asm/dma-mapping.h                  |    4 
 b/arch/openrisc/kernel/dma.c                               |    2 
 b/arch/parisc/include/asm/dma-mapping.h                    |    8 
 b/arch/parisc/kernel/drivers.c                             |    2 
 b/arch/parisc/kernel/pci-dma.c                             |    4 
 b/arch/powerpc/include/asm/device.h                        |    4 
 b/arch/powerpc/include/asm/dma-mapping.h                   |   19 
 b/arch/powerpc/include/asm/pci.h                           |    4 
 b/arch/powerpc/include/asm/ps3.h                           |    2 
 b/arch/powerpc/include/asm/swiotlb.h                       |    2 
 b/arch/powerpc/kernel/dma-swiotlb.c                        |    2 
 b/arch/powerpc/kernel/dma.c                                |    8 
 b/arch/powerpc/kernel/ibmebus.c                            |    4 
 b/arch/powerpc/kernel/pci-common.c                         |    6 
 b/arch/powerpc/kernel/vio.c                                |    2 
 b/arch/powerpc/platforms/cell/iommu.c                      |    6 
 b/arch/powerpc/platforms/pasemi/iommu.c                    |    2 
 b/arch/powerpc/platforms/pasemi/setup.c                    |    2 
 b/arch/powerpc/platforms/powernv/npu-dma.c                 |    2 
 b/arch/powerpc/platforms/ps3/system-bus.c                  |    8 
 b/arch/s390/include/asm/device.h                           |    1 
 b/arch/s390/include/asm/dma-mapping.h                      |    6 
 b/arch/s390/pci/pci.c                                      |    2 
 b/arch/s390/pci/pci_dma.c                                  |    2 
 b/arch/sh/include/asm/dma-mapping.h                        |    4 
 b/arch/sh/kernel/dma-nommu.c                               |    2 
 b/arch/sh/mm/consistent.c                                  |    2 
 b/arch/sparc/include/asm/dma-mapping.h                     |   10 
 b/arch/sparc/kernel/iommu.c                                |    4 
 b/arch/sparc/kernel/ioport.c                               |    8 
 b/arch/sparc/kernel/pci_sun4v.c                            |    2 
 b/arch/tile/include/asm/device.h                           |    3 
 b/arch/tile/include/asm/dma-mapping.h                      |   20 
 b/arch/tile/kernel/pci-dma.c                               |   24 -
 b/arch/unicore32/include/asm/dma-mapping.h                 |    4 
 b/arch/unicore32/mm/dma-swiotlb.c                          |    2 
 b/arch/x86/include/asm/device.h                            |    5 
 b/arch/x86/include/asm/dma-mapping.h                       |   11 
 b/arch/x86/include/asm/iommu.h                             |    2 
 b/arch/x86/kernel/amd_gart_64.c                            |    2 
 b/arch/x86/kernel/pci-calgary_64.c                         |    6 
 b/arch/x86/kernel/pci-dma.c                                |    4 
 b/arch/x86/kernel/pci-nommu.c                              |    2 
 b/arch/x86/kernel/pci-swiotlb.c                            |    2 
 b/arch/x86/pci/common.c                                    |    2 
 b/arch/x86/pci/sta2x11-fixup.c                             |   10 
 b/arch/x86/xen/pci-swiotlb-xen.c                           |    2 
 b/arch/xtensa/include/asm/device.h                         |    4 
 b/arch/xtensa/include/asm/dma-mapping.h                    |    9 
 b/arch/xtensa/kernel/pci-dma.c                             |    2 
 b/drivers/infiniband/core/mad.c                            |   28 -
 b/drivers/infiniband/core/rw.c                             |   30 -
 b/drivers/infiniband/core/umem.c                           |    4 
 b/drivers/infiniband/core/umem_odp.c                       |    6 
 b/drivers/infiniband/hw/mlx4/cq.c                          |    2 
 b/drivers/infiniband/hw/mlx4/mad.c                         |   28 -
 b/drivers/infiniband/hw/mlx4/mr.c                          |    4 
 b/drivers/infiniband/hw/mlx4/qp.c                          |   10 
 b/drivers/infiniband/hw/mlx5/mr.c                          |    4 
 b/drivers/infiniband/hw/qib/qib_keys.c                     |    2 
 b/drivers/infiniband/sw/rdmavt/Makefile                    |    2 
 b/drivers/infiniband/sw/rdmavt/mr.c                        |    4 
 b/drivers/infiniband/sw/rdmavt/vt.c                        |    4 
 b/drivers/infiniband/sw/rdmavt/vt.h                        |    1 
 b/drivers/infiniband/sw/rxe/Makefile                       |    1 
 b/drivers/infiniband/sw/rxe/rxe_loc.h                      |    2 
 b/drivers/infiniband/sw/rxe/rxe_verbs.c                    |    2 
 b/drivers/infiniband/ulp/ipoib/ipoib_cm.c                  |   20 
 b/drivers/infiniband/ulp/ipoib/ipoib_ib.c                  |   22 
 b/drivers/infiniband/ulp/iser/iscsi_iser.c                 |    6 
 b/drivers/infiniband/ulp/iser/iser_initiator.c             |   38 -
 b/drivers/infiniband/ulp/iser/iser_memory.c                |   12 
 b/drivers/infiniband/ulp/iser/iser_verbs.c                 |    2 
 b/drivers/infiniband/ulp/isert/ib_isert.c                  |   54 +-
 b/drivers/infiniband/ulp/srp/ib_srp.c                      |   50 +-
 b/drivers/infiniband/ulp/srpt/ib_srpt.c                    |   12 
 b/drivers/iommu/amd_iommu.c                                |   10 
 b/drivers/misc/mic/bus/mic_bus.c                           |    4 
 b/drivers/misc/mic/bus/scif_bus.c                          |    4 
 b/drivers/misc/mic/bus/scif_bus.h                          |    2 
 b/drivers/misc/mic/bus/vop_bus.c                           |    2 
 b/drivers/misc/mic/host/mic_boot.c                         |    4 
 b/drivers/nvme/host/rdma.c                                 |   22 
 b/drivers/nvme/target/rdma.c                               |   20 
 b/drivers/parisc/ccio-dma.c                                |    2 
 b/drivers/parisc/sba_iommu.c                               |    2 
 b/drivers/pci/host/vmd.c                                   |    2 
 b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h      |   14 
 b/include/linux/device.h                                   |    2 
 b/include/linux/dma-mapping.h                              |   54 +-
 b/include/linux/mic_bus.h                                  |    2 
 b/include/rdma/ib_verbs.h                                  |  310 -------------
 b/lib/dma-noop.c                                           |    2 
 b/net/9p/trans_rdma.c                                      |   12 
 b/net/rds/ib.h                                             |   14 
 b/net/rds/ib_cm.c                                          |   12 
 b/net/rds/ib_fmr.c                                         |   10 
 b/net/rds/ib_frmr.c                                        |    8 
 b/net/rds/ib_rdma.c                                        |    6 
 b/net/rds/ib_recv.c                                        |   14 
 b/net/rds/ib_send.c                                        |   26 -
 b/net/sunrpc/xprtrdma/fmr_ops.c                            |    6 
 b/net/sunrpc/xprtrdma/frwr_ops.c                           |    6 
 b/net/sunrpc/xprtrdma/rpc_rdma.c                           |   14 
 b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c               |    4 
 b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c                  |    8 
 b/net/sunrpc/xprtrdma/svc_rdma_sendto.c                    |   14 
 b/net/sunrpc/xprtrdma/svc_rdma_transport.c                 |    8 
 b/net/sunrpc/xprtrdma/verbs.c                              |    8 
 drivers/infiniband/hw/hfi1/dma.c                           |  183 -------
 drivers/infiniband/hw/qib/qib_dma.c                        |  169 -------
 drivers/infiniband/sw/rdmavt/dma.c                         |  198 --------
 drivers/infiniband/sw/rdmavt/dma.h                         |   53 --
 drivers/infiniband/sw/rxe/rxe_dma.c                        |  183 -------
 169 files changed, 553 insertions(+), 1712 deletions(-)

Bart.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 1/5] treewide: constify most struct dma_map_ops
       [not found]     ` <f6b70724-772c-c17f-f1be-1681fab31228-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
@ 2016-12-09 18:20       ` Christoph Hellwig
  0 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2016-12-09 18:20 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, Christoph Hellwig, Sagi Grimberg,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

Looks fine,

Reviewed-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 2/5] misc: vop: Remove a cast
       [not found]     ` <6fff2450-6442-4539-47ff-67f04a593c06-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
@ 2016-12-09 18:20       ` Christoph Hellwig
  0 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2016-12-09 18:20 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, Christoph Hellwig, Sagi Grimberg,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

Looks fine,

Reviewed-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/5] Move dma_ops from archdata into struct device
  2016-12-08  1:11   ` [PATCH 3/5] Move dma_ops from archdata into struct device Bart Van Assche
@ 2016-12-09 18:22       ` Christoph Hellwig
  0 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2016-12-09 18:22 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: linux-arch, Sagi Grimberg, linux-rdma, linux-kernel,
	virtualization, Doug Ledford, David Woodhouse, Christoph Hellwig

We'll need a bit of a wider audience for this, I think.

On Wed, Dec 07, 2016 at 05:11:28PM -0800, Bart Van Assche wrote:
> Additionally, introduce set_dma_ops(). A later patch will introduce a
> call to that function in the RDMA drivers that will be modified to use
> dma_noop_ops.

This looks good to me, and we had a lot of talk about this for other
purposes for a while. 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/5] IB: Switch from struct ib_dma_mapping_ops to struct dma_mapping_ops
       [not found]     ` <25d066c2-59d7-2be7-dd56-e29e99b43620-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
@ 2016-12-09 18:23       ` Christoph Hellwig
  2016-12-09 18:24       ` Christoph Hellwig
  1 sibling, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2016-12-09 18:23 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, Christoph Hellwig, Sagi Grimberg,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

Looks fine:

Reviewed-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>

Even better with a follow-up patch to kill the ib_* wrappers.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 5/5] treewide: Inline ib_dma_map_*() functions
       [not found]     ` <9bdf696e-ec64-d60e-3d7e-7ad5b3000d60-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
@ 2016-12-09 18:23       ` Christoph Hellwig
  0 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2016-12-09 18:23 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, Christoph Hellwig, Sagi Grimberg,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

Ah, there we go:

Reviewed-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/5] IB: Switch from struct ib_dma_mapping_ops to struct dma_mapping_ops
       [not found]     ` <25d066c2-59d7-2be7-dd56-e29e99b43620-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
  2016-12-09 18:23       ` Christoph Hellwig
@ 2016-12-09 18:24       ` Christoph Hellwig
       [not found]         ` <20161209182429.GF16622-jcswGhMUV9g@public.gmane.org>
  1 sibling, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2016-12-09 18:24 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, Christoph Hellwig, Sagi Grimberg,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

> Additionally, use dma_noop_ops instead of duplicating it.

Oops, I don't think that part actually is correct.  Both rdmavt
and the weird intel drivers want the _virtual_ address in their dma
address instead of the physical one set by dma_noop_ops.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/5] Move dma_ops from archdata into struct device
  2016-12-09 18:22       ` Christoph Hellwig
  (?)
  (?)
@ 2016-12-09 19:13       ` David Woodhouse
  2016-12-09 19:46           ` Bart Van Assche
  -1 siblings, 1 reply; 36+ messages in thread
From: David Woodhouse @ 2016-12-09 19:13 UTC (permalink / raw)
  To: Christoph Hellwig, Bart Van Assche
  Cc: Doug Ledford, Sagi Grimberg, linux-rdma, virtualization,
	linux-arch, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 721 bytes --]

On Fri, 2016-12-09 at 19:22 +0100, Christoph Hellwig wrote:
> We'll need a bit of a wider audience for this, I think.
> 
> On Wed, Dec 07, 2016 at 05:11:28PM -0800, Bart Van Assche wrote:
> > Additionally, introduce set_dma_ops(). A later patch will introduce a
> > call to that function in the RDMA drivers that will be modified to use
> > dma_noop_ops.
> 
> This looks good to me, and we had a lot of talk about this for other
> purposes for a while. 

Hm, I'm not convinced we want per-device dma_ops. What we want is per-
device IOMMU ops, and any dma_ops are just a generic or platform-
specific (in some cases) wrapper around those. We shouldn't normally
need per-device DMA ops at all.

-- 
dwmw2

[-- Attachment #2: smime.p7s --]
[-- Type: application/x-pkcs7-signature, Size: 5760 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/5] Move dma_ops from archdata into struct device
  2016-12-09 19:13       ` David Woodhouse
  2016-12-09 19:46           ` Bart Van Assche
@ 2016-12-09 19:46           ` Bart Van Assche
  0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2016-12-09 19:46 UTC (permalink / raw)
  To: David Woodhouse, Christoph Hellwig
  Cc: linux-arch, Sagi Grimberg, linux-rdma, linux-kernel,
	virtualization, Doug Ledford

On 12/09/2016 11:13 AM, David Woodhouse wrote:
> On Fri, 2016-12-09 at 19:22 +0100, Christoph Hellwig wrote:
>> We'll need a bit of a wider audience for this, I think.
>>
>> On Wed, Dec 07, 2016 at 05:11:28PM -0800, Bart Van Assche wrote:
>>> Additionally, introduce set_dma_ops(). A later patch will introduce a
>>> call to that function in the RDMA drivers that will be modified to use
>>> dma_noop_ops.
>>
>> This looks good to me, and we had a lot of talk about this for other
>> purposes for a while.
>
> Hm, I'm not convinced we want per-device dma_ops. What we want is per-
> device IOMMU ops, and any dma_ops are just a generic or platform-
> specific (in some cases) wrapper around those. We shouldn't normally
> need per-device DMA ops at all.

Hello David,

Can you recommend an approach for e.g. the qib driver 
(drivers/infiniband/hw/qib)? That driver uses the CPU (PIO) instead of 
DMA to transfer data to a PCIe device. Sorry, but I don't see how 
per-device IOMMU ops would make it possible to avoid triggering e.g. a 
cache flush before PIO starts.

Bart.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/5] IB: Switch from struct ib_dma_mapping_ops to struct dma_mapping_ops
       [not found]         ` <20161209182429.GF16622-jcswGhMUV9g@public.gmane.org>
@ 2016-12-19 16:42           ` Dennis Dalessandro
       [not found]             ` <52e8398f-a146-721c-3b92-0892b4abbff8-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Dennis Dalessandro @ 2016-12-19 16:42 UTC (permalink / raw)
  To: Christoph Hellwig, Bart Van Assche
  Cc: Doug Ledford, Sagi Grimberg, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 12/09/2016 01:24 PM, Christoph Hellwig wrote:
>> Additionally, use dma_noop_ops instead of duplicating it.
>
> Oops, I don't think that part actually is correct.  Both rdmavt
> and the weird intel drivers want the _virtual_ address in their dma
> address instead of the physical one set by dma_noop_ops.

Yes, Christoph is correct in that our drivers use the virtual address. 
It would require more changes in the rest of the driver to make it use 
the physical address. Possible, I think, but more work.

-Denny


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/5] IB: Switch from struct ib_dma_mapping_ops to struct dma_mapping_ops
       [not found]             ` <52e8398f-a146-721c-3b92-0892b4abbff8-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
@ 2016-12-19 16:55               ` Bart Van Assche
       [not found]                 ` <1482166487.25336.10.camel-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Bart Van Assche @ 2016-12-19 16:55 UTC (permalink / raw)
  To: hch-jcswGhMUV9g, dennis.dalessandro-ral2JQCrhuEAvxtiuMwx3w
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	dledford-H+wXaHxf7aLQT0dZR+AlfA, sagi-NQWnxTmZq1alnMjI0IkVqw

On Mon, 2016-12-19 at 11:42 -0500, Dennis Dalessandro wrote:
> On 12/09/2016 01:24 PM, Christoph Hellwig wrote:
> > > Additionally, use dma_noop_ops instead of duplicating it.
> > 
> > Oops, I don't think that part actually is correct.  Both rdmavt
> > and the weird intel drivers want the _virtual_ address in their dma
> > address instead of the physical one set by dma_noop_ops.
> 
> Yes, Christoph is correct in that our drivers use the virtual address. 
> It would require more changes in the rest of the driver to make it use 
> the physical address. Possible I think, but more work.

Hello Denny,

Once the v4.10 merge window has closed and v4.10-rc1 is out, I will
repost this patch series. When I do, I will make sure that the DMA code
for the RDMA drivers touched by this series uses virtual addresses
instead of physical ones.

Bart.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
       [not found]                 ` <1482166487.25336.10.camel-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
@ 2016-12-20 19:33                   ` Laurence Oberman
       [not found]                     ` <1918919536.8196250.1482262406057.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Laurence Oberman @ 2016-12-20 19:33 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Hello Bart

I pulled the latest linux-next and built kernels for both server and client to rerun all my EDR tests for srp. 

For some reason the I/O size is being capped again to 1MB in my testing.
Using my same testbed.
Remember we spent a lot of time making sure we could do 4MB I/O :)

It's working fine in the RHEL 7.3 kernel, so before I start going back and testing upstream kernels I decided to ask:

Have you tested large I/O with the latest linux-next?

Server Configuration
---------------------
Linux fedstorage.bos.redhat.com 4.9.0+ 

[root@fedstorage modprobe.d]# cat ib_srp.conf
options ib_srp cmd_sg_entries=64 indirect_sg_entries=2048

[root@fedstorage modprobe.d]# cat ib_srpt.conf
options ib_srpt srp_max_req_size=8296

Also Using 

# Set the srp_sq_size
for i in /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4e /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4f
do 
	echo 16384 > $i/tpgt_1/attrib/srp_sq_size
done

Client Configuration
--------------------
Linux ibclient 4.9.0+

[root@ibclient modprobe.d]# cat ib_srp.conf
options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048 

dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct

### RECORD    4 >>> ibclient <<< (1482261733.001) (Tue Dec 20 14:22:13 2016) ###
# DISK STATISTICS (/sec)
#                   <---------reads---------------><---------writes--------------><--------averages--------> Pct
#Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size  Wait  RWSize  QLen  Wait SvcTim Util
14:22:13 sdw         1373184      0 1341 1024     2       0      0    0    0     0    1024     3     2      0   97


If I reboot into my 7.3 kernel, it's back to what I expect:

 dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct


### RECORD    3 >>> ibclient <<< (1482262254.001) (Tue Dec 20 14:30:54 2016) ###
# DISK STATISTICS (/sec)
#                   <---------reads---------------><---------writes--------------><--------averages--------> Pct
#Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size  Wait  RWSize  QLen  Wait SvcTim Util
14:30:54 sdw         172032    129   42 4096     3       0      0    0    0     0    4096     1     3      3   130


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
       [not found]                     ` <1918919536.8196250.1482262406057.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-12-20 19:43                       ` Laurence Oberman
       [not found]                         ` <2052479881.8196880.1482263028727.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Laurence Oberman @ 2016-12-20 19:43 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA



----- Original Message -----
> From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Sent: Tuesday, December 20, 2016 2:33:26 PM
> Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> 
> Hello Bart
> 
> I pulled the latest linux-next and built kernels for both server and client
> to rerun all my EDR tests for srp.
> 
> For some reason the I/O size is being capped again to 1MB in my testing.
> Using my same testbed.
> Remember we spent a lot of time making sure we could do 4MB I/O :)
> 
> It's working fine in the RHEL 7.3 kernel, so before I start going back and
> testing upstream kernels I decided to ask:
> 
> Have you tested large I/O with the latest linux-next?
> 
> Server Configuration
> ---------------------
> Linux fedstorage.bos.redhat.com 4.9.0+
> 
> [root@fedstorage modprobe.d]# cat ib_srp.conf
> options ib_srp cmd_sg_entries=64 indirect_sg_entries=2048
> 
> [root@fedstorage modprobe.d]# cat ib_srpt.conf
> options ib_srpt srp_max_req_size=8296
> 
> Also Using
> 
> # Set the srp_sq_size
> for i in /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4e
> /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4f
> do
> 	echo 16384 > $i/tpgt_1/attrib/srp_sq_size
> done
> 
> Client Configuration
> --------------------
> Linux ibclient 4.9.0+
> 
> [root@ibclient modprobe.d]# cat ib_srp.conf
> options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048
> 
> dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> 
> ### RECORD    4 >>> ibclient <<< (1482261733.001) (Tue Dec 20 14:22:13 2016)
> ###
> # DISK STATISTICS (/sec)
> #
> <---------reads---------------><---------writes--------------><--------averages-------->
> Pct
> #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> Wait  RWSize  QLen  Wait SvcTim Util
> 14:22:13 sdw         1373184      0 1341 1024     2       0      0    0    0
> 0    1024     3     2      0   97
> 
> 
> If I reboot into my 7.3 kernel, it's back to what I expect:
> 
>  dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> 
> 
> ### RECORD    3 >>> ibclient <<< (1482262254.001) (Tue Dec 20 14:30:54 2016)
> ###
> # DISK STATISTICS (/sec)
> #
> <---------reads---------------><---------writes--------------><--------averages-------->
> Pct
> #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> Wait  RWSize  QLen  Wait SvcTim Util
> 14:30:54 sdw         172032    129   42 4096     3       0      0    0    0
> 0    4096     1     3      3   130
> 
> 

Hi Bart,

Just FYI

That dd snapshot was taken just as I had stopped the dd.

Here is a stable dd snapshot with the RHEL kernel. I just noticed the 
merging; I need to reboot back into upstream to compare again. No 
merging is seen upstream.

Thanks
Laurence

### RECORD    6 >>> ibclient <<< (1482262723.001) (Tue Dec 20 14:38:43 2016) ###
# DISK STATISTICS (/sec)
#                   <---------reads---------------><---------writes--------------><--------averages--------> Pct
#Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size  Wait  RWSize  QLen  Wait SvcTim Util
14:38:43 sdw         1200128    879  293 4096     3       0      0    0    0     0    4096     1     3      3   95

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
       [not found]                         ` <2052479881.8196880.1482263028727.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-12-20 20:44                           ` Laurence Oberman
       [not found]                             ` <1668746735.8200653.1482266682585.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Laurence Oberman @ 2016-12-20 20:44 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA



----- Original Message -----
> From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Sent: Tuesday, December 20, 2016 2:43:48 PM
> Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> 
> 
> 
> ----- Original Message -----
> > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > Sent: Tuesday, December 20, 2016 2:33:26 PM
> > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > 
> > Hello Bart
> > 
> > I pulled the latest linux-next and built kernels for both server and client
> > to rerun all my EDR tests for srp.
> > 
> > For some reason the I/O size is being capped again to 1MB in my testing.
> > Using my same testbed.
> > Remember we spent a lot of time making sure we could do 4MB I/O :)
> > 
> > It's working fine in the RHEL 7.3 kernel, so before I start going back and
> > testing upstream kernels I decided to ask:
> > 
> > Have you tested large I/O with the latest linux-next?
> > 
> > Server Configuration
> > ---------------------
> > Linux fedstorage.bos.redhat.com 4.9.0+
> > 
> > [root@fedstorage modprobe.d]# cat ib_srp.conf
> > options ib_srp cmd_sg_entries=64 indirect_sg_entries=2048
> > 
> > [root@fedstorage modprobe.d]# cat ib_srpt.conf
> > options ib_srpt srp_max_req_size=8296
> > 
> > Also Using
> > 
> > # Set the srp_sq_size
> > for i in /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4e
> > /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4f
> > do
> > 	echo 16384 > $i/tpgt_1/attrib/srp_sq_size
> > done
> > 
> > Client Configuration
> > --------------------
> > Linux ibclient 4.9.0+
> > 
> > [root@ibclient modprobe.d]# cat ib_srp.conf
> > options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048
> > 
> > dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > 
> > ### RECORD    4 >>> ibclient <<< (1482261733.001) (Tue Dec 20 14:22:13
> > 2016)
> > ###
> > # DISK STATISTICS (/sec)
> > #
> > <---------reads---------------><---------writes--------------><--------averages-------->
> > Pct
> > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> > Wait  RWSize  QLen  Wait SvcTim Util
> > 14:22:13 sdw         1373184      0 1341 1024     2       0      0    0
> > 0
> > 0    1024     3     2      0   97
> > 
> > 
> > If I reboot into my 7.3 kernel, it's back to what I expect:
> > 
> >  dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > 
> > 
> > ### RECORD    3 >>> ibclient <<< (1482262254.001) (Tue Dec 20 14:30:54
> > 2016)
> > ###
> > # DISK STATISTICS (/sec)
> > #
> > <---------reads---------------><---------writes--------------><--------averages-------->
> > Pct
> > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> > Wait  RWSize  QLen  Wait SvcTim Util
> > 14:30:54 sdw         172032    129   42 4096     3       0      0    0    0
> > 0    4096     1     3      3   130
> > 
> > 
> 
> Hi Bart,
> 
> Just FYI
> 
> That dd snapshot was taken just as I had stopped the dd.
> 
> Here is a stable dd snapshot with the RHEL kernel. I just noticed the
> merging; I need to reboot back into upstream to compare again.
> No merging is seen upstream.
> 
> Thanks
> Laurence
> 
> ### RECORD    6 >>> ibclient <<< (1482262723.001) (Tue Dec 20 14:38:43 2016)
> ###
> # DISK STATISTICS (/sec)
> #
> <---------reads---------------><---------writes--------------><--------averages-------->
> Pct
> #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> Wait  RWSize  QLen  Wait SvcTim Util
> 14:38:43 sdw         1200128    879  293 4096     3       0      0    0    0
> 0    4096     1     3      3   95
> 

Replying to my own message to keep the thread going


This is Linux ibclient 4.8.0-rc4 

It behaves as I expected, and I see the 4MB I/O sizes, as I had already tested this.

dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct


### RECORD    6 >>> ibclient <<< (1482266543.001) (Tue Dec 20 15:42:23 2016) ###
# DISK STATISTICS (/sec)
#                   <---------reads---------------><---------writes--------------><--------averages--------> Pct
#Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size  Wait  RWSize  QLen  Wait SvcTim Util
15:42:23 sdw         278528    201   68 4096     2       0      0    0    0     0    4096     1     2      2   206

Rebooting back into 4.9, and I will bisect to find the offending commit once I confirm


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
       [not found]                             ` <1668746735.8200653.1482266682585.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-12-20 21:09                               ` Laurence Oberman
  2016-12-21  3:31                               ` Laurence Oberman
  1 sibling, 0 replies; 36+ messages in thread
From: Laurence Oberman @ 2016-12-20 21:09 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA



----- Original Message -----
> From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Sent: Tuesday, December 20, 2016 3:44:42 PM
> Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> 
> 
> 
> ----- Original Message -----
> > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > Sent: Tuesday, December 20, 2016 2:43:48 PM
> > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > 
> > 
> > 
> > ----- Original Message -----
> > > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > > Sent: Tuesday, December 20, 2016 2:33:26 PM
> > > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > > 
> > > Hello Bart
> > > 
> > > I pulled the latest linux-next and built kernels for both server and
> > > client
> > > to rerun all my EDR tests for srp.
> > > 
> > > For some reason the I/O size is being capped again to 1MB in my testing.
> > > Using my same testbed.
> > > Remember we spent a lot of time making sure we could do 4MB I/O :)
> > > 
> > > It's working fine in the RHEL 7.3 kernel, so before I start going back
> > > and testing upstream kernels I decided to ask:
> > > 
> > > Have you tested large I/O with the latest linux-next?
> > > 
> > > Server Configuration
> > > ---------------------
> > > Linux fedstorage.bos.redhat.com 4.9.0+
> > > 
> > > [root@fedstorage modprobe.d]# cat ib_srp.conf
> > > options ib_srp cmd_sg_entries=64 indirect_sg_entries=2048
> > > 
> > > [root@fedstorage modprobe.d]# cat ib_srpt.conf
> > > options ib_srpt srp_max_req_size=8296
> > > 
> > > Also Using
> > > 
> > > # Set the srp_sq_size
> > > for i in
> > > /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4e
> > > /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4f
> > > do
> > > 	echo 16384 > $i/tpgt_1/attrib/srp_sq_size
> > > done
> > > 
> > > Client Configuration
> > > --------------------
> > > Linux ibclient 4.9.0+
> > > 
> > > [root@ibclient modprobe.d]# cat ib_srp.conf
> > > options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048
> > > 
> > > dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > > 
> > > ### RECORD    4 >>> ibclient <<< (1482261733.001) (Tue Dec 20 14:22:13
> > > 2016)
> > > ###
> > > # DISK STATISTICS (/sec)
> > > #
> > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > Pct
> > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs
> > > Size
> > > Wait  RWSize  QLen  Wait SvcTim Util
> > > 14:22:13 sdw         1373184      0 1341 1024     2       0      0    0
> > > 0
> > > 0    1024     3     2      0   97
> > > 
> > > 
> > > If I reboot into my 7.3 kernel, it's back to what I expect:
> > > 
> > >  dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > > 
> > > 
> > > ### RECORD    3 >>> ibclient <<< (1482262254.001) (Tue Dec 20 14:30:54
> > > 2016)
> > > ###
> > > # DISK STATISTICS (/sec)
> > > #
> > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > Pct
> > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs
> > > Size
> > > Wait  RWSize  QLen  Wait SvcTim Util
> > > 14:30:54 sdw         172032    129   42 4096     3       0      0    0
> > > 0
> > > 0    4096     1     3      3   130
> > > 
> > > 
> > 
> > Hi Bart,
> > 
> > Just FYI
> > 
> > That dd snapshot was taken just as I had stopped the dd.
> > 
> > Here is a stable dd snapshot with the RHEL kernel. I just noticed the
> > merging; I need to reboot back into upstream to compare again.
> > No merging is seen upstream.
> > 
> > Thanks
> > Laurence
> > 
> > ### RECORD    6 >>> ibclient <<< (1482262723.001) (Tue Dec 20 14:38:43
> > 2016)
> > ###
> > # DISK STATISTICS (/sec)
> > #
> > <---------reads---------------><---------writes--------------><--------averages-------->
> > Pct
> > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> > Wait  RWSize  QLen  Wait SvcTim Util
> > 14:38:43 sdw         1200128    879  293 4096     3       0      0    0
> > 0
> > 0    4096     1     3      3   95
> > 
> 
> Replying to my own message to keep the thread going
> 
> 
> This is Linux ibclient 4.8.0-rc4
> 
> It behaves as I expected, and I see the 4MB I/O sizes, as I had already
> tested this.
> 
> dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> 
> 
> ### RECORD    6 >>> ibclient <<< (1482266543.001) (Tue Dec 20 15:42:23 2016)
> ###
> # DISK STATISTICS (/sec)
> #
> <---------reads---------------><---------writes--------------><--------averages-------->
> Pct
> #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> Wait  RWSize  QLen  Wait SvcTim Util
> 15:42:23 sdw         278528    201   68 4096     2       0      0    0    0
> 0    4096     1     2      2   206
> 
> Rebooting back into 4.9, and I will bisect to find the offending commit once
> I confirm
> 
> 

[root@ibclient ~]# uname -a
Linux ibclient 4.9.0+ #1 SMP Tue Dec 20 10:06:26 EST 2016 x86_64 x86_64 x86_64 GNU/Linux

Yep, back into 4.9 and we are at 1MB I/O.

Confirmed: this is a regression or behavior change in 4.9.

while true; do dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct 1>/dev/null 2>&1 ; done &

# DISK STATISTICS (/sec)
#                   <---------reads---------------><---------writes--------------><--------averages--------> Pct
#Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size  Wait  RWSize  QLen  Wait SvcTim Util
16:06:33 sdw         1405952      0 1373 1024     2       0      0    0    0     0    1024     3     2      0   98
16:06:34 sdw         1424384      0 1391 1024     2       0      0    0    0     0    1024     3     2      0   97
16:06:35 sdw         1462272      0 1428 1024     2       0      0    0    0     0    1024     3     2      0   97
16:06:36 sdw         1417216      0 1384 1024     2       0      0    0    0     0    1024     3     2      0   97
16:06:37 sdw         1380352      0 1348 1024     2       0      0    0    0     0    1024     3     2      0   98
16:06:38 sdw         1404928      0 1372 1024     2       0      0    0    0     0    1024     3     2      0   98
16:06:39 sdw         1458176      0 1424 1024     2       0      0    0    0     0    1024     3     2      0   97

I will try to chase it down and be back with an update.

Thanks
Laurence


* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
       [not found]                             ` <1668746735.8200653.1482266682585.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2016-12-20 21:09                               ` Laurence Oberman
@ 2016-12-21  3:31                               ` Laurence Oberman
       [not found]                                 ` <120559766.8215321.1482291094018.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  1 sibling, 1 reply; 36+ messages in thread
From: Laurence Oberman @ 2016-12-21  3:31 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA



----- Original Message -----
> From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Sent: Tuesday, December 20, 2016 3:44:42 PM
> Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> 
> 
> 
> ----- Original Message -----
> > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > Sent: Tuesday, December 20, 2016 2:43:48 PM
> > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > 
> > 
> > 
> > ----- Original Message -----
> > > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > > Sent: Tuesday, December 20, 2016 2:33:26 PM
> > > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > > 
> > > Hello Bart
> > > 
> > > I pulled the latest linux-next and built kernels for both server and
> > > client
> > > to rerun all my EDR tests for srp.
> > > 
> > > For some reason the I/O size is being capped again to 1MB in my testing.
> > > Using my same testbed.
> > > Remember we spent a lot of time making sure we could do 4MB I/O :)
> > > 
> > > Its working fine in the RHEL 7.3 kernel so before I start going back
> > > testing
> > > upstream kernels decided to ask.
> > > 
> > > Have you tested large I/O with latest linux-next
> > > 
> > > Server Configuration
> > > ---------------------
> > > Linux fedstorage.bos.redhat.com 4.9.0+
> > > 
> > > [root@fedstorage modprobe.d]# cat ib_srp.conf
> > > options ib_srp cmd_sg_entries=64 indirect_sg_entries=2048
> > > 
> > > [root@fedstorage modprobe.d]# cat ib_srpt.conf
> > > options ib_srpt srp_max_req_size=8296
> > > 
> > > Also Using
> > > 
> > > # Set the srp_sq_size
> > > for i in
> > > /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4e
> > > /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4f
> > > do
> > > 	echo 16384 > $i/tpgt_1/attrib/srp_sq_size
> > > done
> > > 
> > > Client Configuration
> > > --------------------
> > > Linux ibclient 4.9.0+
> > > 
> > > [root@ibclient modprobe.d]# cat ib_srp.conf
> > > options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048
> > > 
> > > dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > > 
> > > ### RECORD    4 >>> ibclient <<< (1482261733.001) (Tue Dec 20 14:22:13
> > > 2016)
> > > ###
> > > # DISK STATISTICS (/sec)
> > > #
> > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > Pct
> > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs
> > > Size
> > > Wait  RWSize  QLen  Wait SvcTim Util
> > > 14:22:13 sdw         1373184      0 1341 1024     2       0      0    0
> > > 0
> > > 0    1024     3     2      0   97
> > > 
> > > 
> > > If I reboot into my 7.3 kernel its back to what I expect
> > > 
> > >  dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > > 
> > > 
> > > ### RECORD    3 >>> ibclient <<< (1482262254.001) (Tue Dec 20 14:30:54
> > > 2016)
> > > ###
> > > # DISK STATISTICS (/sec)
> > > #
> > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > Pct
> > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs
> > > Size
> > > Wait  RWSize  QLen  Wait SvcTim Util
> > > 14:30:54 sdw         172032    129   42 4096     3       0      0    0
> > > 0
> > > 0    4096     1     3      3   130
> > > 
> > > 
> > 
> > Hi Bart,
> > 
> > Just FYI
> > 
> > That dd snap was just as I had stopped the dd.
> > 
> > Here is a stable dd snap with the RHEL kernel, I just noticed the merging,
> > need to reboot back into upstream to compare again.
> > No merging seen in upstream.
> > 
> > Thanks
> > Laurence
> > 
> > ### RECORD    6 >>> ibclient <<< (1482262723.001) (Tue Dec 20 14:38:43
> > 2016)
> > ###
> > # DISK STATISTICS (/sec)
> > #
> > <---------reads---------------><---------writes--------------><--------averages-------->
> > Pct
> > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> > Wait  RWSize  QLen  Wait SvcTim Util
> > 14:38:43 sdw         1200128    879  293 4096     3       0      0    0
> > 0
> > 0    4096     1     3      3   95
> > 
> 
> Replying to my own message to keep the thread going
> 
> 
> This is Linux ibclient 4.8.0-rc4
> 
> Behaves like I expected, and I see the 4MB I/O sizes. as I had already tested
> this.
> 
> dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> 
> 
> ### RECORD    6 >>> ibclient <<< (1482266543.001) (Tue Dec 20 15:42:23 2016)
> ###
> # DISK STATISTICS (/sec)
> #
> <---------reads---------------><---------writes--------------><--------averages-------->
> Pct
> #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> Wait  RWSize  QLen  Wait SvcTim Util
> 15:42:23 sdw         278528    201   68 4096     2       0      0    0    0
> 0    4096     1     2      2   206
> 
> Rebooting back into 4.9 and will bisect leading to it once I confirm
> 
> 

So this is where I got to:

These all worked and I get the 4MB I/O size with direct I/O

v4.8-rc4
v4.8-rc5
v4.8-rc6
v4.8-rc7
v4.8-rc8
v4.9-rc1
v4.9-rc2
v4.9-rc3
v4.9-rc4
v4.9-rc5
v4.9-rc6
v4.9-rc7
v4.9-rc8
v4.9

Then I checked out master and built the final test kernel, 4.9.0+.

This one fails

# DISK STATISTICS (/sec)
#                   <---------reads---------------><---------writes--------------><--------averages--------> Pct
#Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size  Wait  RWSize  QLen  Wait SvcTim Util
22:12:48 sdw         1413120      0 1380 1024     2       0      0    0    0     0    1024     3     2      0   99
22:12:49 sdw         1409024      0 1376 1024     2       0      0    0    0     0    1024     3     2      0   98
22:12:50 sdw         1445888      0 1412 1024     2       0      0    0    0     0    1024     3     2      0   98
22:12:51 sdw         1429504      0 1396 1024     2       0      0    0    0     0    1024     3     2      0   98
22:12:52 sdw         1426432      0 1393 1024     2       0      0    0    0     0    1024     3     2      0   98
22:12:53 sdw         1408000      0 1375 1024     2       0      0    0    0     0    1024     3     2      0   98
                                ***      ****                                         ****
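One hypothetical place to look for the cap is the block queue limits under sysfs: max_sectors_kb bounds the request size the kernel will build, so a value of 1024 there would match the 1MB I/Os above. A small sketch (the queue_limits helper and the fake-tree demo are mine, not from the thread; on the real client you would just run queue_limits sdw against /sys):

```shell
#!/bin/sh
# queue_limits DEV [SYSFS]: print the per-request size limits for a disk.
queue_limits() {
    q="${2:-/sys}/block/$1/queue"
    for f in max_sectors_kb max_hw_sectors_kb max_segments; do
        [ -r "$q/$f" ] && echo "$f=$(cat "$q/$f")"
    done
}

# Demo against a fake sysfs tree so the sketch is self-contained;
# on a real box: queue_limits sdw
fake=$(mktemp -d)
mkdir -p "$fake/block/sdw/queue"
echo 1024 > "$fake/block/sdw/queue/max_sectors_kb"
echo 4096 > "$fake/block/sdw/queue/max_hw_sectors_kb"
queue_limits sdw "$fake"
# prints:
# max_sectors_kb=1024
# max_hw_sectors_kb=4096
```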

The last commit for 4.9-rc8 was 3e5de27

Between this and the checked-out master HEAD there are 2313 commits in the one-line log.
This was the giant merge window.

Something broke here.
I guess I will have to bisect the hard way, unless somebody recognizes which commit broke it.

Commits for SRP; none of these look like candidates for breaking the I/O size and merging:

9032ad7 Merge branches 'misc', 'qedr', 'reject-helpers', 'rxe' and 'srp' into merge-test
4fa354c IB/srp: Make writing the add_target sysfs attr interruptible
290081b IB/srp: Make mapping failures easier to debug
3787d99 IB/srp: Make login failures easier to debug
042dd76 IB/srp: Introduce a local variable in srp_add_one()
1a1faf7 IB/srp: Fix CONFIG_DYNAMIC_DEBUG=n build
0d38c24 IB/srpt: Report login failures only once

dm-related; however, I am testing against an sd device here, not a dm device:

ef548c5 dm flakey: introduce "error_writes" feature
e99dda8f dm cache policy smq: use hash_32() instead of hash_32_generic()
027c431 dm crypt: reject key strings containing whitespace chars
b446396 dm space map: always set ev if sm_ll_mutate() succeeds
0c79ce0 dm space map metadata: skip useless memcpy in metadata_ll_init_index()
314c25c dm space map metadata: fix 'struct sm_metadata' leak on failed create
58fc4fe Documentation: dm raid: define data_offset status field
11e2968 dm raid: fix discard support regression
affa9d2 dm raid: don't allow "write behind" with raid4/5/6
54cd640 dm mpath: use hw_handler_params if attached hw_handler is same as requested
c538f6e dm crypt: add ability to use keys from the kernel key retention service
0637018 dm array: remove a dead assignment in populate_ablock_with_values()
6080758 dm ioctl: use offsetof() instead of open-coding it
b23df0d dm rq: simplify use_blk_mq initialization
41c73a4 dm bufio: drop the lock when doing GFP_NOIO allocation
d12067f dm bufio: don't take the lock in dm_bufio_shrink_count
9ea61ca dm bufio: avoid sleeping while holding the dm_bufio lock
5b8c01f dm table: simplify dm_table_determine_type()
301fc3f dm table: an 'all_blk_mq' table must be loaded for a blk-mq DM device
6936c12 dm table: fix 'all_blk_mq' inconsistency when an empty table is loaded

I have time next week, so I will try to narrow it down with a per-commit bisect.

Thanks
Laurence


* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
       [not found]                                 ` <120559766.8215321.1482291094018.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-12-21  6:34                                   ` Laurence Oberman
       [not found]                                     ` <1928955380.8220327.1482302041315.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Laurence Oberman @ 2016-12-21  6:34 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA



----- Original Message -----
> From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Sent: Tuesday, December 20, 2016 10:31:34 PM
> Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> 
> 
> 
> ----- Original Message -----
> > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > Sent: Tuesday, December 20, 2016 3:44:42 PM
> > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > 
> > 
> > 
> > ----- Original Message -----
> > > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > > Sent: Tuesday, December 20, 2016 2:43:48 PM
> > > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > > 
> > > 
> > > 
> > > ----- Original Message -----
> > > > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > > > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > > > Sent: Tuesday, December 20, 2016 2:33:26 PM
> > > > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > > > 
> > > > Hello Bart
> > > > 
> > > > I pulled the latest linux-next and built kernels for both server and
> > > > client
> > > > to rerun all my EDR tests for srp.
> > > > 
> > > > For some reason the I/O size is being capped again to 1MB in my
> > > > testing.
> > > > Using my same testbed.
> > > > Remember we spent a lot of time making sure we could do 4MB I/O :)
> > > > 
> > > > Its working fine in the RHEL 7.3 kernel so before I start going back
> > > > testing
> > > > upstream kernels decided to ask.
> > > > 
> > > > Have you tested large I/O with latest linux-next
> > > > 
> > > > Server Configuration
> > > > ---------------------
> > > > Linux fedstorage.bos.redhat.com 4.9.0+
> > > > 
> > > > [root@fedstorage modprobe.d]# cat ib_srp.conf
> > > > options ib_srp cmd_sg_entries=64 indirect_sg_entries=2048
> > > > 
> > > > [root@fedstorage modprobe.d]# cat ib_srpt.conf
> > > > options ib_srpt srp_max_req_size=8296
> > > > 
> > > > Also Using
> > > > 
> > > > # Set the srp_sq_size
> > > > for i in
> > > > /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4e
> > > > /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4f
> > > > do
> > > > 	echo 16384 > $i/tpgt_1/attrib/srp_sq_size
> > > > done
> > > > 
> > > > Client Configuration
> > > > --------------------
> > > > Linux ibclient 4.9.0+
> > > > 
> > > > [root@ibclient modprobe.d]# cat ib_srp.conf
> > > > options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048
> > > > 
> > > > dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > > > 
> > > > ### RECORD    4 >>> ibclient <<< (1482261733.001) (Tue Dec 20 14:22:13
> > > > 2016)
> > > > ###
> > > > # DISK STATISTICS (/sec)
> > > > #
> > > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > > Pct
> > > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs
> > > > Size
> > > > Wait  RWSize  QLen  Wait SvcTim Util
> > > > 14:22:13 sdw         1373184      0 1341 1024     2       0      0    0
> > > > 0
> > > > 0    1024     3     2      0   97
> > > > 
> > > > 
> > > > If I reboot into my 7.3 kernel its back to what I expect
> > > > 
> > > >  dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > > > 
> > > > 
> > > > ### RECORD    3 >>> ibclient <<< (1482262254.001) (Tue Dec 20 14:30:54
> > > > 2016)
> > > > ###
> > > > # DISK STATISTICS (/sec)
> > > > #
> > > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > > Pct
> > > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs
> > > > Size
> > > > Wait  RWSize  QLen  Wait SvcTim Util
> > > > 14:30:54 sdw         172032    129   42 4096     3       0      0    0
> > > > 0
> > > > 0    4096     1     3      3   130
> > > > 
> > > > 
> > > 
> > > Hi Bart,
> > > 
> > > Just FYI
> > > 
> > > That dd snap was just as I had stopped the dd.
> > > 
> > > Here is a stable dd snap with the RHEL kernel, I just noticed the
> > > merging,
> > > need to reboot back into upstream to compare again.
> > > No merging seen in upstream.
> > > 
> > > Thanks
> > > Laurence
> > > 
> > > ### RECORD    6 >>> ibclient <<< (1482262723.001) (Tue Dec 20 14:38:43
> > > 2016)
> > > ###
> > > # DISK STATISTICS (/sec)
> > > #
> > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > Pct
> > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs
> > > Size
> > > Wait  RWSize  QLen  Wait SvcTim Util
> > > 14:38:43 sdw         1200128    879  293 4096     3       0      0    0
> > > 0
> > > 0    4096     1     3      3   95
> > > 
> > 
> > Replying to my own message to keep the thread going
> > 
> > 
> > This is Linux ibclient 4.8.0-rc4
> > 
> > Behaves like I expected, and I see the 4MB I/O sizes. as I had already
> > tested
> > this.
> > 
> > dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > 
> > 
> > ### RECORD    6 >>> ibclient <<< (1482266543.001) (Tue Dec 20 15:42:23
> > 2016)
> > ###
> > # DISK STATISTICS (/sec)
> > #
> > <---------reads---------------><---------writes--------------><--------averages-------->
> > Pct
> > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> > Wait  RWSize  QLen  Wait SvcTim Util
> > 15:42:23 sdw         278528    201   68 4096     2       0      0    0    0
> > 0    4096     1     2      2   206
> > 
> > Rebooting back into 4.9 and will bisect leading to it once I confirm
> > 
> > 
> 
> So this is where I got to here:
> 
> These all worked and I get the 4MB I/O size with direct I/O
> 
> v4.8-rc4
> v4.8-rc5
> v4.8-rc6
> v4.8-rc7
> v4.8-rc8
> v4.9
> v4.9-rc1
> v4.9-rc2
> v4.9-rc3
> v4.9-rc4
> v4.9-rc5
> v4.9-rc6
> v4.9-rc7
> v4.9-rc8
> 
> Then git checkout master and build final test kernel
> 4.9.0+
> 
> This one fails
> 
> # DISK STATISTICS (/sec)
> #
> <---------reads---------------><---------writes--------------><--------averages-------->
> Pct
> #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size
> Wait  RWSize  QLen  Wait SvcTim Util
> 22:12:48 sdw         1413120      0 1380 1024     2       0      0    0    0
> 0    1024     3     2      0   99
> 22:12:49 sdw         1409024      0 1376 1024     2       0      0    0    0
> 0    1024     3     2      0   98
> 22:12:50 sdw         1445888      0 1412 1024     2       0      0    0    0
> 0    1024     3     2      0   98
> 22:12:51 sdw         1429504      0 1396 1024     2       0      0    0    0
> 0    1024     3     2      0   98
> 22:12:52 sdw         1426432      0 1393 1024     2       0      0    0    0
> 0    1024     3     2      0   98
> 22:12:53 sdw         1408000      0 1375 1024     2       0      0    0    0
> 0    1024     3     2      0   98
>                                 ***      ****
>                                 ****
> 
> The last commit for 4.9-rc8 was 3e5de27
> 
> Between this and the master checkout HEAD there are 2313 lines of commits
> This was the giant merge.
> 
> Something broke here.
> I guess I will have to bisect the hard way, unless somebody realizes which of
> the commits broke it.
> 
> Commits for SRP, none of these look like candidates to break the I/O size and
> merge
> 
> 9032ad7 Merge branches 'misc', 'qedr', 'reject-helpers', 'rxe' and 'srp' into
> merge-test
> 4fa354c IB/srp: Make writing the add_target sysfs attr interruptible
> 290081b IB/srp: Make mapping failures easier to debug
> 3787d99 IB/srp: Make login failures easier to debug
> 042dd76 IB/srp: Introduce a local variable in srp_add_one()
> 1a1faf7 IB/srp: Fix CONFIG_DYNAMIC_DEBUG=n build
> 0d38c24 IB/srpt: Report login failures only once
> 
> dm related, however I am testing against an sd, not dm device here
> 
> ef548c5 dm flakey: introduce "error_writes" feature
> e99dda8f dm cache policy smq: use hash_32() instead of hash_32_generic()
> 027c431 dm crypt: reject key strings containing whitespace chars
> b446396 dm space map: always set ev if sm_ll_mutate() succeeds
> 0c79ce0 dm space map metadata: skip useless memcpy in
> metadata_ll_init_index()
> 314c25c dm space map metadata: fix 'struct sm_metadata' leak on failed create
> 58fc4fe Documentation: dm raid: define data_offset status field
> 11e2968 dm raid: fix discard support regression
> affa9d2 dm raid: don't allow "write behind" with raid4/5/6
> 54cd640 dm mpath: use hw_handler_params if attached hw_handler is same as
> requested
> c538f6e dm crypt: add ability to use keys from the kernel key retention
> service
> 0637018 dm array: remove a dead assignment in populate_ablock_with_values()
> 6080758 dm ioctl: use offsetof() instead of open-coding it
> b23df0d dm rq: simplify use_blk_mq initialization
> 41c73a4 dm bufio: drop the lock when doing GFP_NOIO allocation
> d12067f dm bufio: don't take the lock in dm_bufio_shrink_count
> 9ea61ca dm bufio: avoid sleeping while holding the dm_bufio lock
> 5b8c01f dm table: simplify dm_table_determine_type()
> 301fc3f dm table: an 'all_blk_mq' table must be loaded for a blk-mq DM device
> 6936c12 dm table: fix 'all_blk_mq' inconsistency when an empty table is
> loaded
> 
> I have time next week so will try narrow it down via bisect by commit
> 
> Thanks
> Laurence
> 

Started the bisect with > 6000 revisions (about 13 bisect cycles) between v4.9-rc8 and master.
bisect good for v4.9-rc8, bisect bad for master HEAD

Down to around 700 now, but it's late; going to bed now, so I will finish tomorrow.
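Since the failure is easy to detect (1MB vs 4MB request size), the good/bad cycle can in principle be automated with git bisect run, which bisects unattended given a script that exits 0 for good and 1 for bad. A self-contained toy sketch of the mechanism (the repo contents and commit messages are made up; the real per-step test would boot each kernel and check the dd I/O size, which cannot be scripted this simply):

```shell
#!/bin/sh
# Toy demo of `git bisect run`: build a small history in which one commit
# "caps" the I/O size, then let bisect find that commit automatically.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 4096 > max_io_kb
git add max_io_kb
git commit -q -m "good: 4MB I/O"
for i in 1 2 3; do echo $i > f$i; git add f$i; git commit -q -m "unrelated $i"; done
echo 1024 > max_io_kb
git commit -q -am "bad: cap I/O at 1MB"      # the "regression"
for i in 4 5; do echo $i > f$i; git add f$i; git commit -q -m "unrelated $i"; done

# bad = HEAD, good = root commit
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" >/dev/null
# Per-step test: exit 0 (good) while the 4MB limit is intact, 1 (bad) otherwise.
out=$(git bisect run sh -c 'test "$(cat max_io_kb)" = 4096')
first_bad=$(echo "$out" | sed -n 's/^\(.*\) is the first bad commit$/\1/p')
bad_subject=$(git show -s --format=%s "$first_bad")
echo "first bad commit: $bad_subject"
```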

Thanks


* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
       [not found]                                     ` <1928955380.8220327.1482302041315.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-12-21  8:08                                       ` Bart Van Assche
  2016-12-22  2:10                                       ` Laurence Oberman
  1 sibling, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2016-12-21  8:08 UTC (permalink / raw)
  To: loberman-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA


On Wed, 2016-12-21 at 01:34 -0500, Laurence Oberman wrote:
> Started with bisect at > 6000 revisions and 13 cycles between v4.9-rc8 and master.
> bisect good for v4.9-rc8, bisect bad for master HEAD

Hello Laurence,

The number of block layer changes that went into what will become 4.10-rc1 is
huge. Running a bisect seems like a good idea to me.

Bart.


* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
       [not found]                                     ` <1928955380.8220327.1482302041315.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2016-12-21  8:08                                       ` Bart Van Assche
@ 2016-12-22  2:10                                       ` Laurence Oberman
       [not found]                                         ` <1337539588.8422488.1482372617094.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  1 sibling, 1 reply; 36+ messages in thread
From: Laurence Oberman @ 2016-12-22  2:10 UTC (permalink / raw)
  To: Bart Van Assche, Christoph Hellwig
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, linux-scsi-u79uwXL29TY76Z2rM5mHXA



----- Original Message -----
> From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Sent: Wednesday, December 21, 2016 1:34:01 AM
> Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> 
> 
> 
> ----- Original Message -----
> > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > Sent: Tuesday, December 20, 2016 10:31:34 PM
> > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > 
> > 
> > 
> > ----- Original Message -----
> > > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > > Sent: Tuesday, December 20, 2016 3:44:42 PM
> > > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > > 
> > > 
> > > 
> > > ----- Original Message -----
> > > > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > > > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > > > Sent: Tuesday, December 20, 2016 2:43:48 PM
> > > > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > > > 
> > > > 
> > > > 
> > > > ----- Original Message -----
> > > > > From: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > > > To: "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> > > > > Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> > > > > Sent: Tuesday, December 20, 2016 2:33:26 PM
> > > > > Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
> > > > > 
> > > > > Hello Bart
> > > > > 
> > > > > I pulled the latest linux-next and built kernels for both server and
> > > > > client
> > > > > to rerun all my EDR tests for srp.
> > > > > 
> > > > > For some reason the I/O size is being capped again to 1MB in my
> > > > > testing.
> > > > > Using my same testbed.
> > > > > Remember we spent a lot of time making sure we could do 4MB I/O :)
> > > > > 
> > > > > Its working fine in the RHEL 7.3 kernel so before I start going back
> > > > > testing
> > > > > upstream kernels decided to ask.
> > > > > 
> > > > > Have you tested large I/O with latest linux-next
> > > > > 
> > > > > Server Configuration
> > > > > ---------------------
> > > > > Linux fedstorage.bos.redhat.com 4.9.0+
> > > > > 
> > > > > [root@fedstorage modprobe.d]# cat ib_srp.conf
> > > > > options ib_srp cmd_sg_entries=64 indirect_sg_entries=2048
> > > > > 
> > > > > [root@fedstorage modprobe.d]# cat ib_srpt.conf
> > > > > options ib_srpt srp_max_req_size=8296
> > > > > 
> > > > > Also Using
> > > > > 
> > > > > # Set the srp_sq_size
> > > > > for i in
> > > > > /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4e
> > > > > /sys/kernel/config/target/srpt/0xfe800000000000007cfe900300726e4f
> > > > > do
> > > > > 	echo 16384 > $i/tpgt_1/attrib/srp_sq_size
> > > > > done
> > > > > 
> > > > > Client Configuration
> > > > > --------------------
> > > > > Linux ibclient 4.9.0+
> > > > > 
> > > > > [root@ibclient modprobe.d]# cat ib_srp.conf
> > > > > options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048
> > > > > 
> > > > > dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > > > > 
> > > > > ### RECORD    4 >>> ibclient <<< (1482261733.001) (Tue Dec 20
> > > > > 14:22:13
> > > > > 2016)
> > > > > ###
> > > > > # DISK STATISTICS (/sec)
> > > > > #
> > > > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > > > Pct
> > > > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged
> > > > > IOs
> > > > > Size
> > > > > Wait  RWSize  QLen  Wait SvcTim Util
> > > > > 14:22:13 sdw         1373184      0 1341 1024     2       0      0
> > > > > 0
> > > > > 0
> > > > > 0    1024     3     2      0   97
> > > > > 
> > > > > 
> > > > > If I reboot into my 7.3 kernel its back to what I expect
> > > > > 
> > > > >  dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > > > > 
> > > > > 
> > > > > ### RECORD    3 >>> ibclient <<< (1482262254.001) (Tue Dec 20
> > > > > 14:30:54
> > > > > 2016)
> > > > > ###
> > > > > # DISK STATISTICS (/sec)
> > > > > #
> > > > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > > > Pct
> > > > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged
> > > > > IOs
> > > > > Size
> > > > > Wait  RWSize  QLen  Wait SvcTim Util
> > > > > 14:30:54 sdw         172032    129   42 4096     3       0      0
> > > > > 0
> > > > > 0
> > > > > 0    4096     1     3      3   130
> > > > > 
> > > > > 
> > > > 
> > > > Hi Bart,
> > > > 
> > > > Just FYI
> > > > 
> > > > That dd snap was just as I had stopped the dd.
> > > > 
> > > > Here is a stable dd snap with the RHEL kernel, I just noticed the
> > > > merging,
> > > > need to reboot back into upstream to compare again.
> > > > No merging seen in upstream.
> > > > 
> > > > Thanks
> > > > Laurence
> > > > 
> > > > ### RECORD    6 >>> ibclient <<< (1482262723.001) (Tue Dec 20 14:38:43
> > > > 2016)
> > > > ###
> > > > # DISK STATISTICS (/sec)
> > > > #
> > > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > > Pct
> > > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs
> > > > Size
> > > > Wait  RWSize  QLen  Wait SvcTim Util
> > > > 14:38:43 sdw         1200128    879  293 4096     3       0      0    0
> > > > 0
> > > > 0    4096     1     3      3   95
> > > > 
> > > 
> > > Replying to my own message to keep the thread going
> > > 
> > > 
> > > This is Linux ibclient 4.8.0-rc4
> > > 
> > > Behaves like I expected, and I see the 4MB I/O sizes. as I had already
> > > tested
> > > this.
> > > 
> > > dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct
> > > 
> > > 
> > > ### RECORD    6 >>> ibclient <<< (1482266543.001) (Tue Dec 20 15:42:23
> > > 2016)
> > > ###
> > > # DISK STATISTICS (/sec)
> > > #
> > > <---------reads---------------><---------writes--------------><--------averages-------->
> > > Pct
> > > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs
> > > Size
> > > Wait  RWSize  QLen  Wait SvcTim Util
> > > 15:42:23 sdw         278528    201   68 4096     2       0      0    0
> > > 0
> > > 0    4096     1     2      2   206
> > > 
> > > Rebooting back into 4.9 and will bisect leading to it once I confirm
> > > 
> > > 
> > 
> > So this is where I got to here:
> > 
> > These all worked and I get the 4MB I/O size with direct I/O
> > 
> > v4.8-rc4
> > v4.8-rc5
> > v4.8-rc6
> > v4.8-rc7
> > v4.8-rc8
> > v4.9
> > v4.9-rc1
> > v4.9-rc2
> > v4.9-rc3
> > v4.9-rc4
> > v4.9-rc5
> > v4.9-rc6
> > v4.9-rc7
> > v4.9-rc8
> > 
> > Then I did git checkout master and built the final test kernel:
> > 4.9.0+
> > 
> > This one fails
> > 
> > # DISK STATISTICS (/sec)
> > #                   <---------reads---------------><---------writes--------------><--------averages--------> Pct
> > #Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size  Wait  RWSize  QLen  Wait SvcTim Util
> > 22:12:48 sdw         1413120      0 1380 1024     2       0      0    0    0     0    1024     3     2      0   99
> > 22:12:49 sdw         1409024      0 1376 1024     2       0      0    0    0     0    1024     3     2      0   98
> > 22:12:50 sdw         1445888      0 1412 1024     2       0      0    0    0     0    1024     3     2      0   98
> > 22:12:51 sdw         1429504      0 1396 1024     2       0      0    0    0     0    1024     3     2      0   98
> > 22:12:52 sdw         1426432      0 1393 1024     2       0      0    0    0     0    1024     3     2      0   98
> > 22:12:53 sdw         1408000      0 1375 1024     2       0      0    0    0     0    1024     3     2      0   98
> >                                 ***      ****                                         ****
> > 
> > The last commit for 4.9-rc8 was 3e5de27
> > 
> > Between this and the master checkout HEAD there are 2313 lines of commits.
> > This was the giant merge.
> > 
> > Something broke here.
> > I guess I will have to bisect the hard way, unless somebody recognizes
> > which of the commits broke it.
> > 
> > Commits for SRP; none of these look like candidates to break the I/O
> > size and merging:
> > 
> > 9032ad7 Merge branches 'misc', 'qedr', 'reject-helpers', 'rxe' and 'srp' into merge-test
> > 4fa354c IB/srp: Make writing the add_target sysfs attr interruptible
> > 290081b IB/srp: Make mapping failures easier to debug
> > 3787d99 IB/srp: Make login failures easier to debug
> > 042dd76 IB/srp: Introduce a local variable in srp_add_one()
> > 1a1faf7 IB/srp: Fix CONFIG_DYNAMIC_DEBUG=n build
> > 0d38c24 IB/srpt: Report login failures only once
> > 
> > dm related; however, I am testing against an sd device here, not a dm device
> > 
> > ef548c5 dm flakey: introduce "error_writes" feature
> > e99dda8f dm cache policy smq: use hash_32() instead of hash_32_generic()
> > 027c431 dm crypt: reject key strings containing whitespace chars
> > b446396 dm space map: always set ev if sm_ll_mutate() succeeds
> > 0c79ce0 dm space map metadata: skip useless memcpy in metadata_ll_init_index()
> > 314c25c dm space map metadata: fix 'struct sm_metadata' leak on failed create
> > 58fc4fe Documentation: dm raid: define data_offset status field
> > 11e2968 dm raid: fix discard support regression
> > affa9d2 dm raid: don't allow "write behind" with raid4/5/6
> > 54cd640 dm mpath: use hw_handler_params if attached hw_handler is same as requested
> > c538f6e dm crypt: add ability to use keys from the kernel key retention service
> > 0637018 dm array: remove a dead assignment in populate_ablock_with_values()
> > 6080758 dm ioctl: use offsetof() instead of open-coding it
> > b23df0d dm rq: simplify use_blk_mq initialization
> > 41c73a4 dm bufio: drop the lock when doing GFP_NOIO allocation
> > d12067f dm bufio: don't take the lock in dm_bufio_shrink_count
> > 9ea61ca dm bufio: avoid sleeping while holding the dm_bufio lock
> > 5b8c01f dm table: simplify dm_table_determine_type()
> > 301fc3f dm table: an 'all_blk_mq' table must be loaded for a blk-mq DM device
> > 6936c12 dm table: fix 'all_blk_mq' inconsistency when an empty table is loaded
> > 
> > I have time next week, so I will try to narrow it down via bisect by commit.
> > 
> > Thanks
> > Laurence
> > 
> 
> Started the bisect with > 6000 revisions and 13 cycles between v4.9-rc8
> and master: bisect good for v4.9-rc8, bisect bad for master HEAD.
> 
> Down to around 700 now, but it's late; going to bed, so I will finish
> tomorrow.
> 
> Thanks
> 

Hi Bart, Christoph

After multiple bisects (6000 revisions, 13 cycles), I got to this one.
Of course there is a huge number of block layer changes in rc10, as we know.

[loberman@ibclient linux-next.orig]$ git bisect bad
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[542ff7bf18c63cf403e36a4a1c71d86dc120d924] block: new direct I/O implementation

This commit is the one that seems to have changed the behavior.
The max I/O size is restricted to 1MB even when 4MB I/O is requested, and no merging is seen.

This is not going to affect only SRP targets.

I will review the code and will be happy to test any patches.
I will leave the test bed in place.


commit 542ff7bf18c63cf403e36a4a1c71d86dc120d924
Author: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
Date:   Wed Nov 16 23:14:22 2016 -0700

    block: new direct I/O implementation
    
    Similar to the simple fast path, but we now need a dio structure to
    track multiple-bio completions.  It's basically a cut-down version
    of the new iomap-based direct I/O code for filesystems, but without
    all the logic to call into the filesystem for extent lookup or
    allocation, and without the complex I/O completion workqueue handler
    for AIO - instead we just use the FUA bit on the bios to ensure
    data is flushed to stable storage.
    
    Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
    Signed-off-by: Jens Axboe <axboe-b10kYP2dOMg@public.gmane.org>

diff --git a/fs/block_dev.c b/fs/block_dev.c
index a1b9abe..35cc494 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -270,11 +270,161 @@ static void blkdev_bio_end_io_simple(struct bio *bio)
        return ret;
 }
 
+struct blkdev_dio {
+       union {
+               struct kiocb            *iocb;
+               struct task_struct      *waiter;
+       };
+       size_t                  size;
+       atomic_t                ref;
+       bool                    multi_bio : 1;
+       bool                    should_dirty : 1;
+       bool                    is_sync : 1;
+       struct bio              bio;
+};
+
+static struct bio_set *blkdev_dio_pool __read_mostly;
+
+static void blkdev_bio_end_io(struct bio *bio)
+{
+       struct blkdev_dio *dio = bio->bi_private;
+       bool should_dirty = dio->should_dirty;
+
+       if (dio->multi_bio && !atomic_dec_and_test(&dio->ref)) {
+               if (bio->bi_error && !dio->bio.bi_error)
+                       dio->bio.bi_error = bio->bi_error;
+       } else {
+               if (!dio->is_sync) {
+                       struct kiocb *iocb = dio->iocb;
+                       ssize_t ret = dio->bio.bi_error;
+
+                       if (likely(!ret)) {
+                               ret = dio->size;
+                               iocb->ki_pos += ret;
+                       }
+
+                       dio->iocb->ki_complete(iocb, ret, 0);
+                       bio_put(&dio->bio);
+               } else {
+                       struct task_struct *waiter = dio->waiter;
+
+                       WRITE_ONCE(dio->waiter, NULL);
+                       wake_up_process(waiter);
+               }
+       }
+
+       if (should_dirty) {
+               bio_check_pages_dirty(bio);
+       } else {
+               struct bio_vec *bvec;
+               int i;
+
+               bio_for_each_segment_all(bvec, bio, i)
+                       put_page(bvec->bv_page);
+               bio_put(bio);
+       }
+}
+
 static ssize_t
-blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+__blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 {
        struct file *file = iocb->ki_filp;
        struct inode *inode = bdev_file_inode(file);
+       struct block_device *bdev = I_BDEV(inode);
+       unsigned blkbits = blksize_bits(bdev_logical_block_size(bdev));
+       struct blkdev_dio *dio;
+       struct bio *bio;
+       bool is_read = (iov_iter_rw(iter) == READ);
+       loff_t pos = iocb->ki_pos;
+       blk_qc_t qc = BLK_QC_T_NONE;
+       int ret;
+
+       if ((pos | iov_iter_alignment(iter)) & ((1 << blkbits) - 1))
+               return -EINVAL;
+
+       bio = bio_alloc_bioset(GFP_KERNEL, nr_pages, blkdev_dio_pool);
+       bio_get(bio); /* extra ref for the completion handler */
+
+       dio = container_of(bio, struct blkdev_dio, bio);
+       dio->is_sync = is_sync_kiocb(iocb);
+       if (dio->is_sync)
+               dio->waiter = current;
+       else
+               dio->iocb = iocb;
+
+       dio->size = 0;
+       dio->multi_bio = false;
+       dio->should_dirty = is_read && (iter->type == ITER_IOVEC);
+
+       for (;;) {
+               bio->bi_bdev = bdev;
+               bio->bi_iter.bi_sector = pos >> blkbits;
+               bio->bi_private = dio;
+               bio->bi_end_io = blkdev_bio_end_io;
+
+               ret = bio_iov_iter_get_pages(bio, iter);
+               if (unlikely(ret)) {
+                       bio->bi_error = ret;
+                       bio_endio(bio);
+                       break;
+               }
+
+               if (is_read) {
+                       bio->bi_opf = REQ_OP_READ;
+                       if (dio->should_dirty)
+                               bio_set_pages_dirty(bio);
+               } else {
+                       bio->bi_opf = dio_bio_write_op(iocb);
+                       task_io_account_write(bio->bi_iter.bi_size);
+               }
+
+               dio->size += bio->bi_iter.bi_size;
+               pos += bio->bi_iter.bi_size;
+
+               nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
+               if (!nr_pages) {
+                       qc = submit_bio(bio);
+                       break;
+               }
+
+               if (!dio->multi_bio) {
+                       dio->multi_bio = true;
+                       atomic_set(&dio->ref, 2);
+               } else {
+                       atomic_inc(&dio->ref);
+               }
+
+               submit_bio(bio);
+               bio = bio_alloc(GFP_KERNEL, nr_pages);
+       }
+
+       if (!dio->is_sync)
+               return -EIOCBQUEUED;
+
+       for (;;) {
+               set_current_state(TASK_UNINTERRUPTIBLE);
+               if (!READ_ONCE(dio->waiter))
+                       break;
+
+               if (!(iocb->ki_flags & IOCB_HIPRI) ||
+                   !blk_mq_poll(bdev_get_queue(bdev), qc))
+                       io_schedule();
+       }
+       __set_current_state(TASK_RUNNING);
+
+       ret = dio->bio.bi_error;
+       if (likely(!ret)) {
+               ret = dio->size;
+               iocb->ki_pos += ret;
+       }
+
+       bio_put(&dio->bio);
+       return ret;
+}
+
+static ssize_t
+blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+{
        int nr_pages;
 
        nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES + 1);
@@ -282,10 +432,18 @@ static void blkdev_bio_end_io_simple(struct bio *bio)
                return 0;
        if (is_sync_kiocb(iocb) && nr_pages <= BIO_MAX_PAGES)
                return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
-       return __blockdev_direct_IO(iocb, inode, I_BDEV(inode), iter,
-                                   blkdev_get_block, NULL, NULL,
-                                   DIO_SKIP_DIO_COUNT);
+
+       return __blkdev_direct_IO(iocb, iter, min(nr_pages, BIO_MAX_PAGES));
+}
+
+static __init int blkdev_init(void)
+{
+       blkdev_dio_pool = bioset_create(4, offsetof(struct blkdev_dio, bio));
+       if (!blkdev_dio_pool)
+               return -ENOMEM;
+       return 0;
 }
+module_init(blkdev_init);
 
 int __sync_blockdev(struct block_device *bdev, int wait)
 {

Thanks
Laurence

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt
       [not found]                                         ` <1337539588.8422488.1482372617094.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-12-22  6:23                                           ` Christoph Hellwig
       [not found]                                             ` <20161222062321.GA30326-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2016-12-22  6:23 UTC (permalink / raw)
  To: Laurence Oberman
  Cc: Bart Van Assche, Christoph Hellwig,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-scsi-u79uwXL29TY76Z2rM5mHXA

I just got CC'ed to a massive chain of full quotes.  If you want an
answer from me please write a readable mail, thanks.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt sees I/O capped at 1MB and no merging
       [not found]                                             ` <20161222062321.GA30326-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
@ 2016-12-22 13:17                                               ` Laurence Oberman
       [not found]                                                 ` <1661819060.8462293.1482412678092.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Laurence Oberman @ 2016-12-22 13:17 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Bart Van Assche, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-scsi-u79uwXL29TY76Z2rM5mHXA


Hello Christoph, apologies, here is a clear summary of the issue.

During testing of the latest linux-next with rc-10 block layer changes I noticed that I/O was being capped at 1MB size and no merging was seen.

The issue was not apparent on v4.9-rc8 or earlier.

dd if=/dev/sdw bs=4096k of=/dev/null iflag=direct

### RECORD    6 >>> ibclient <<< (1482266543.001) (Tue Dec 20 15:42:23 2016) ###
# DISK STATISTICS (/sec)
#                   <---------reads---------------><---------writes--------------><--------averages--------> Pct
#Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size  Wait  RWSize  QLen  Wait SvcTim Util
15:42:23 sdw         278528    201   68 4096     2       0      0    0    0     0    4096     1     2      2   206

Then I did git checkout master and built the final test kernel:
4.9.0+

This one clearly shows the I/O capped at 1MB and no merging.

# DISK STATISTICS (/sec)
#                   <---------reads---------------><---------writes--------------><--------averages--------> Pct
#Time     Name       KBytes Merged  IOs Size  Wait  KBytes Merged  IOs Size  Wait  RWSize  QLen  Wait SvcTim Util
22:12:48 sdw         1413120      0 1380 1024     2       0      0    0    0     0    1024     3     2      0   99
22:12:49 sdw         1409024      0 1376 1024     2       0      0    0    0     0    1024     3     2      0   98
22:12:50 sdw         1445888      0 1412 1024     2       0      0    0    0     0    1024     3     2      0   98
22:12:51 sdw         1429504      0 1396 1024     2       0      0    0    0     0    1024     3     2      0   98
22:12:52 sdw         1426432      0 1393 1024     2       0      0    0    0     0    1024     3     2      0   98
22:12:53 sdw         1408000      0 1375 1024     2       0      0    0    0     0    1024     3     2      0   98
                                ***      ****                                         ****

After multiple bisects (6000 revisions, 13 cycles), I got to this one.
Of course there are a huge amount of block layer changes as we know in rc10.

[loberman@ibclient linux-next.orig]$ git bisect bad
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[542ff7bf18c63cf403e36a4a1c71d86dc120d924] block: new direct I/O implementation

This commit is the one that seems to have changed the behavior.
The max I/O size is restricted to 1MB even when 4MB I/O is requested, and no merging is seen.

This is not going to affect only SRP targets.

I will be happy to test any patches and the test bed is always in place.


commit 542ff7bf18c63cf403e36a4a1c71d86dc120d924
Author: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
Date:   Wed Nov 16 23:14:22 2016 -0700

    block: new direct I/O implementation
    
    Similar to the simple fast path, but we now need a dio structure to
    track multiple-bio completions.  It's basically a cut-down version
    of the new iomap-based direct I/O code for filesystems, but without
    all the logic to call into the filesystem for extent lookup or
    allocation, and without the complex I/O completion workqueue handler
    for AIO - instead we just use the FUA bit on the bios to ensure
    data is flushed to stable storage.
    
    Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
    Signed-off-by: Jens Axboe <axboe-b10kYP2dOMg@public.gmane.org>

Many Thanks
Laurence

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt sees I/O capped at 1MB and no merging
       [not found]                                                 ` <1661819060.8462293.1482412678092.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-12-22 15:40                                                   ` Christoph Hellwig
  2016-12-22 19:54                                                     ` block: add back plugging in __blkdev_direct_IO kbuild test robot
       [not found]                                                     ` <20161222154049.GA4638-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
  0 siblings, 2 replies; 36+ messages in thread
From: Christoph Hellwig @ 2016-12-22 15:40 UTC (permalink / raw)
  To: Laurence Oberman
  Cc: Christoph Hellwig, Bart Van Assche,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-scsi-u79uwXL29TY76Z2rM5mHXA

Hi Laurence,

please try the patch below:

---
>From 69febe1cfb55844862f768447432249781001f9c Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
Date: Thu, 22 Dec 2016 16:38:29 +0100
Subject: block: add back plugging in __blkdev_direct_IO

This allows sending larger than 1 MB requests to devices that support
large I/O sizes.

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
Reported-by: Laurence Oberman <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 fs/block_dev.c | 3 +++
 fs/iomap.c     | 1 -
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 7c45072..206a92a 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -328,6 +328,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 	struct file *file = iocb->ki_filp;
 	struct inode *inode = bdev_file_inode(file);
 	struct block_device *bdev = I_BDEV(inode);
+	struct blk_plug plug;
 	struct blkdev_dio *dio;
 	struct bio *bio;
 	bool is_read = (iov_iter_rw(iter) == READ);
@@ -353,6 +354,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 	dio->multi_bio = false;
 	dio->should_dirty = is_read && (iter->type == ITER_IOVEC);
 
+	blk_start_plug(&plug);
 	for (;;) {
 		bio->bi_bdev = bdev;
 		bio->bi_iter.bi_sector = pos >> 9;
@@ -394,6 +396,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 		submit_bio(bio);
 		bio = bio_alloc(GFP_KERNEL, nr_pages);
 	}
+	blk_finish_plug(&plug);
 
 	if (!dio->is_sync)
 		return -EIOCBQUEUED;
diff --git a/fs/iomap.c b/fs/iomap.c
index 354a123..3adf1e1 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -844,7 +844,6 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter, struct iomap_ops *ops,
 	size_t count = iov_iter_count(iter);
 	loff_t pos = iocb->ki_pos, end = iocb->ki_pos + count - 1, ret = 0;
 	unsigned int flags = IOMAP_DIRECT;
-	struct blk_plug plug;
 	struct iomap_dio *dio;
 
 	lockdep_assert_held(&inode->i_rwsem);
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: Testing latest linux-next 4.9 ib_srp and ib_srpt sees I/O capped at 1MB and no merging
       [not found]                                                     ` <20161222154049.GA4638-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
@ 2016-12-22 17:59                                                       ` Laurence Oberman
  2016-12-22 19:55                                                       ` block: add back plugging in __blkdev_direct_IO kbuild test robot
  1 sibling, 0 replies; 36+ messages in thread
From: Laurence Oberman @ 2016-12-22 17:59 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Bart Van Assche, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-scsi-u79uwXL29TY76Z2rM5mHXA


----- Original Message -----
> From: "Christoph Hellwig" <hch-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
> To: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: "Christoph Hellwig" <hch-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>, "Bart Van Assche" <Bart.VanAssche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>,
> linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-scsi-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Sent: Thursday, December 22, 2016 10:40:49 AM
> Subject: Re: Testing latest linux-next 4.9 ib_srp and ib_srpt sees I/O capped at 1MB and no merging
> 
> Hi Laurence,
> 
> please try the patch below:
> 
> ---
> From 69febe1cfb55844862f768447432249781001f9c Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
> Date: Thu, 22 Dec 2016 16:38:29 +0100
> Subject: block: add back plugging in __blkdev_direct_IO
> 
> This allows sending larger than 1 MB requests to devices that support
> large I/O sizes.
> 
> Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
> Reported-by: Laurence Oberman <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> ---
>  fs/block_dev.c | 3 +++
>  fs/iomap.c     | 1 -
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index 7c45072..206a92a 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -328,6 +328,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter
> *iter, int nr_pages)
>  	struct file *file = iocb->ki_filp;
>  	struct inode *inode = bdev_file_inode(file);
>  	struct block_device *bdev = I_BDEV(inode);
> +	struct blk_plug plug;
>  	struct blkdev_dio *dio;
>  	struct bio *bio;
>  	bool is_read = (iov_iter_rw(iter) == READ);
> @@ -353,6 +354,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter
> *iter, int nr_pages)
>  	dio->multi_bio = false;
>  	dio->should_dirty = is_read && (iter->type == ITER_IOVEC);
>  
> +	blk_start_plug(&plug);
>  	for (;;) {
>  		bio->bi_bdev = bdev;
>  		bio->bi_iter.bi_sector = pos >> 9;
> @@ -394,6 +396,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter
> *iter, int nr_pages)
>  		submit_bio(bio);
>  		bio = bio_alloc(GFP_KERNEL, nr_pages);
>  	}
> +	blk_finish_plug(&plug);
>  
>  	if (!dio->is_sync)
>  		return -EIOCBQUEUED;
> diff --git a/fs/iomap.c b/fs/iomap.c
> index 354a123..3adf1e1 100644
> --- a/fs/iomap.c
> +++ b/fs/iomap.c
> @@ -844,7 +844,6 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
> struct iomap_ops *ops,
>  	size_t count = iov_iter_count(iter);
>  	loff_t pos = iocb->ki_pos, end = iocb->ki_pos + count - 1, ret = 0;
>  	unsigned int flags = IOMAP_DIRECT;
> -	struct blk_plug plug;
>  	struct iomap_dio *dio;
>  
>  	lockdep_assert_held(&inode->i_rwsem);
> --
> 2.1.4
> 
> 

Hello Christoph

The patch works and I now see 4MB I/O

# DISK STATISTICS (/sec)
#                   <---------reads---------><---------writes---------><--------averages--------> Pct
#Time     Name       KBytes Merged  IOs Size  KBytes Merged  IOs Size  RWSize  QLen  Wait SvcTim Util
11:53:58 sdah        143360    105   35 4096       0      0    0    0    4096     1    28     28   99
11:53:59 sdah        139264    102   34 4096       0      0    0    0    4096     1    29     29   99
11:54:00 sdah        143360    105   35 4096       0      0    0    0    4096     1    28     28   99

I think you forgot to remove the calls to blk_start_plug and blk_finish_plug in fs/iomap.c in your patch.
I took them out and built the test kernel that way.

Let me know if you will just remove those in the final version, or if you want a patch.


Thanks for the super quick response

Regards
Laurence

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: block: add back plugging in __blkdev_direct_IO
  2016-12-22 15:40                                                   ` Christoph Hellwig
@ 2016-12-22 19:54                                                     ` kbuild test robot
       [not found]                                                     ` <20161222154049.GA4638-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
  1 sibling, 0 replies; 36+ messages in thread
From: kbuild test robot @ 2016-12-22 19:54 UTC (permalink / raw)
  Cc: kbuild-all, Laurence Oberman, Christoph Hellwig, Bart Van Assche,
	linux-rdma, linux-scsi

[-- Attachment #1: Type: text/plain, Size: 1878 bytes --]

Hi Christoph,

[auto build test ERROR on linus/master]
[also build test ERROR on next-20161222]
[cannot apply to v4.9]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Christoph-Hellwig/block-add-back-plugging-in-__blkdev_direct_IO/20161223-002453
config: x86_64-rhel (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

   fs/iomap.c: In function 'iomap_dio_rw':
>> fs/iomap.c:897:18: error: 'plug' undeclared (first use in this function)
     blk_start_plug(&plug);
                     ^~~~
   fs/iomap.c:897:18: note: each undeclared identifier is reported only once for each function it appears in

vim +/plug +897 fs/iomap.c

ff6a9292 Christoph Hellwig 2016-11-30  891  		WARN_ON_ONCE(ret);
ff6a9292 Christoph Hellwig 2016-11-30  892  		ret = 0;
ff6a9292 Christoph Hellwig 2016-11-30  893  	}
ff6a9292 Christoph Hellwig 2016-11-30  894  
ff6a9292 Christoph Hellwig 2016-11-30  895  	inode_dio_begin(inode);
ff6a9292 Christoph Hellwig 2016-11-30  896  
ff6a9292 Christoph Hellwig 2016-11-30 @897  	blk_start_plug(&plug);
ff6a9292 Christoph Hellwig 2016-11-30  898  	do {
ff6a9292 Christoph Hellwig 2016-11-30  899  		ret = iomap_apply(inode, pos, count, flags, ops, dio,
ff6a9292 Christoph Hellwig 2016-11-30  900  				iomap_dio_actor);

:::::: The code at line 897 was first introduced by commit
:::::: ff6a9292e6f633d596826be5ba70d3ef90cc3300 iomap: implement direct I/O

:::::: TO: Christoph Hellwig <hch@lst.de>
:::::: CC: Dave Chinner <david@fromorbit.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 38270 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: block: add back plugging in __blkdev_direct_IO
       [not found]                                                     ` <20161222154049.GA4638-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
  2016-12-22 17:59                                                       ` Testing latest linux-next 4.9 ib_srp and ib_srpt sees I/O capped at 1MB and no merging Laurence Oberman
@ 2016-12-22 19:55                                                       ` kbuild test robot
  1 sibling, 0 replies; 36+ messages in thread
From: kbuild test robot @ 2016-12-22 19:55 UTC (permalink / raw)
  Cc: kbuild-all-JC7UmRfGjtg, Laurence Oberman, Christoph Hellwig,
	Bart Van Assche, linux-rdma-u79uwXL29TY76Z2rM5mHXA,
	linux-scsi-u79uwXL29TY76Z2rM5mHXA

[-- Attachment #1: Type: text/plain, Size: 1925 bytes --]

Hi Christoph,

[auto build test ERROR on linus/master]
[also build test ERROR on next-20161222]
[cannot apply to v4.9]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Christoph-Hellwig/block-add-back-plugging-in-__blkdev_direct_IO/20161223-002453
config: x86_64-lkp (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

   fs/iomap.c: In function 'iomap_dio_rw':
>> fs/iomap.c:897:18: error: 'plug' undeclared (first use in this function)
     blk_start_plug(&plug);
                     ^~~~
   fs/iomap.c:897:18: note: each undeclared identifier is reported only once for each function it appears in

vim +/plug +897 fs/iomap.c

ff6a9292 Christoph Hellwig 2016-11-30  891  		WARN_ON_ONCE(ret);
ff6a9292 Christoph Hellwig 2016-11-30  892  		ret = 0;
ff6a9292 Christoph Hellwig 2016-11-30  893  	}
ff6a9292 Christoph Hellwig 2016-11-30  894  
ff6a9292 Christoph Hellwig 2016-11-30  895  	inode_dio_begin(inode);
ff6a9292 Christoph Hellwig 2016-11-30  896  
ff6a9292 Christoph Hellwig 2016-11-30 @897  	blk_start_plug(&plug);
ff6a9292 Christoph Hellwig 2016-11-30  898  	do {
ff6a9292 Christoph Hellwig 2016-11-30  899  		ret = iomap_apply(inode, pos, count, flags, ops, dio,
ff6a9292 Christoph Hellwig 2016-11-30  900  				iomap_dio_actor);

:::::: The code at line 897 was first introduced by commit
:::::: ff6a9292e6f633d596826be5ba70d3ef90cc3300 iomap: implement direct I/O

:::::: TO: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
:::::: CC: Dave Chinner <david-FqsqvQoI3Ljby3iVrkZq2A@public.gmane.org>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 24656 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2016-12-22 19:55 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-08  1:10 [PATCH, RFC 0/5] IB: Optimize DMA mapping Bart Van Assche
     [not found] ` <07c07529-4636-fafb-2598-7358d8a1460d-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
2016-12-08  1:10   ` [PATCH 1/5] treewide: constify most struct dma_map_ops Bart Van Assche
     [not found]     ` <f6b70724-772c-c17f-f1be-1681fab31228-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
2016-12-09 18:20       ` Christoph Hellwig
2016-12-08  1:10   ` [PATCH 2/5] misc: vop: Remove a cast Bart Van Assche
     [not found]     ` <6fff2450-6442-4539-47ff-67f04a593c06-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
2016-12-09 18:20       ` Christoph Hellwig
2016-12-08  1:11   ` [PATCH 3/5] Move dma_ops from archdata into struct device Bart Van Assche
2016-12-09 18:22     ` Christoph Hellwig
2016-12-09 19:13       ` David Woodhouse
2016-12-09 19:46         ` Bart Van Assche
2016-12-08  1:11   ` [PATCH 4/5] IB: Switch from struct ib_dma_mapping_ops to struct dma_mapping_ops Bart Van Assche
     [not found]     ` <25d066c2-59d7-2be7-dd56-e29e99b43620-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
2016-12-09 18:23       ` Christoph Hellwig
2016-12-09 18:24       ` Christoph Hellwig
     [not found]         ` <20161209182429.GF16622-jcswGhMUV9g@public.gmane.org>
2016-12-19 16:42           ` Dennis Dalessandro
     [not found]             ` <52e8398f-a146-721c-3b92-0892b4abbff8-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
2016-12-19 16:55               ` Bart Van Assche
     [not found]                 ` <1482166487.25336.10.camel-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
2016-12-20 19:33                   ` Testing latest linux-next 4.9 ib_srp and ib_srpt Laurence Oberman
     [not found]                     ` <1918919536.8196250.1482262406057.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-12-20 19:43                       ` Laurence Oberman
     [not found]                         ` <2052479881.8196880.1482263028727.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-12-20 20:44                           ` Laurence Oberman
     [not found]                             ` <1668746735.8200653.1482266682585.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-12-20 21:09                               ` Laurence Oberman
2016-12-21  3:31                               ` Laurence Oberman
     [not found]                                 ` <120559766.8215321.1482291094018.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-12-21  6:34                                   ` Laurence Oberman
     [not found]                                     ` <1928955380.8220327.1482302041315.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-12-21  8:08                                       ` Bart Van Assche
2016-12-22  2:10                                       ` Laurence Oberman
     [not found]                                         ` <1337539588.8422488.1482372617094.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-12-22  6:23                                           ` Christoph Hellwig
     [not found]                                             ` <20161222062321.GA30326-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
2016-12-22 13:17                                               ` Testing latest linux-next 4.9 ib_srp and ib_srpt sees I/O capped at 1MB and no merging Laurence Oberman
     [not found]                                                 ` <1661819060.8462293.1482412678092.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-12-22 15:40                                                   ` Christoph Hellwig
2016-12-22 19:54                                                     ` block: add back plugging in __blkdev_direct_IO kbuild test robot
     [not found]                                                     ` <20161222154049.GA4638-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
2016-12-22 17:59                                                       ` Testing latest linux-next 4.9 ib_srp and ib_srpt sees I/O capped at 1MB and no merging Laurence Oberman
2016-12-22 19:55                                                       ` block: add back plugging in __blkdev_direct_IO kbuild test robot
2016-12-08  1:12   ` [PATCH 5/5] treewide: Inline ib_dma_map_*() functions Bart Van Assche
     [not found]     ` <9bdf696e-ec64-d60e-3d7e-7ad5b3000d60-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
2016-12-09 18:23       ` Christoph Hellwig
2016-12-08  6:48   ` [PATCH, RFC 0/5] IB: Optimize DMA mapping Or Gerlitz
     [not found]     ` <CAJ3xEMi4HY9Ehp-V4rP5UieAk=GjAu0X4uEnP-yMDomcFDpHkA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2016-12-08 16:51       ` Bart Van Assche
