* [PATCH 0/9] IB: Optimize DMA mapping
@ 2017-01-11  0:56 Bart Van Assche
  2017-01-11  0:56 ` [PATCH 3/9] dma: Add dma_virt_ops Bart Van Assche
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Bart Van Assche @ 2017-01-11  0:56 UTC (permalink / raw)
  To: Doug Ledford; +Cc: linux-rdma, linux-kernel

Hello Doug,

As you know, there are two sets of DMA mapping operations in the Linux
kernel:
- One set of DMA mapping operations that is used by most drivers.
- Another set of DMA mapping operations that is only used by the RDMA
  drivers.
Having two sets of DMA mapping operations is not only a source of
confusion but also a source of unnecessary overhead. The DMA mapping
operations are in the hot path, so it is important that their overhead
is as low as possible. Hence this patch series, which converts the
RDMA code to the standard DMA mapping API and thereby eliminates the
if (dev->dma_ops) test from the hot path. An additional benefit is
that the size of HW and SW drivers that do not use DMA is reduced by
switching to dma_virt_ops.
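
To illustrate the hot-path change: each inline ib_dma_*() helper in
include/rdma/ib_verbs.h currently looks like the "before" version
below and is reduced to the "after" version by patch 6/9 (shown here
for ib_dma_map_single(); the other helpers are analogous):

	/* Before: every call tests dev->dma_ops. */
	static inline u64 ib_dma_map_single(struct ib_device *dev,
					    void *cpu_addr, size_t size,
					    enum dma_data_direction direction)
	{
		if (dev->dma_ops)
			return dev->dma_ops->map_single(dev, cpu_addr,
							size, direction);
		return dma_map_single(dev->dma_device, cpu_addr, size,
				      direction);
	}

	/* After: an unconditional call into the standard DMA API. */
	static inline u64 ib_dma_map_single(struct ib_device *dev,
					    void *cpu_addr, size_t size,
					    enum dma_data_direction direction)
	{
		return dma_map_single(dev->dma_device, cpu_addr, size,
				      direction);
	}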

Bart Van Assche (9):
  treewide: Constify most dma_map_ops structures
  Move dma_ops from archdata into struct device
  dma: Add dma_virt_ops
  IB/hfi1: Remove DMA mapping code
  IB/qib: Remove DMA mapping code
  IB: Use dma_virt_ops instead of duplicating it
  RDS: IB: Remove an unused structure member
  IB: Convert ib_dma_*_coherent() argument type from u64 into dma_addr_t
  treewide: Inline ib_dma_map_*() functions

 arch/alpha/include/asm/dma-mapping.h               |   4 +-
 arch/alpha/kernel/pci-noop.c                       |   4 +-
 arch/alpha/kernel/pci_iommu.c                      |   4 +-
 arch/arc/include/asm/dma-mapping.h                 |   4 +-
 arch/arc/mm/dma.c                                  |   2 +-
 arch/arm/common/dmabounce.c                        |   2 +-
 arch/arm/include/asm/device.h                      |   1 -
 arch/arm/include/asm/dma-mapping.h                 |  23 +-
 arch/arm/mm/dma-mapping.c                          |  22 +-
 arch/arm/xen/mm.c                                  |   4 +-
 arch/arm64/include/asm/device.h                    |   1 -
 arch/arm64/include/asm/dma-mapping.h               |  12 +-
 arch/arm64/mm/dma-mapping.c                        |  14 +-
 arch/avr32/include/asm/dma-mapping.h               |   4 +-
 arch/avr32/mm/dma-coherent.c                       |   2 +-
 arch/blackfin/include/asm/dma-mapping.h            |   4 +-
 arch/blackfin/kernel/dma-mapping.c                 |   2 +-
 arch/c6x/include/asm/dma-mapping.h                 |   4 +-
 arch/c6x/kernel/dma.c                              |   2 +-
 arch/cris/arch-v32/drivers/pci/dma.c               |   2 +-
 arch/cris/include/asm/dma-mapping.h                |   6 +-
 arch/frv/include/asm/dma-mapping.h                 |   4 +-
 arch/frv/mb93090-mb00/pci-dma-nommu.c              |   2 +-
 arch/frv/mb93090-mb00/pci-dma.c                    |   2 +-
 arch/h8300/include/asm/dma-mapping.h               |   4 +-
 arch/h8300/kernel/dma.c                            |   2 +-
 arch/hexagon/include/asm/dma-mapping.h             |   7 +-
 arch/hexagon/kernel/dma.c                          |   4 +-
 arch/ia64/hp/common/hwsw_iommu.c                   |   4 +-
 arch/ia64/hp/common/sba_iommu.c                    |   4 +-
 arch/ia64/include/asm/dma-mapping.h                |   7 +-
 arch/ia64/include/asm/machvec.h                    |   4 +-
 arch/ia64/kernel/dma-mapping.c                     |   4 +-
 arch/ia64/kernel/pci-dma.c                         |  10 +-
 arch/ia64/kernel/pci-swiotlb.c                     |   2 +-
 arch/m32r/include/asm/device.h                     |   2 +-
 arch/m32r/include/asm/dma-mapping.h                |   4 +-
 arch/m68k/include/asm/dma-mapping.h                |   4 +-
 arch/m68k/kernel/dma.c                             |   2 +-
 arch/metag/include/asm/dma-mapping.h               |   4 +-
 arch/metag/kernel/dma.c                            |   2 +-
 arch/microblaze/include/asm/dma-mapping.h          |   4 +-
 arch/microblaze/kernel/dma.c                       |   2 +-
 arch/mips/cavium-octeon/dma-octeon.c               |   4 +-
 arch/mips/include/asm/device.h                     |   5 -
 arch/mips/include/asm/dma-mapping.h                |   9 +-
 .../include/asm/mach-cavium-octeon/dma-coherence.h |   2 +-
 arch/mips/include/asm/netlogic/common.h            |   2 +-
 arch/mips/loongson64/common/dma-swiotlb.c          |   2 +-
 arch/mips/mm/dma-default.c                         |   4 +-
 arch/mips/netlogic/common/nlm-dma.c                |   2 +-
 arch/mips/pci/pci-octeon.c                         |   2 +-
 arch/mn10300/include/asm/dma-mapping.h             |   4 +-
 arch/mn10300/mm/dma-alloc.c                        |   2 +-
 arch/nios2/include/asm/dma-mapping.h               |   4 +-
 arch/nios2/mm/dma-mapping.c                        |   2 +-
 arch/openrisc/include/asm/dma-mapping.h            |   4 +-
 arch/openrisc/kernel/dma.c                         |   2 +-
 arch/parisc/include/asm/dma-mapping.h              |   8 +-
 arch/parisc/kernel/drivers.c                       |   2 +-
 arch/parisc/kernel/pci-dma.c                       |   4 +-
 arch/powerpc/include/asm/device.h                  |   4 -
 arch/powerpc/include/asm/dma-mapping.h             |  19 +-
 arch/powerpc/include/asm/pci.h                     |   4 +-
 arch/powerpc/include/asm/ps3.h                     |   2 +-
 arch/powerpc/include/asm/swiotlb.h                 |   2 +-
 arch/powerpc/kernel/dma-swiotlb.c                  |   2 +-
 arch/powerpc/kernel/dma.c                          |   8 +-
 arch/powerpc/kernel/pci-common.c                   |   6 +-
 arch/powerpc/platforms/cell/iommu.c                |   6 +-
 arch/powerpc/platforms/pasemi/iommu.c              |   2 +-
 arch/powerpc/platforms/pasemi/setup.c              |   2 +-
 arch/powerpc/platforms/powernv/npu-dma.c           |   2 +-
 arch/powerpc/platforms/ps3/system-bus.c            |   8 +-
 arch/powerpc/platforms/pseries/ibmebus.c           |   4 +-
 arch/powerpc/platforms/pseries/vio.c               |   2 +-
 arch/s390/include/asm/device.h                     |   1 -
 arch/s390/include/asm/dma-mapping.h                |   6 +-
 arch/s390/pci/pci.c                                |   2 +-
 arch/s390/pci/pci_dma.c                            |   2 +-
 arch/sh/include/asm/dma-mapping.h                  |   4 +-
 arch/sh/kernel/dma-nommu.c                         |   2 +-
 arch/sh/mm/consistent.c                            |   2 +-
 arch/sparc/include/asm/dma-mapping.h               |  10 +-
 arch/sparc/kernel/iommu.c                          |   4 +-
 arch/sparc/kernel/ioport.c                         |   8 +-
 arch/sparc/kernel/pci_sun4v.c                      |   2 +-
 arch/tile/include/asm/device.h                     |   3 -
 arch/tile/include/asm/dma-mapping.h                |  20 +-
 arch/tile/kernel/pci-dma.c                         |  24 +-
 arch/unicore32/include/asm/dma-mapping.h           |   4 +-
 arch/unicore32/mm/dma-swiotlb.c                    |   2 +-
 arch/x86/include/asm/device.h                      |   5 +-
 arch/x86/include/asm/dma-mapping.h                 |  11 +-
 arch/x86/include/asm/iommu.h                       |   2 +-
 arch/x86/kernel/amd_gart_64.c                      |   2 +-
 arch/x86/kernel/pci-calgary_64.c                   |   6 +-
 arch/x86/kernel/pci-dma.c                          |   4 +-
 arch/x86/kernel/pci-nommu.c                        |   2 +-
 arch/x86/kernel/pci-swiotlb.c                      |   2 +-
 arch/x86/pci/common.c                              |   2 +-
 arch/x86/pci/sta2x11-fixup.c                       |  10 +-
 arch/x86/xen/pci-swiotlb-xen.c                     |   2 +-
 arch/xtensa/include/asm/device.h                   |   4 -
 arch/xtensa/include/asm/dma-mapping.h              |   9 +-
 arch/xtensa/kernel/pci-dma.c                       |   2 +-
 drivers/infiniband/core/mad.c                      |  28 +-
 drivers/infiniband/core/rw.c                       |  30 +-
 drivers/infiniband/core/umem.c                     |   4 +-
 drivers/infiniband/core/umem_odp.c                 |   6 +-
 drivers/infiniband/hw/hfi1/dma.c                   | 183 ------------
 drivers/infiniband/hw/mlx4/cq.c                    |   2 +-
 drivers/infiniband/hw/mlx4/mad.c                   |  28 +-
 drivers/infiniband/hw/mlx4/mr.c                    |   4 +-
 drivers/infiniband/hw/mlx4/qp.c                    |  10 +-
 drivers/infiniband/hw/mlx5/mr.c                    |   4 +-
 drivers/infiniband/hw/qib/qib_dma.c                | 169 -----------
 drivers/infiniband/hw/qib/qib_keys.c               |   5 +-
 drivers/infiniband/sw/rdmavt/Makefile              |   2 +-
 drivers/infiniband/sw/rdmavt/dma.c                 | 198 -------------
 drivers/infiniband/sw/rdmavt/dma.h                 |  53 ----
 drivers/infiniband/sw/rdmavt/mr.c                  |   8 +-
 drivers/infiniband/sw/rdmavt/vt.c                  |   5 +-
 drivers/infiniband/sw/rdmavt/vt.h                  |   1 -
 drivers/infiniband/sw/rxe/Makefile                 |   1 -
 drivers/infiniband/sw/rxe/rxe_dma.c                | 183 ------------
 drivers/infiniband/sw/rxe/rxe_loc.h                |   2 -
 drivers/infiniband/sw/rxe/rxe_verbs.c              |   3 +-
 drivers/infiniband/ulp/ipoib/ipoib_cm.c            |  20 +-
 drivers/infiniband/ulp/ipoib/ipoib_ib.c            |  22 +-
 drivers/infiniband/ulp/iser/iscsi_iser.c           |   6 +-
 drivers/infiniband/ulp/iser/iser_initiator.c       |  38 +--
 drivers/infiniband/ulp/iser/iser_memory.c          |  12 +-
 drivers/infiniband/ulp/iser/iser_verbs.c           |   2 +-
 drivers/infiniband/ulp/isert/ib_isert.c            |  60 ++--
 drivers/infiniband/ulp/srp/ib_srp.c                |  50 ++--
 drivers/infiniband/ulp/srpt/ib_srpt.c              |  12 +-
 drivers/iommu/amd_iommu.c                          |  10 +-
 drivers/misc/mic/bus/mic_bus.c                     |   4 +-
 drivers/misc/mic/bus/scif_bus.c                    |   4 +-
 drivers/misc/mic/bus/scif_bus.h                    |   2 +-
 drivers/misc/mic/bus/vop_bus.c                     |   2 +-
 drivers/misc/mic/host/mic_boot.c                   |   4 +-
 drivers/nvme/host/rdma.c                           |  22 +-
 drivers/nvme/target/rdma.c                         |  20 +-
 drivers/parisc/ccio-dma.c                          |   2 +-
 drivers/parisc/sba_iommu.c                         |   2 +-
 drivers/pci/host/vmd.c                             |   2 +-
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h    |  14 +-
 include/linux/device.h                             |   2 +
 include/linux/dma-mapping.h                        |  55 ++--
 include/linux/mic_bus.h                            |   2 +-
 include/rdma/ib_verbs.h                            | 310 ---------------------
 include/xen/arm/hypervisor.h                       |   2 +-
 lib/Makefile                                       |   1 +
 lib/dma-noop.c                                     |   4 +-
 lib/dma-virt.c                                     |  73 +++++
 net/9p/trans_rdma.c                                |  12 +-
 net/rds/ib.h                                       |  45 +--
 net/rds/ib_cm.c                                    |  18 +-
 net/rds/ib_fmr.c                                   |  10 +-
 net/rds/ib_frmr.c                                  |   8 +-
 net/rds/ib_mr.h                                    |   1 -
 net/rds/ib_rdma.c                                  |   6 +-
 net/rds/ib_recv.c                                  |  14 +-
 net/rds/ib_send.c                                  |  28 +-
 net/sunrpc/xprtrdma/fmr_ops.c                      |   6 +-
 net/sunrpc/xprtrdma/frwr_ops.c                     |   6 +-
 net/sunrpc/xprtrdma/rpc_rdma.c                     |  14 +-
 net/sunrpc/xprtrdma/svc_rdma_backchannel.c         |   4 +-
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c            |   8 +-
 net/sunrpc/xprtrdma/svc_rdma_sendto.c              |  14 +-
 net/sunrpc/xprtrdma/svc_rdma_transport.c           |   8 +-
 net/sunrpc/xprtrdma/verbs.c                        |   8 +-
 174 files changed, 637 insertions(+), 1764 deletions(-)
 delete mode 100644 drivers/infiniband/hw/hfi1/dma.c
 delete mode 100644 drivers/infiniband/hw/qib/qib_dma.c
 delete mode 100644 drivers/infiniband/sw/rdmavt/dma.c
 delete mode 100644 drivers/infiniband/sw/rdmavt/dma.h
 delete mode 100644 drivers/infiniband/sw/rxe/rxe_dma.c
 create mode 100644 lib/dma-virt.c

-- 
2.11.0


* [PATCH 3/9] dma: Add dma_virt_ops
  2017-01-11  0:56 [PATCH 0/9] IB: Optimize DMA mapping Bart Van Assche
@ 2017-01-11  0:56 ` Bart Van Assche
  2017-01-11  8:56   ` Christoph Hellwig
  2017-01-11  0:56 ` [PATCH 4/9] IB/hfi1: Remove DMA mapping code Bart Van Assche
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Bart Van Assche @ 2017-01-11  0:56 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma, linux-kernel, Christian Borntraeger, Joerg Roedel,
	Andy Lutomirski, Michael S. Tsirkin

Several RDMA drivers need to provide a DMA mapping API but use the
CPU to transfer data. Provide DMA mapping operations that are
suitable for these drivers.
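
As an illustration only (the in-tree users are converted later in this
series, see patch 6/9), a driver that transfers data with the CPU
would install these operations on its struct device roughly as
follows:

	#include <linux/dma-mapping.h>

	/*
	 * Sketch: after this call, dma_map_single() etc. on @dev
	 * return kernel virtual addresses instead of bus addresses.
	 */
	set_dma_ops(dev, &dma_virt_ops);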

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
---
 include/linux/dma-mapping.h |  1 +
 lib/Makefile                |  1 +
 lib/dma-noop.c              |  2 +-
 lib/dma-virt.c              | 73 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 76 insertions(+), 1 deletion(-)
 create mode 100644 lib/dma-virt.c

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index ab8710888ddf..426c43d4fdbf 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -128,6 +128,7 @@ struct dma_map_ops {
 };
 
 extern const struct dma_map_ops dma_noop_ops;
+extern const struct dma_map_ops dma_virt_ops;
 
 #define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))
 
diff --git a/lib/Makefile b/lib/Makefile
index bc4073a8cd08..2d6c3fcd432c 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -27,6 +27,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
 lib-$(CONFIG_HAS_DMA) += dma-noop.o
+lib-$(CONFIG_HAS_DMA) += dma-virt.o
 
 lib-y	+= kobject.o klist.o
 obj-y	+= lockref.o
diff --git a/lib/dma-noop.c b/lib/dma-noop.c
index 65e49dd35b7b..de26c8b68f34 100644
--- a/lib/dma-noop.c
+++ b/lib/dma-noop.c
@@ -1,7 +1,7 @@
 /*
  *	lib/dma-noop.c
  *
- * Simple DMA noop-ops that map 1:1 with memory
+ * DMA operations that map to physical addresses without flushing memory.
  */
 #include <linux/export.h>
 #include <linux/mm.h>
diff --git a/lib/dma-virt.c b/lib/dma-virt.c
new file mode 100644
index 000000000000..b3573b6ad5b1
--- /dev/null
+++ b/lib/dma-virt.c
@@ -0,0 +1,73 @@
+/*
+ *	lib/dma-virt.c
+ *
+ * DMA operations that map to virtual addresses without flushing memory.
+ */
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/dma-mapping.h>
+#include <linux/scatterlist.h>
+
+static void *dma_virt_alloc(struct device *dev, size_t size,
+			    dma_addr_t *dma_handle, gfp_t gfp,
+			    unsigned long attrs)
+{
+	void *ret;
+
+	ret = (void *)__get_free_pages(gfp, get_order(size));
+	if (ret)
+		*dma_handle = (uintptr_t)ret;
+	return ret;
+}
+
+static void dma_virt_free(struct device *dev, size_t size,
+			  void *cpu_addr, dma_addr_t dma_addr,
+			  unsigned long attrs)
+{
+	free_pages((unsigned long)cpu_addr, get_order(size));
+}
+
+static dma_addr_t dma_virt_map_page(struct device *dev, struct page *page,
+				      unsigned long offset, size_t size,
+				      enum dma_data_direction dir,
+				      unsigned long attrs)
+{
+	return (uintptr_t)(page_address(page) + offset);
+}
+
+static int dma_virt_map_sg(struct device *dev, struct scatterlist *sgl,
+			   int nents, enum dma_data_direction dir,
+			   unsigned long attrs)
+{
+	int i;
+	struct scatterlist *sg;
+
+	for_each_sg(sgl, sg, nents, i) {
+		BUG_ON(!sg_page(sg));
+		sg_dma_address(sg) = (uintptr_t)sg_virt(sg);
+		sg_dma_len(sg) = sg->length;
+	}
+
+	return nents;
+}
+
+static int dma_virt_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+	return false;
+}
+
+static int dma_virt_supported(struct device *dev, u64 mask)
+{
+	return true;
+}
+
+const struct dma_map_ops dma_virt_ops = {
+	.alloc			= dma_virt_alloc,
+	.free			= dma_virt_free,
+	.map_page		= dma_virt_map_page,
+	.map_sg			= dma_virt_map_sg,
+	.mapping_error		= dma_virt_mapping_error,
+	.dma_supported		= dma_virt_supported,
+};
+
+EXPORT_SYMBOL(dma_virt_ops);
-- 
2.11.0


* [PATCH 4/9] IB/hfi1: Remove DMA mapping code
  2017-01-11  0:56 [PATCH 0/9] IB: Optimize DMA mapping Bart Van Assche
  2017-01-11  0:56 ` [PATCH 3/9] dma: Add dma_virt_ops Bart Van Assche
@ 2017-01-11  0:56 ` Bart Van Assche
  2017-01-11  0:56 ` [PATCH 5/9] IB/qib: " Bart Van Assche
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Bart Van Assche @ 2017-01-11  0:56 UTC (permalink / raw)
  To: Doug Ledford; +Cc: linux-rdma, linux-kernel, Dennis Dalessandro, Dean Luick

The hfi1 DMA mapping code has never been built in any upstream kernel.
Hence remove it.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Dean Luick <dean.luick@intel.com>
---
 drivers/infiniband/hw/hfi1/dma.c | 183 ---------------------------------------
 1 file changed, 183 deletions(-)
 delete mode 100644 drivers/infiniband/hw/hfi1/dma.c

diff --git a/drivers/infiniband/hw/hfi1/dma.c b/drivers/infiniband/hw/hfi1/dma.c
deleted file mode 100644
index 7e8dab892848..000000000000
--- a/drivers/infiniband/hw/hfi1/dma.c
+++ /dev/null
@@ -1,183 +0,0 @@
-/*
- * Copyright(c) 2015, 2016 Intel Corporation.
- *
- * This file is provided under a dual BSD/GPLv2 license.  When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- *
- * BSD LICENSE
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *  - Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- *  - Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in
- *    the documentation and/or other materials provided with the
- *    distribution.
- *  - Neither the name of Intel Corporation nor the names of its
- *    contributors may be used to endorse or promote products derived
- *    from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- */
-#include <linux/types.h>
-#include <linux/scatterlist.h>
-
-#include "verbs.h"
-
-#define BAD_DMA_ADDRESS ((u64)0)
-
-/*
- * The following functions implement driver specific replacements
- * for the ib_dma_*() functions.
- *
- * These functions return kernel virtual addresses instead of
- * device bus addresses since the driver uses the CPU to copy
- * data instead of using hardware DMA.
- */
-
-static int hfi1_mapping_error(struct ib_device *dev, u64 dma_addr)
-{
-	return dma_addr == BAD_DMA_ADDRESS;
-}
-
-static u64 hfi1_dma_map_single(struct ib_device *dev, void *cpu_addr,
-			       size_t size, enum dma_data_direction direction)
-{
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return BAD_DMA_ADDRESS;
-
-	return (u64)cpu_addr;
-}
-
-static void hfi1_dma_unmap_single(struct ib_device *dev, u64 addr, size_t size,
-				  enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static u64 hfi1_dma_map_page(struct ib_device *dev, struct page *page,
-			     unsigned long offset, size_t size,
-			    enum dma_data_direction direction)
-{
-	u64 addr;
-
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return BAD_DMA_ADDRESS;
-
-	if (offset + size > PAGE_SIZE)
-		return BAD_DMA_ADDRESS;
-
-	addr = (u64)page_address(page);
-	if (addr)
-		addr += offset;
-
-	return addr;
-}
-
-static void hfi1_dma_unmap_page(struct ib_device *dev, u64 addr, size_t size,
-				enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static int hfi1_map_sg(struct ib_device *dev, struct scatterlist *sgl,
-		       int nents, enum dma_data_direction direction)
-{
-	struct scatterlist *sg;
-	u64 addr;
-	int i;
-	int ret = nents;
-
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return BAD_DMA_ADDRESS;
-
-	for_each_sg(sgl, sg, nents, i) {
-		addr = (u64)page_address(sg_page(sg));
-		if (!addr) {
-			ret = 0;
-			break;
-		}
-		sg->dma_address = addr + sg->offset;
-#ifdef CONFIG_NEED_SG_DMA_LENGTH
-		sg->dma_length = sg->length;
-#endif
-	}
-	return ret;
-}
-
-static void hfi1_unmap_sg(struct ib_device *dev,
-			  struct scatterlist *sg, int nents,
-			 enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static void hfi1_sync_single_for_cpu(struct ib_device *dev, u64 addr,
-				     size_t size, enum dma_data_direction dir)
-{
-}
-
-static void hfi1_sync_single_for_device(struct ib_device *dev, u64 addr,
-					size_t size,
-					enum dma_data_direction dir)
-{
-}
-
-static void *hfi1_dma_alloc_coherent(struct ib_device *dev, size_t size,
-				     u64 *dma_handle, gfp_t flag)
-{
-	struct page *p;
-	void *addr = NULL;
-
-	p = alloc_pages(flag, get_order(size));
-	if (p)
-		addr = page_address(p);
-	if (dma_handle)
-		*dma_handle = (u64)addr;
-	return addr;
-}
-
-static void hfi1_dma_free_coherent(struct ib_device *dev, size_t size,
-				   void *cpu_addr, u64 dma_handle)
-{
-	free_pages((unsigned long)cpu_addr, get_order(size));
-}
-
-struct ib_dma_mapping_ops hfi1_dma_mapping_ops = {
-	.mapping_error = hfi1_mapping_error,
-	.map_single = hfi1_dma_map_single,
-	.unmap_single = hfi1_dma_unmap_single,
-	.map_page = hfi1_dma_map_page,
-	.unmap_page = hfi1_dma_unmap_page,
-	.map_sg = hfi1_map_sg,
-	.unmap_sg = hfi1_unmap_sg,
-	.sync_single_for_cpu = hfi1_sync_single_for_cpu,
-	.sync_single_for_device = hfi1_sync_single_for_device,
-	.alloc_coherent = hfi1_dma_alloc_coherent,
-	.free_coherent = hfi1_dma_free_coherent
-};
-- 
2.11.0


* [PATCH 5/9] IB/qib: Remove DMA mapping code
  2017-01-11  0:56 [PATCH 0/9] IB: Optimize DMA mapping Bart Van Assche
  2017-01-11  0:56 ` [PATCH 3/9] dma: Add dma_virt_ops Bart Van Assche
  2017-01-11  0:56 ` [PATCH 4/9] IB/hfi1: Remove DMA mapping code Bart Van Assche
@ 2017-01-11  0:56 ` Bart Van Assche
  2017-01-12 13:15   ` Leon Romanovsky
  2017-01-11  0:56 ` [PATCH 6/9] IB: Use dma_virt_ops instead of duplicating it Bart Van Assche
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Bart Van Assche @ 2017-01-11  0:56 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma, linux-kernel, Mike Marciniszyn, Dennis Dalessandro

The qib DMA mapping code is no longer built since commit eb636ac0e49e
("IB/qib: Remove dma.c and use rdmavt version of dma functions"). Hence
remove it.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
---
 drivers/infiniband/hw/qib/qib_dma.c  | 169 -----------------------------------
 drivers/infiniband/hw/qib/qib_keys.c |   5 +-
 2 files changed, 1 insertion(+), 173 deletions(-)
 delete mode 100644 drivers/infiniband/hw/qib/qib_dma.c

diff --git a/drivers/infiniband/hw/qib/qib_dma.c b/drivers/infiniband/hw/qib/qib_dma.c
deleted file mode 100644
index 59fe092b4b0f..000000000000
--- a/drivers/infiniband/hw/qib/qib_dma.c
+++ /dev/null
@@ -1,169 +0,0 @@
-/*
- * Copyright (c) 2006, 2009, 2010 QLogic, Corporation. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *      - Redistributions of source code must retain the above
- *        copyright notice, this list of conditions and the following
- *        disclaimer.
- *
- *      - Redistributions in binary form must reproduce the above
- *        copyright notice, this list of conditions and the following
- *        disclaimer in the documentation and/or other materials
- *        provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- */
-#include <linux/types.h>
-#include <linux/scatterlist.h>
-
-#include "qib_verbs.h"
-
-#define BAD_DMA_ADDRESS ((u64) 0)
-
-/*
- * The following functions implement driver specific replacements
- * for the ib_dma_*() functions.
- *
- * These functions return kernel virtual addresses instead of
- * device bus addresses since the driver uses the CPU to copy
- * data instead of using hardware DMA.
- */
-
-static int qib_mapping_error(struct ib_device *dev, u64 dma_addr)
-{
-	return dma_addr == BAD_DMA_ADDRESS;
-}
-
-static u64 qib_dma_map_single(struct ib_device *dev, void *cpu_addr,
-			      size_t size, enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-	return (u64) cpu_addr;
-}
-
-static void qib_dma_unmap_single(struct ib_device *dev, u64 addr, size_t size,
-				 enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-}
-
-static u64 qib_dma_map_page(struct ib_device *dev, struct page *page,
-			    unsigned long offset, size_t size,
-			    enum dma_data_direction direction)
-{
-	u64 addr;
-
-	BUG_ON(!valid_dma_direction(direction));
-
-	if (offset + size > PAGE_SIZE) {
-		addr = BAD_DMA_ADDRESS;
-		goto done;
-	}
-
-	addr = (u64) page_address(page);
-	if (addr)
-		addr += offset;
-	/* TODO: handle highmem pages */
-
-done:
-	return addr;
-}
-
-static void qib_dma_unmap_page(struct ib_device *dev, u64 addr, size_t size,
-			       enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-}
-
-static int qib_map_sg(struct ib_device *dev, struct scatterlist *sgl,
-		      int nents, enum dma_data_direction direction)
-{
-	struct scatterlist *sg;
-	u64 addr;
-	int i;
-	int ret = nents;
-
-	BUG_ON(!valid_dma_direction(direction));
-
-	for_each_sg(sgl, sg, nents, i) {
-		addr = (u64) page_address(sg_page(sg));
-		/* TODO: handle highmem pages */
-		if (!addr) {
-			ret = 0;
-			break;
-		}
-		sg->dma_address = addr + sg->offset;
-#ifdef CONFIG_NEED_SG_DMA_LENGTH
-		sg->dma_length = sg->length;
-#endif
-	}
-	return ret;
-}
-
-static void qib_unmap_sg(struct ib_device *dev,
-			 struct scatterlist *sg, int nents,
-			 enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-}
-
-static void qib_sync_single_for_cpu(struct ib_device *dev, u64 addr,
-				    size_t size, enum dma_data_direction dir)
-{
-}
-
-static void qib_sync_single_for_device(struct ib_device *dev, u64 addr,
-				       size_t size,
-				       enum dma_data_direction dir)
-{
-}
-
-static void *qib_dma_alloc_coherent(struct ib_device *dev, size_t size,
-				    u64 *dma_handle, gfp_t flag)
-{
-	struct page *p;
-	void *addr = NULL;
-
-	p = alloc_pages(flag, get_order(size));
-	if (p)
-		addr = page_address(p);
-	if (dma_handle)
-		*dma_handle = (u64) addr;
-	return addr;
-}
-
-static void qib_dma_free_coherent(struct ib_device *dev, size_t size,
-				  void *cpu_addr, u64 dma_handle)
-{
-	free_pages((unsigned long) cpu_addr, get_order(size));
-}
-
-struct ib_dma_mapping_ops qib_dma_mapping_ops = {
-	.mapping_error = qib_mapping_error,
-	.map_single = qib_dma_map_single,
-	.unmap_single = qib_dma_unmap_single,
-	.map_page = qib_dma_map_page,
-	.unmap_page = qib_dma_unmap_page,
-	.map_sg = qib_map_sg,
-	.unmap_sg = qib_unmap_sg,
-	.sync_single_for_cpu = qib_sync_single_for_cpu,
-	.sync_single_for_device = qib_sync_single_for_device,
-	.alloc_coherent = qib_dma_alloc_coherent,
-	.free_coherent = qib_dma_free_coherent
-};
diff --git a/drivers/infiniband/hw/qib/qib_keys.c b/drivers/infiniband/hw/qib/qib_keys.c
index 2c3c93572c17..8fdf79f8d4e4 100644
--- a/drivers/infiniband/hw/qib/qib_keys.c
+++ b/drivers/infiniband/hw/qib/qib_keys.c
@@ -158,10 +158,7 @@ int qib_rkey_ok(struct rvt_qp *qp, struct rvt_sge *sge,
 	unsigned n, m;
 	size_t off;
 
-	/*
-	 * We use RKEY == zero for kernel virtual addresses
-	 * (see qib_get_dma_mr and qib_dma.c).
-	 */
+	/* We use RKEY == zero for kernel virtual addresses */
 	rcu_read_lock();
 	if (rkey == 0) {
 		struct rvt_pd *pd = ibpd_to_rvtpd(qp->ibqp.pd);
-- 
2.11.0


* [PATCH 6/9] IB: Use dma_virt_ops instead of duplicating it
  2017-01-11  0:56 [PATCH 0/9] IB: Optimize DMA mapping Bart Van Assche
                   ` (2 preceding siblings ...)
  2017-01-11  0:56 ` [PATCH 5/9] IB/qib: " Bart Van Assche
@ 2017-01-11  0:56 ` Bart Van Assche
  2017-01-12 13:17   ` Leon Romanovsky
  2017-01-11  0:56 ` [PATCH 7/9] RDS: IB: Remove an unused structure member Bart Van Assche
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Bart Van Assche @ 2017-01-11  0:56 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma, linux-kernel, Christoph Hellwig, Andrew Boyer,
	Dennis Dalessandro, Jonathan Toppins, Alex Estrin

Use dma_virt_ops instead of duplicating it in the rdmavt and rxe
drivers. Additionally, switch from struct ib_dma_mapping_ops to the
standard struct dma_map_ops. Update the comments that referred to the
source files removed by this patch.

This patch eliminates one branch from every ib_dma_map_*() call.
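
As an illustration (sketch only; ibdev, buf and size are
placeholders): once dma_virt_ops is installed, the mapping remains the
identity on lowmem kernel virtual addresses, just as the removed code
implemented it:

	u64 addr = ib_dma_map_single(ibdev, buf, size, DMA_TO_DEVICE);
	/* For rdmavt and rxe devices, addr == (u64)(uintptr_t)buf. */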

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Andrew Boyer <andrew.boyer@dell.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Jonathan Toppins <jtoppins@redhat.com>
Cc: Alex Estrin <alex.estrin@intel.com>
---
 drivers/infiniband/sw/rdmavt/Makefile |   2 +-
 drivers/infiniband/sw/rdmavt/dma.c    | 198 ----------------------------------
 drivers/infiniband/sw/rdmavt/dma.h    |  53 ---------
 drivers/infiniband/sw/rdmavt/mr.c     |   8 +-
 drivers/infiniband/sw/rdmavt/vt.c     |   5 +-
 drivers/infiniband/sw/rdmavt/vt.h     |   1 -
 drivers/infiniband/sw/rxe/Makefile    |   1 -
 drivers/infiniband/sw/rxe/rxe_dma.c   | 183 -------------------------------
 drivers/infiniband/sw/rxe/rxe_loc.h   |   2 -
 drivers/infiniband/sw/rxe/rxe_verbs.c |   3 +-
 include/rdma/ib_verbs.h               | 117 +++-----------------
 11 files changed, 25 insertions(+), 548 deletions(-)
 delete mode 100644 drivers/infiniband/sw/rdmavt/dma.c
 delete mode 100644 drivers/infiniband/sw/rdmavt/dma.h
 delete mode 100644 drivers/infiniband/sw/rxe/rxe_dma.c

diff --git a/drivers/infiniband/sw/rdmavt/Makefile b/drivers/infiniband/sw/rdmavt/Makefile
index ccaa7992ac97..2a821d2fb569 100644
--- a/drivers/infiniband/sw/rdmavt/Makefile
+++ b/drivers/infiniband/sw/rdmavt/Makefile
@@ -7,7 +7,7 @@
 #
 obj-$(CONFIG_INFINIBAND_RDMAVT) += rdmavt.o
 
-rdmavt-y := vt.o ah.o cq.o dma.o mad.o mcast.o mmap.o mr.o pd.o qp.o srq.o \
+rdmavt-y := vt.o ah.o cq.o mad.o mcast.o mmap.o mr.o pd.o qp.o srq.o \
 	trace.o
 
 CFLAGS_trace.o = -I$(src)
diff --git a/drivers/infiniband/sw/rdmavt/dma.c b/drivers/infiniband/sw/rdmavt/dma.c
deleted file mode 100644
index f2cefb0d9180..000000000000
--- a/drivers/infiniband/sw/rdmavt/dma.c
+++ /dev/null
@@ -1,198 +0,0 @@
-/*
- * Copyright(c) 2016 Intel Corporation.
- *
- * This file is provided under a dual BSD/GPLv2 license.  When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- *
- * BSD LICENSE
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *  - Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- *  - Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in
- *    the documentation and/or other materials provided with the
- *    distribution.
- *  - Neither the name of Intel Corporation nor the names of its
- *    contributors may be used to endorse or promote products derived
- *    from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- */
-#include <linux/types.h>
-#include <linux/scatterlist.h>
-#include <rdma/ib_verbs.h>
-
-#include "dma.h"
-
-#define BAD_DMA_ADDRESS ((u64)0)
-
-/*
- * The following functions implement driver specific replacements
- * for the ib_dma_*() functions.
- *
- * These functions return kernel virtual addresses instead of
- * device bus addresses since the driver uses the CPU to copy
- * data instead of using hardware DMA.
- */
-
-static int rvt_mapping_error(struct ib_device *dev, u64 dma_addr)
-{
-	return dma_addr == BAD_DMA_ADDRESS;
-}
-
-static u64 rvt_dma_map_single(struct ib_device *dev, void *cpu_addr,
-			      size_t size, enum dma_data_direction direction)
-{
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return BAD_DMA_ADDRESS;
-
-	return (u64)cpu_addr;
-}
-
-static void rvt_dma_unmap_single(struct ib_device *dev, u64 addr, size_t size,
-				 enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static u64 rvt_dma_map_page(struct ib_device *dev, struct page *page,
-			    unsigned long offset, size_t size,
-			    enum dma_data_direction direction)
-{
-	u64 addr;
-
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return BAD_DMA_ADDRESS;
-
-	addr = (u64)page_address(page);
-	if (addr)
-		addr += offset;
-
-	return addr;
-}
-
-static void rvt_dma_unmap_page(struct ib_device *dev, u64 addr, size_t size,
-			       enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static int rvt_map_sg(struct ib_device *dev, struct scatterlist *sgl,
-		      int nents, enum dma_data_direction direction)
-{
-	struct scatterlist *sg;
-	u64 addr;
-	int i;
-	int ret = nents;
-
-	if (WARN_ON(!valid_dma_direction(direction)))
-		return 0;
-
-	for_each_sg(sgl, sg, nents, i) {
-		addr = (u64)page_address(sg_page(sg));
-		if (!addr) {
-			ret = 0;
-			break;
-		}
-		sg->dma_address = addr + sg->offset;
-#ifdef CONFIG_NEED_SG_DMA_LENGTH
-		sg->dma_length = sg->length;
-#endif
-	}
-	return ret;
-}
-
-static void rvt_unmap_sg(struct ib_device *dev,
-			 struct scatterlist *sg, int nents,
-			 enum dma_data_direction direction)
-{
-	/* This is a stub, nothing to be done here */
-}
-
-static int rvt_map_sg_attrs(struct ib_device *dev, struct scatterlist *sgl,
-			    int nents, enum dma_data_direction direction,
-			    unsigned long attrs)
-{
-	return rvt_map_sg(dev, sgl, nents, direction);
-}
-
-static void rvt_unmap_sg_attrs(struct ib_device *dev,
-			       struct scatterlist *sg, int nents,
-			       enum dma_data_direction direction,
-			       unsigned long attrs)
-{
-	return rvt_unmap_sg(dev, sg, nents, direction);
-}
-
-static void rvt_sync_single_for_cpu(struct ib_device *dev, u64 addr,
-				    size_t size, enum dma_data_direction dir)
-{
-}
-
-static void rvt_sync_single_for_device(struct ib_device *dev, u64 addr,
-				       size_t size,
-				       enum dma_data_direction dir)
-{
-}
-
-static void *rvt_dma_alloc_coherent(struct ib_device *dev, size_t size,
-				    u64 *dma_handle, gfp_t flag)
-{
-	struct page *p;
-	void *addr = NULL;
-
-	p = alloc_pages(flag, get_order(size));
-	if (p)
-		addr = page_address(p);
-	if (dma_handle)
-		*dma_handle = (u64)addr;
-	return addr;
-}
-
-static void rvt_dma_free_coherent(struct ib_device *dev, size_t size,
-				  void *cpu_addr, u64 dma_handle)
-{
-	free_pages((unsigned long)cpu_addr, get_order(size));
-}
-
-struct ib_dma_mapping_ops rvt_default_dma_mapping_ops = {
-	.mapping_error = rvt_mapping_error,
-	.map_single = rvt_dma_map_single,
-	.unmap_single = rvt_dma_unmap_single,
-	.map_page = rvt_dma_map_page,
-	.unmap_page = rvt_dma_unmap_page,
-	.map_sg = rvt_map_sg,
-	.unmap_sg = rvt_unmap_sg,
-	.map_sg_attrs = rvt_map_sg_attrs,
-	.unmap_sg_attrs = rvt_unmap_sg_attrs,
-	.sync_single_for_cpu = rvt_sync_single_for_cpu,
-	.sync_single_for_device = rvt_sync_single_for_device,
-	.alloc_coherent = rvt_dma_alloc_coherent,
-	.free_coherent = rvt_dma_free_coherent
-};
diff --git a/drivers/infiniband/sw/rdmavt/dma.h b/drivers/infiniband/sw/rdmavt/dma.h
deleted file mode 100644
index 979f07e09195..000000000000
--- a/drivers/infiniband/sw/rdmavt/dma.h
+++ /dev/null
@@ -1,53 +0,0 @@
-#ifndef DEF_RDMAVTDMA_H
-#define DEF_RDMAVTDMA_H
-
-/*
- * Copyright(c) 2016 Intel Corporation.
- *
- * This file is provided under a dual BSD/GPLv2 license.  When using or
- * redistributing this file, you may do so under either license.
- *
- * GPL LICENSE SUMMARY
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * General Public License for more details.
- *
- * BSD LICENSE
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *  - Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- *  - Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in
- *    the documentation and/or other materials provided with the
- *    distribution.
- *  - Neither the name of Intel Corporation nor the names of its
- *    contributors may be used to endorse or promote products derived
- *    from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- */
-
-extern struct ib_dma_mapping_ops rvt_default_dma_mapping_ops;
-
-#endif          /* DEF_RDMAVTDMA_H */
diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
index 52fd15276ee6..14d0ac6efd08 100644
--- a/drivers/infiniband/sw/rdmavt/mr.c
+++ b/drivers/infiniband/sw/rdmavt/mr.c
@@ -305,8 +305,8 @@ static void __rvt_free_mr(struct rvt_mr *mr)
  * @acc: access flags
  *
  * Return: the memory region on success, otherwise returns an errno.
- * Note that all DMA addresses should be created via the
- * struct ib_dma_mapping_ops functions (see dma.c).
+ * Note that all DMA addresses should be created via the dma_virt_ops
+ * functions.
  */
 struct ib_mr *rvt_get_dma_mr(struct ib_pd *pd, int acc)
 {
@@ -782,7 +782,7 @@ int rvt_lkey_ok(struct rvt_lkey_table *rkt, struct rvt_pd *pd,
 
 	/*
 	 * We use LKEY == zero for kernel virtual addresses
-	 * (see rvt_get_dma_mr and dma.c).
+	 * (see rvt_get_dma_mr() and dma_virt_ops).
 	 */
 	rcu_read_lock();
 	if (sge->lkey == 0) {
@@ -880,7 +880,7 @@ int rvt_rkey_ok(struct rvt_qp *qp, struct rvt_sge *sge,
 
 	/*
 	 * We use RKEY == zero for kernel virtual addresses
-	 * (see rvt_get_dma_mr and dma.c).
+	 * (see rvt_get_dma_mr() and dma_virt_ops).
 	 */
 	rcu_read_lock();
 	if (rkey == 0) {
diff --git a/drivers/infiniband/sw/rdmavt/vt.c b/drivers/infiniband/sw/rdmavt/vt.c
index d430c2f7cec4..6a81b179f631 100644
--- a/drivers/infiniband/sw/rdmavt/vt.c
+++ b/drivers/infiniband/sw/rdmavt/vt.c
@@ -47,6 +47,7 @@
 
 #include <linux/module.h>
 #include <linux/kernel.h>
+#include <linux/dma-mapping.h>
 #include "vt.h"
 #include "trace.h"
 
@@ -777,8 +778,8 @@ int rvt_register_device(struct rvt_dev_info *rdi)
 	}
 
 	/* DMA Operations */
-	rdi->ibdev.dma_ops =
-		rdi->ibdev.dma_ops ? : &rvt_default_dma_mapping_ops;
+	if (rdi->ibdev.dma_device->dma_ops == NULL)
+		set_dma_ops(rdi->ibdev.dma_device, &dma_virt_ops);
 
 	/* Protection Domain */
 	spin_lock_init(&rdi->n_pds_lock);
diff --git a/drivers/infiniband/sw/rdmavt/vt.h b/drivers/infiniband/sw/rdmavt/vt.h
index 6b01eaa4461b..f363505312be 100644
--- a/drivers/infiniband/sw/rdmavt/vt.h
+++ b/drivers/infiniband/sw/rdmavt/vt.h
@@ -50,7 +50,6 @@
 
 #include <rdma/rdma_vt.h>
 #include <linux/pci.h>
-#include "dma.h"
 #include "pd.h"
 #include "qp.h"
 #include "ah.h"
diff --git a/drivers/infiniband/sw/rxe/Makefile b/drivers/infiniband/sw/rxe/Makefile
index 3b3fb9d1c470..ec35ff022a42 100644
--- a/drivers/infiniband/sw/rxe/Makefile
+++ b/drivers/infiniband/sw/rxe/Makefile
@@ -14,7 +14,6 @@ rdma_rxe-y := \
 	rxe_qp.o \
 	rxe_cq.o \
 	rxe_mr.o \
-	rxe_dma.o \
 	rxe_opcode.o \
 	rxe_mmap.o \
 	rxe_icrc.o \
diff --git a/drivers/infiniband/sw/rxe/rxe_dma.c b/drivers/infiniband/sw/rxe/rxe_dma.c
deleted file mode 100644
index a0f8af5851ae..000000000000
--- a/drivers/infiniband/sw/rxe/rxe_dma.c
+++ /dev/null
@@ -1,183 +0,0 @@
-/*
- * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
- * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- */
-
-#include "rxe.h"
-#include "rxe_loc.h"
-
-#define DMA_BAD_ADDER ((u64)0)
-
-static int rxe_mapping_error(struct ib_device *dev, u64 dma_addr)
-{
-	return dma_addr == DMA_BAD_ADDER;
-}
-
-static u64 rxe_dma_map_single(struct ib_device *dev,
-			      void *cpu_addr, size_t size,
-			      enum dma_data_direction direction)
-{
-	WARN_ON(!valid_dma_direction(direction));
-	return (uintptr_t)cpu_addr;
-}
-
-static void rxe_dma_unmap_single(struct ib_device *dev,
-				 u64 addr, size_t size,
-				 enum dma_data_direction direction)
-{
-	WARN_ON(!valid_dma_direction(direction));
-}
-
-static u64 rxe_dma_map_page(struct ib_device *dev,
-			    struct page *page,
-			    unsigned long offset,
-			    size_t size, enum dma_data_direction direction)
-{
-	u64 addr;
-
-	WARN_ON(!valid_dma_direction(direction));
-
-	if (offset + size > PAGE_SIZE) {
-		addr = DMA_BAD_ADDER;
-		goto done;
-	}
-
-	addr = (uintptr_t)page_address(page);
-	if (addr)
-		addr += offset;
-
-done:
-	return addr;
-}
-
-static void rxe_dma_unmap_page(struct ib_device *dev,
-			       u64 addr, size_t size,
-			       enum dma_data_direction direction)
-{
-	WARN_ON(!valid_dma_direction(direction));
-}
-
-static int rxe_map_sg(struct ib_device *dev, struct scatterlist *sgl,
-		      int nents, enum dma_data_direction direction)
-{
-	struct scatterlist *sg;
-	u64 addr;
-	int i;
-	int ret = nents;
-
-	WARN_ON(!valid_dma_direction(direction));
-
-	for_each_sg(sgl, sg, nents, i) {
-		addr = (uintptr_t)page_address(sg_page(sg));
-		if (!addr) {
-			ret = 0;
-			break;
-		}
-		sg->dma_address = addr + sg->offset;
-#ifdef CONFIG_NEED_SG_DMA_LENGTH
-		sg->dma_length = sg->length;
-#endif
-	}
-
-	return ret;
-}
-
-static void rxe_unmap_sg(struct ib_device *dev,
-			 struct scatterlist *sg, int nents,
-			 enum dma_data_direction direction)
-{
-	WARN_ON(!valid_dma_direction(direction));
-}
-
-static int rxe_map_sg_attrs(struct ib_device *dev, struct scatterlist *sgl,
-			    int nents, enum dma_data_direction direction,
-			    unsigned long attrs)
-{
-	return rxe_map_sg(dev, sgl, nents, direction);
-}
-
-static void rxe_unmap_sg_attrs(struct ib_device *dev,
-			       struct scatterlist *sg, int nents,
-			       enum dma_data_direction direction,
-			       unsigned long attrs)
-{
-	rxe_unmap_sg(dev, sg, nents, direction);
-}
-
-static void rxe_sync_single_for_cpu(struct ib_device *dev,
-				    u64 addr,
-				    size_t size, enum dma_data_direction dir)
-{
-}
-
-static void rxe_sync_single_for_device(struct ib_device *dev,
-				       u64 addr,
-				       size_t size, enum dma_data_direction dir)
-{
-}
-
-static void *rxe_dma_alloc_coherent(struct ib_device *dev, size_t size,
-				    u64 *dma_handle, gfp_t flag)
-{
-	struct page *p;
-	void *addr = NULL;
-
-	p = alloc_pages(flag, get_order(size));
-	if (p)
-		addr = page_address(p);
-
-	if (dma_handle)
-		*dma_handle = (uintptr_t)addr;
-
-	return addr;
-}
-
-static void rxe_dma_free_coherent(struct ib_device *dev, size_t size,
-				  void *cpu_addr, u64 dma_handle)
-{
-	free_pages((unsigned long)cpu_addr, get_order(size));
-}
-
-struct ib_dma_mapping_ops rxe_dma_mapping_ops = {
-	.mapping_error		= rxe_mapping_error,
-	.map_single		= rxe_dma_map_single,
-	.unmap_single		= rxe_dma_unmap_single,
-	.map_page		= rxe_dma_map_page,
-	.unmap_page		= rxe_dma_unmap_page,
-	.map_sg			= rxe_map_sg,
-	.unmap_sg		= rxe_unmap_sg,
-	.map_sg_attrs		= rxe_map_sg_attrs,
-	.unmap_sg_attrs		= rxe_unmap_sg_attrs,
-	.sync_single_for_cpu	= rxe_sync_single_for_cpu,
-	.sync_single_for_device	= rxe_sync_single_for_device,
-	.alloc_coherent		= rxe_dma_alloc_coherent,
-	.free_coherent		= rxe_dma_free_coherent
-};
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index efe4c6a35442..267f89d91da7 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -221,8 +221,6 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
 		      struct ib_udata *udata);
 
-extern struct ib_dma_mapping_ops rxe_dma_mapping_ops;
-
 void rxe_release(struct kref *kref);
 
 int rxe_completer(void *arg);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index beb7021ff18a..b75c19bfcb6b 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -31,6 +31,7 @@
  * SOFTWARE.
  */
 
+#include <linux/dma-mapping.h>
 #include "rxe.h"
 #include "rxe_loc.h"
 #include "rxe_queue.h"
@@ -1237,7 +1238,7 @@ int rxe_register_device(struct rxe_dev *rxe)
 	dev->dma_device = rxe->ifc_ops->dma_device(rxe);
 	dev->local_dma_lkey = 0;
 	dev->node_guid = rxe->ifc_ops->node_guid(rxe);
-	dev->dma_ops = &rxe_dma_mapping_ops;
+	set_dma_ops(dev->dma_device, &dma_virt_ops);
 
 	dev->uverbs_abi_ver = RXE_UVERBS_ABI_VERSION;
 	dev->uverbs_cmd_mask = BIT_ULL(IB_USER_VERBS_CMD_GET_CONTEXT)
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 958a24d8fae7..89e80eb77e06 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1783,53 +1783,6 @@ struct ib_cache {
 	u8                     *lmc_cache;
 };
 
-struct ib_dma_mapping_ops {
-	int		(*mapping_error)(struct ib_device *dev,
-					 u64 dma_addr);
-	u64		(*map_single)(struct ib_device *dev,
-				      void *ptr, size_t size,
-				      enum dma_data_direction direction);
-	void		(*unmap_single)(struct ib_device *dev,
-					u64 addr, size_t size,
-					enum dma_data_direction direction);
-	u64		(*map_page)(struct ib_device *dev,
-				    struct page *page, unsigned long offset,
-				    size_t size,
-				    enum dma_data_direction direction);
-	void		(*unmap_page)(struct ib_device *dev,
-				      u64 addr, size_t size,
-				      enum dma_data_direction direction);
-	int		(*map_sg)(struct ib_device *dev,
-				  struct scatterlist *sg, int nents,
-				  enum dma_data_direction direction);
-	void		(*unmap_sg)(struct ib_device *dev,
-				    struct scatterlist *sg, int nents,
-				    enum dma_data_direction direction);
-	int		(*map_sg_attrs)(struct ib_device *dev,
-					struct scatterlist *sg, int nents,
-					enum dma_data_direction direction,
-					unsigned long attrs);
-	void		(*unmap_sg_attrs)(struct ib_device *dev,
-					  struct scatterlist *sg, int nents,
-					  enum dma_data_direction direction,
-					  unsigned long attrs);
-	void		(*sync_single_for_cpu)(struct ib_device *dev,
-					       u64 dma_handle,
-					       size_t size,
-					       enum dma_data_direction dir);
-	void		(*sync_single_for_device)(struct ib_device *dev,
-						  u64 dma_handle,
-						  size_t size,
-						  enum dma_data_direction dir);
-	void		*(*alloc_coherent)(struct ib_device *dev,
-					   size_t size,
-					   u64 *dma_handle,
-					   gfp_t flag);
-	void		(*free_coherent)(struct ib_device *dev,
-					 size_t size, void *cpu_addr,
-					 u64 dma_handle);
-};
-
 struct iw_cm_verbs;
 
 struct ib_port_immutable {
@@ -2091,7 +2044,6 @@ struct ib_device {
 							   struct ib_rwq_ind_table_init_attr *init_attr,
 							   struct ib_udata *udata);
 	int                        (*destroy_rwq_ind_table)(struct ib_rwq_ind_table *wq_ind_table);
-	struct ib_dma_mapping_ops   *dma_ops;
 
 	struct module               *owner;
 	struct device                dev;
@@ -2966,8 +2918,6 @@ static inline int ib_req_ncomp_notif(struct ib_cq *cq, int wc_cnt)
  */
 static inline int ib_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->mapping_error(dev, dma_addr);
 	return dma_mapping_error(dev->dma_device, dma_addr);
 }
 
@@ -2982,8 +2932,6 @@ static inline u64 ib_dma_map_single(struct ib_device *dev,
 				    void *cpu_addr, size_t size,
 				    enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->map_single(dev, cpu_addr, size, direction);
 	return dma_map_single(dev->dma_device, cpu_addr, size, direction);
 }
 
@@ -2998,10 +2946,7 @@ static inline void ib_dma_unmap_single(struct ib_device *dev,
 				       u64 addr, size_t size,
 				       enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->unmap_single(dev, addr, size, direction);
-	else
-		dma_unmap_single(dev->dma_device, addr, size, direction);
+	dma_unmap_single(dev->dma_device, addr, size, direction);
 }
 
 static inline u64 ib_dma_map_single_attrs(struct ib_device *dev,
@@ -3036,8 +2981,6 @@ static inline u64 ib_dma_map_page(struct ib_device *dev,
 				  size_t size,
 					 enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->map_page(dev, page, offset, size, direction);
 	return dma_map_page(dev->dma_device, page, offset, size, direction);
 }
 
@@ -3052,10 +2995,7 @@ static inline void ib_dma_unmap_page(struct ib_device *dev,
 				     u64 addr, size_t size,
 				     enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->unmap_page(dev, addr, size, direction);
-	else
-		dma_unmap_page(dev->dma_device, addr, size, direction);
+	dma_unmap_page(dev->dma_device, addr, size, direction);
 }
 
 /**
@@ -3069,8 +3009,6 @@ static inline int ib_dma_map_sg(struct ib_device *dev,
 				struct scatterlist *sg, int nents,
 				enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->map_sg(dev, sg, nents, direction);
 	return dma_map_sg(dev->dma_device, sg, nents, direction);
 }
 
@@ -3085,10 +3023,7 @@ static inline void ib_dma_unmap_sg(struct ib_device *dev,
 				   struct scatterlist *sg, int nents,
 				   enum dma_data_direction direction)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->unmap_sg(dev, sg, nents, direction);
-	else
-		dma_unmap_sg(dev->dma_device, sg, nents, direction);
+	dma_unmap_sg(dev->dma_device, sg, nents, direction);
 }
 
 static inline int ib_dma_map_sg_attrs(struct ib_device *dev,
@@ -3096,12 +3031,8 @@ static inline int ib_dma_map_sg_attrs(struct ib_device *dev,
 				      enum dma_data_direction direction,
 				      unsigned long dma_attrs)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->map_sg_attrs(dev, sg, nents, direction,
-						  dma_attrs);
-	else
-		return dma_map_sg_attrs(dev->dma_device, sg, nents, direction,
-					dma_attrs);
+	return dma_map_sg_attrs(dev->dma_device, sg, nents, direction,
+				dma_attrs);
 }
 
 static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
@@ -3109,12 +3040,7 @@ static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
 					 enum dma_data_direction direction,
 					 unsigned long dma_attrs)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->unmap_sg_attrs(dev, sg, nents, direction,
-						  dma_attrs);
-	else
-		dma_unmap_sg_attrs(dev->dma_device, sg, nents, direction,
-				   dma_attrs);
+	dma_unmap_sg_attrs(dev->dma_device, sg, nents, direction, dma_attrs);
 }
 /**
  * ib_sg_dma_address - Return the DMA address from a scatter/gather entry
@@ -3156,10 +3082,7 @@ static inline void ib_dma_sync_single_for_cpu(struct ib_device *dev,
 					      size_t size,
 					      enum dma_data_direction dir)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->sync_single_for_cpu(dev, addr, size, dir);
-	else
-		dma_sync_single_for_cpu(dev->dma_device, addr, size, dir);
+	dma_sync_single_for_cpu(dev->dma_device, addr, size, dir);
 }
 
 /**
@@ -3174,10 +3097,7 @@ static inline void ib_dma_sync_single_for_device(struct ib_device *dev,
 						 size_t size,
 						 enum dma_data_direction dir)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->sync_single_for_device(dev, addr, size, dir);
-	else
-		dma_sync_single_for_device(dev->dma_device, addr, size, dir);
+	dma_sync_single_for_device(dev->dma_device, addr, size, dir);
 }
 
 /**
@@ -3192,16 +3112,12 @@ static inline void *ib_dma_alloc_coherent(struct ib_device *dev,
 					   u64 *dma_handle,
 					   gfp_t flag)
 {
-	if (dev->dma_ops)
-		return dev->dma_ops->alloc_coherent(dev, size, dma_handle, flag);
-	else {
-		dma_addr_t handle;
-		void *ret;
-
-		ret = dma_alloc_coherent(dev->dma_device, size, &handle, flag);
-		*dma_handle = handle;
-		return ret;
-	}
+	dma_addr_t handle;
+	void *ret;
+
+	ret = dma_alloc_coherent(dev->dma_device, size, &handle, flag);
+	*dma_handle = handle;
+	return ret;
 }
 
 /**
@@ -3215,10 +3131,7 @@ static inline void ib_dma_free_coherent(struct ib_device *dev,
 					size_t size, void *cpu_addr,
 					u64 dma_handle)
 {
-	if (dev->dma_ops)
-		dev->dma_ops->free_coherent(dev, size, cpu_addr, dma_handle);
-	else
-		dma_free_coherent(dev->dma_device, size, cpu_addr, dma_handle);
+	dma_free_coherent(dev->dma_device, size, cpu_addr, dma_handle);
 }
 
 /**
-- 
2.11.0

* [PATCH 7/9] RDS: IB: Remove an unused structure member
  2017-01-11  0:56 [PATCH 0/9] IB: Optimize DMA mapping Bart Van Assche
                   ` (3 preceding siblings ...)
  2017-01-11  0:56 ` [PATCH 6/9] IB: Use dma_virt_ops instead of duplicating it Bart Van Assche
@ 2017-01-11  0:56 ` Bart Van Assche
  2017-01-11  1:21   ` santosh.shilimkar
  2017-01-11  0:56 ` [PATCH 8/9] IB: Convert ib_dma_*_coherent() argument type from u64 into dma_addr_t Bart Van Assche
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Bart Van Assche @ 2017-01-11  0:56 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma, linux-kernel, Santosh Shilimkar, Santosh Shilimkar,
	David S . Miller, netdev, rds-devel

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Santosh Shilimkar <ssantosh@kernel.org>
Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: linux-rdma@vger.kernel.org
Cc: netdev@vger.kernel.org
Cc: rds-devel@oss.oracle.com
---
 net/rds/ib_mr.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/net/rds/ib_mr.h b/net/rds/ib_mr.h
index 1c754f4acbe5..24c086db4511 100644
--- a/net/rds/ib_mr.h
+++ b/net/rds/ib_mr.h
@@ -45,7 +45,6 @@
 
 struct rds_ib_fmr {
 	struct ib_fmr		*fmr;
-	u64			*dma;
 };
 
 enum rds_ib_fr_state {
-- 
2.11.0

* [PATCH 8/9] IB: Convert ib_dma_*_coherent() argument type from u64 into dma_addr_t
  2017-01-11  0:56 [PATCH 0/9] IB: Optimize DMA mapping Bart Van Assche
                   ` (4 preceding siblings ...)
  2017-01-11  0:56 ` [PATCH 7/9] RDS: IB: Remove an unused structure member Bart Van Assche
@ 2017-01-11  0:56 ` Bart Van Assche
  2017-01-12 13:12   ` Leon Romanovsky
  2017-01-11  0:56 ` [PATCH 9/9] treewide: Inline ib_dma_map_*() functions Bart Van Assche
  2017-01-11  1:28 ` [PATCH 0/9] IB: Optimize DMA mapping santosh.shilimkar
  7 siblings, 1 reply; 17+ messages in thread
From: Bart Van Assche @ 2017-01-11  0:56 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma, linux-kernel, David S . Miller, netdev, rds-devel

This patch does not change any functionality.
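
Note that dma_addr_t can be narrower than u64 on 32-bit configurations,
which is why the old u64-based ib_dma_alloc_coherent() wrapper had to
bounce the handle through a local variable. A sketch, mirroring the code
removed below:

	dma_addr_t handle;
	void *ret;

	ret = dma_alloc_coherent(dev->dma_device, size, &handle, flag);
	*dma_handle = handle;	/* widen dma_addr_t into the caller's u64 */

With a dma_addr_t *dma_handle argument the wrapper can instead pass the
caller's pointer straight through to dma_alloc_coherent().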

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: linux-rdma@vger.kernel.org
Cc: netdev@vger.kernel.org
Cc: rds-devel@oss.oracle.com
---
 include/rdma/ib_verbs.h | 11 +++--------
 net/rds/ib.h            |  6 +++---
 2 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 89e80eb77e06..de8dfb61d2b6 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -3109,15 +3109,10 @@ static inline void ib_dma_sync_single_for_device(struct ib_device *dev,
  */
 static inline void *ib_dma_alloc_coherent(struct ib_device *dev,
 					   size_t size,
-					   u64 *dma_handle,
+					   dma_addr_t *dma_handle,
 					   gfp_t flag)
 {
-	dma_addr_t handle;
-	void *ret;
-
-	ret = dma_alloc_coherent(dev->dma_device, size, &handle, flag);
-	*dma_handle = handle;
-	return ret;
+	return dma_alloc_coherent(dev->dma_device, size, dma_handle, flag);
 }
 
 /**
@@ -3129,7 +3124,7 @@ static inline void *ib_dma_alloc_coherent(struct ib_device *dev,
  */
 static inline void ib_dma_free_coherent(struct ib_device *dev,
 					size_t size, void *cpu_addr,
-					u64 dma_handle)
+					dma_addr_t dma_handle)
 {
 	dma_free_coherent(dev->dma_device, size, cpu_addr, dma_handle);
 }
diff --git a/net/rds/ib.h b/net/rds/ib.h
index 45ac8e8e58f4..d21ca88ab628 100644
--- a/net/rds/ib.h
+++ b/net/rds/ib.h
@@ -134,7 +134,7 @@ struct rds_ib_connection {
 	struct rds_ib_work_ring	i_send_ring;
 	struct rm_data_op	*i_data_op;
 	struct rds_header	*i_send_hdrs;
-	u64			i_send_hdrs_dma;
+	dma_addr_t		i_send_hdrs_dma;
 	struct rds_ib_send_work *i_sends;
 	atomic_t		i_signaled_sends;
 
@@ -144,7 +144,7 @@ struct rds_ib_connection {
 	struct rds_ib_incoming	*i_ibinc;
 	u32			i_recv_data_rem;
 	struct rds_header	*i_recv_hdrs;
-	u64			i_recv_hdrs_dma;
+	dma_addr_t		i_recv_hdrs_dma;
 	struct rds_ib_recv_work *i_recvs;
 	u64			i_ack_recv;	/* last ACK received */
 	struct rds_ib_refill_cache i_cache_incs;
@@ -161,7 +161,7 @@ struct rds_ib_connection {
 	struct rds_header	*i_ack;
 	struct ib_send_wr	i_ack_wr;
 	struct ib_sge		i_ack_sge;
-	u64			i_ack_dma;
+	dma_addr_t		i_ack_dma;
 	unsigned long		i_ack_queued;
 
 	/* Flow control related information
-- 
2.11.0

* [PATCH 9/9] treewide: Inline ib_dma_map_*() functions
  2017-01-11  0:56 [PATCH 0/9] IB: Optimize DMA mapping Bart Van Assche
                   ` (5 preceding siblings ...)
  2017-01-11  0:56 ` [PATCH 8/9] IB: Convert ib_dma_*_coherent() argument type from u64 into dma_addr_t Bart Van Assche
@ 2017-01-11  0:56 ` Bart Van Assche
  2017-01-12 11:45   ` Sagi Grimberg
  2017-01-12 13:09   ` Leon Romanovsky
  2017-01-11  1:28 ` [PATCH 0/9] IB: Optimize DMA mapping santosh.shilimkar
  7 siblings, 2 replies; 17+ messages in thread
From: Bart Van Assche @ 2017-01-11  0:56 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma, linux-kernel, Andreas Dilger, Anna Schumaker,
	David S . Miller, Eric Van Hensbergen, James Simmons,
	Latchesar Ionkov, Oleg Drokin, Ron Minnich, Trond Myklebust,
	devel, linux-nfs, linux-nvme, lustre-devel, netdev, rds-devel,
	target-devel, v9fs-developer

Almost all changes in this patch have been generated with the script
below. The only manual changes are the removal of local variables that
became superfluous and the removal of the ib_dma_map_*() functions
themselves:

git grep -lE 'ib_(sg_|)dma_' |
  xargs -d\\n \
    sed -i -e 's/\([^[:alnum:]_]\)ib_dma_\([^(]*\)(\&\([^,]\+\),/\1dma_\2(\3.dma_device,/g' \
           -e 's/\([^[:alnum:]_]\)ib_dma_\([^(]*\)(\([^,]\+\),/\1dma_\2(\3->dma_device,/g' \
	   -e 's/ib_sg_dma_\(len\|address\)(\([^,]\+\), /sg_dma_\1(/g'
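
For example (an illustrative call site; see the mad.c hunks below for
real instances), the script turns

	addr = ib_dma_map_single(mad_agent->device, cpu_addr, size, dir);

into

	addr = dma_map_single(mad_agent->device->dma_device, cpu_addr, size, dir);

and rewrites ib_sg_dma_len(dev, sg) into sg_dma_len(sg).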

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Andreas Dilger <andreas.dilger@intel.com>
Cc: Anna Schumaker <anna.schumaker@netapp.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Eric Van Hensbergen <ericvh@gmail.com>
Cc: James Simmons <jsimmons@infradead.org>
Cc: Latchesar Ionkov <lucho@ionkov.net>
Cc: Oleg Drokin <oleg.drokin@intel.com>
Cc: Ron Minnich <rminnich@sandia.gov>
Cc: Trond Myklebust <trond.myklebust@primarydata.com>
Cc: devel@driverdev.osuosl.org
Cc: linux-nfs@vger.kernel.org
Cc: linux-nvme@lists.infradead.org
Cc: linux-rdma@vger.kernel.org
Cc: lustre-devel@lists.lustre.org
Cc: netdev@vger.kernel.org
Cc: rds-devel@oss.oracle.com
Cc: target-devel@vger.kernel.org
Cc: v9fs-developer@lists.sourceforge.net
---
 drivers/infiniband/core/mad.c                      |  28 +--
 drivers/infiniband/core/rw.c                       |  30 ++-
 drivers/infiniband/core/umem.c                     |   4 +-
 drivers/infiniband/core/umem_odp.c                 |   6 +-
 drivers/infiniband/hw/mlx4/cq.c                    |   2 +-
 drivers/infiniband/hw/mlx4/mad.c                   |  28 +--
 drivers/infiniband/hw/mlx4/mr.c                    |   4 +-
 drivers/infiniband/hw/mlx4/qp.c                    |  10 +-
 drivers/infiniband/hw/mlx5/mr.c                    |   4 +-
 drivers/infiniband/ulp/ipoib/ipoib_cm.c            |  20 +-
 drivers/infiniband/ulp/ipoib/ipoib_ib.c            |  22 +--
 drivers/infiniband/ulp/iser/iscsi_iser.c           |   6 +-
 drivers/infiniband/ulp/iser/iser_initiator.c       |  38 ++--
 drivers/infiniband/ulp/iser/iser_memory.c          |  12 +-
 drivers/infiniband/ulp/iser/iser_verbs.c           |   2 +-
 drivers/infiniband/ulp/isert/ib_isert.c            |  60 +++---
 drivers/infiniband/ulp/srp/ib_srp.c                |  50 +++--
 drivers/infiniband/ulp/srpt/ib_srpt.c              |  10 +-
 drivers/nvme/host/rdma.c                           |  22 +--
 drivers/nvme/target/rdma.c                         |  20 +-
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h    |  14 +-
 include/rdma/ib_verbs.h                            | 218 ---------------------
 net/9p/trans_rdma.c                                |  12 +-
 net/rds/ib.h                                       |  39 ----
 net/rds/ib_cm.c                                    |  18 +-
 net/rds/ib_fmr.c                                   |  10 +-
 net/rds/ib_frmr.c                                  |   8 +-
 net/rds/ib_rdma.c                                  |   6 +-
 net/rds/ib_recv.c                                  |  14 +-
 net/rds/ib_send.c                                  |  28 +--
 net/sunrpc/xprtrdma/fmr_ops.c                      |   6 +-
 net/sunrpc/xprtrdma/frwr_ops.c                     |   6 +-
 net/sunrpc/xprtrdma/rpc_rdma.c                     |  14 +-
 net/sunrpc/xprtrdma/svc_rdma_backchannel.c         |   4 +-
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c            |   8 +-
 net/sunrpc/xprtrdma/svc_rdma_sendto.c              |  14 +-
 net/sunrpc/xprtrdma/svc_rdma_transport.c           |   8 +-
 net/sunrpc/xprtrdma/verbs.c                        |   8 +-
 38 files changed, 275 insertions(+), 538 deletions(-)

diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index a009f7132c73..2d51f0bdc13f 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -1152,21 +1152,21 @@ int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr)
 
 	mad_agent = mad_send_wr->send_buf.mad_agent;
 	sge = mad_send_wr->sg_list;
-	sge[0].addr = ib_dma_map_single(mad_agent->device,
+	sge[0].addr = dma_map_single(mad_agent->device->dma_device,
 					mad_send_wr->send_buf.mad,
 					sge[0].length,
 					DMA_TO_DEVICE);
-	if (unlikely(ib_dma_mapping_error(mad_agent->device, sge[0].addr)))
+	if (unlikely(dma_mapping_error(mad_agent->device->dma_device, sge[0].addr)))
 		return -ENOMEM;
 
 	mad_send_wr->header_mapping = sge[0].addr;
 
-	sge[1].addr = ib_dma_map_single(mad_agent->device,
+	sge[1].addr = dma_map_single(mad_agent->device->dma_device,
 					ib_get_payload(mad_send_wr),
 					sge[1].length,
 					DMA_TO_DEVICE);
-	if (unlikely(ib_dma_mapping_error(mad_agent->device, sge[1].addr))) {
-		ib_dma_unmap_single(mad_agent->device,
+	if (unlikely(dma_mapping_error(mad_agent->device->dma_device, sge[1].addr))) {
+		dma_unmap_single(mad_agent->device->dma_device,
 				    mad_send_wr->header_mapping,
 				    sge[0].length, DMA_TO_DEVICE);
 		return -ENOMEM;
@@ -1189,10 +1189,10 @@ int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr)
 	}
 	spin_unlock_irqrestore(&qp_info->send_queue.lock, flags);
 	if (ret) {
-		ib_dma_unmap_single(mad_agent->device,
+		dma_unmap_single(mad_agent->device->dma_device,
 				    mad_send_wr->header_mapping,
 				    sge[0].length, DMA_TO_DEVICE);
-		ib_dma_unmap_single(mad_agent->device,
+		dma_unmap_single(mad_agent->device->dma_device,
 				    mad_send_wr->payload_mapping,
 				    sge[1].length, DMA_TO_DEVICE);
 	}
@@ -2191,7 +2191,7 @@ static void ib_mad_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	mad_priv_hdr = container_of(mad_list, struct ib_mad_private_header,
 				    mad_list);
 	recv = container_of(mad_priv_hdr, struct ib_mad_private, header);
-	ib_dma_unmap_single(port_priv->device,
+	dma_unmap_single(port_priv->device->dma_device,
 			    recv->header.mapping,
 			    mad_priv_dma_size(recv),
 			    DMA_FROM_DEVICE);
@@ -2432,10 +2432,10 @@ static void ib_mad_send_done(struct ib_cq *cq, struct ib_wc *wc)
 	qp_info = send_queue->qp_info;
 
 retry:
-	ib_dma_unmap_single(mad_send_wr->send_buf.mad_agent->device,
+	dma_unmap_single(mad_send_wr->send_buf.mad_agent->device->dma_device,
 			    mad_send_wr->header_mapping,
 			    mad_send_wr->sg_list[0].length, DMA_TO_DEVICE);
-	ib_dma_unmap_single(mad_send_wr->send_buf.mad_agent->device,
+	dma_unmap_single(mad_send_wr->send_buf.mad_agent->device->dma_device,
 			    mad_send_wr->payload_mapping,
 			    mad_send_wr->sg_list[1].length, DMA_TO_DEVICE);
 	queued_send_wr = NULL;
@@ -2853,11 +2853,11 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
 			}
 		}
 		sg_list.length = mad_priv_dma_size(mad_priv);
-		sg_list.addr = ib_dma_map_single(qp_info->port_priv->device,
+		sg_list.addr = dma_map_single(qp_info->port_priv->device->dma_device,
 						 &mad_priv->grh,
 						 mad_priv_dma_size(mad_priv),
 						 DMA_FROM_DEVICE);
-		if (unlikely(ib_dma_mapping_error(qp_info->port_priv->device,
+		if (unlikely(dma_mapping_error(qp_info->port_priv->device->dma_device,
 						  sg_list.addr))) {
 			ret = -ENOMEM;
 			break;
@@ -2878,7 +2878,7 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
 			list_del(&mad_priv->header.mad_list.list);
 			recv_queue->count--;
 			spin_unlock_irqrestore(&recv_queue->lock, flags);
-			ib_dma_unmap_single(qp_info->port_priv->device,
+			dma_unmap_single(qp_info->port_priv->device->dma_device,
 					    mad_priv->header.mapping,
 					    mad_priv_dma_size(mad_priv),
 					    DMA_FROM_DEVICE);
@@ -2917,7 +2917,7 @@ static void cleanup_recv_queue(struct ib_mad_qp_info *qp_info)
 		/* Remove from posted receive MAD list */
 		list_del(&mad_list->list);
 
-		ib_dma_unmap_single(qp_info->port_priv->device,
+		dma_unmap_single(qp_info->port_priv->device->dma_device,
 				    recv->header.mapping,
 				    mad_priv_dma_size(recv),
 				    DMA_FROM_DEVICE);
diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
index dbfd854c32c9..f8aef874f636 100644
--- a/drivers/infiniband/core/rw.c
+++ b/drivers/infiniband/core/rw.c
@@ -178,7 +178,6 @@ static int rdma_rw_init_map_wrs(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		struct scatterlist *sg, u32 sg_cnt, u32 offset,
 		u64 remote_addr, u32 rkey, enum dma_data_direction dir)
 {
-	struct ib_device *dev = qp->pd->device;
 	u32 max_sge = dir == DMA_TO_DEVICE ? qp->max_write_sge :
 		      qp->max_read_sge;
 	struct ib_sge *sge;
@@ -208,8 +207,8 @@ static int rdma_rw_init_map_wrs(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		rdma_wr->wr.sg_list = sge;
 
 		for (j = 0; j < nr_sge; j++, sg = sg_next(sg)) {
-			sge->addr = ib_sg_dma_address(dev, sg) + offset;
-			sge->length = ib_sg_dma_len(dev, sg) - offset;
+			sge->addr = sg_dma_address(sg) + offset;
+			sge->length = sg_dma_len(sg) - offset;
 			sge->lkey = qp->pd->local_dma_lkey;
 
 			total_len += sge->length;
@@ -235,14 +234,13 @@ static int rdma_rw_init_single_wr(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		struct scatterlist *sg, u32 offset, u64 remote_addr, u32 rkey,
 		enum dma_data_direction dir)
 {
-	struct ib_device *dev = qp->pd->device;
 	struct ib_rdma_wr *rdma_wr = &ctx->single.wr;
 
 	ctx->nr_ops = 1;
 
 	ctx->single.sge.lkey = qp->pd->local_dma_lkey;
-	ctx->single.sge.addr = ib_sg_dma_address(dev, sg) + offset;
-	ctx->single.sge.length = ib_sg_dma_len(dev, sg) - offset;
+	ctx->single.sge.addr = sg_dma_address(sg) + offset;
+	ctx->single.sge.length = sg_dma_len(sg) - offset;
 
 	memset(rdma_wr, 0, sizeof(*rdma_wr));
 	if (dir == DMA_TO_DEVICE)
@@ -280,7 +278,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 	struct ib_device *dev = qp->pd->device;
 	int ret;
 
-	ret = ib_dma_map_sg(dev, sg, sg_cnt, dir);
+	ret = dma_map_sg(dev->dma_device, sg, sg_cnt, dir);
 	if (!ret)
 		return -ENOMEM;
 	sg_cnt = ret;
@@ -289,7 +287,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 	 * Skip to the S/G entry that sg_offset falls into:
 	 */
 	for (;;) {
-		u32 len = ib_sg_dma_len(dev, sg);
+		u32 len = sg_dma_len(sg);
 
 		if (sg_offset < len)
 			break;
@@ -319,7 +317,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 	return ret;
 
 out_unmap_sg:
-	ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
+	dma_unmap_sg(dev->dma_device, sg, sg_cnt, dir);
 	return ret;
 }
 EXPORT_SYMBOL(rdma_rw_ctx_init);
@@ -358,12 +356,12 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		return -EINVAL;
 	}
 
-	ret = ib_dma_map_sg(dev, sg, sg_cnt, dir);
+	ret = dma_map_sg(dev->dma_device, sg, sg_cnt, dir);
 	if (!ret)
 		return -ENOMEM;
 	sg_cnt = ret;
 
-	ret = ib_dma_map_sg(dev, prot_sg, prot_sg_cnt, dir);
+	ret = dma_map_sg(dev->dma_device, prot_sg, prot_sg_cnt, dir);
 	if (!ret) {
 		ret = -ENOMEM;
 		goto out_unmap_sg;
@@ -457,9 +455,9 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 out_free_ctx:
 	kfree(ctx->sig);
 out_unmap_prot_sg:
-	ib_dma_unmap_sg(dev, prot_sg, prot_sg_cnt, dir);
+	dma_unmap_sg(dev->dma_device, prot_sg, prot_sg_cnt, dir);
 out_unmap_sg:
-	ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
+	dma_unmap_sg(dev->dma_device, sg, sg_cnt, dir);
 	return ret;
 }
 EXPORT_SYMBOL(rdma_rw_ctx_signature_init);
@@ -606,7 +604,7 @@ void rdma_rw_ctx_destroy(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 		break;
 	}
 
-	ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+	dma_unmap_sg(qp->pd->device->dma_device, sg, sg_cnt, dir);
 }
 EXPORT_SYMBOL(rdma_rw_ctx_destroy);
 
@@ -631,11 +629,11 @@ void rdma_rw_ctx_destroy_signature(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		return;
 
 	ib_mr_pool_put(qp, &qp->rdma_mrs, ctx->sig->data.mr);
-	ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+	dma_unmap_sg(qp->pd->device->dma_device, sg, sg_cnt, dir);
 
 	if (ctx->sig->prot.mr) {
 		ib_mr_pool_put(qp, &qp->rdma_mrs, ctx->sig->prot.mr);
-		ib_dma_unmap_sg(qp->pd->device, prot_sg, prot_sg_cnt, dir);
+		dma_unmap_sg(qp->pd->device->dma_device, prot_sg, prot_sg_cnt, dir);
 	}
 
 	ib_mr_pool_put(qp, &qp->sig_mrs, ctx->sig->sig_mr);
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 1e62a5f0cb28..146ebdbd3f7c 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -50,7 +50,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	int i;
 
 	if (umem->nmap > 0)
-		ib_dma_unmap_sg(dev, umem->sg_head.sgl,
+		dma_unmap_sg(dev->dma_device, umem->sg_head.sgl,
 				umem->npages,
 				DMA_BIDIRECTIONAL);
 
@@ -214,7 +214,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 		sg_list_start = sg;
 	}
 
-	umem->nmap = ib_dma_map_sg_attrs(context->device,
+	umem->nmap = dma_map_sg_attrs(context->device->dma_device,
 				  umem->sg_head.sgl,
 				  umem->npages,
 				  DMA_BIDIRECTIONAL,
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 6b079a31dced..066628ec6ed0 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -456,11 +456,11 @@ static int ib_umem_odp_map_dma_single_page(
 		goto out;
 	}
 	if (!(umem->odp_data->dma_list[page_index])) {
-		dma_addr = ib_dma_map_page(dev,
+		dma_addr = dma_map_page(dev->dma_device,
 					   page,
 					   0, PAGE_SIZE,
 					   DMA_BIDIRECTIONAL);
-		if (ib_dma_mapping_error(dev, dma_addr)) {
+		if (dma_mapping_error(dev->dma_device, dma_addr)) {
 			ret = -EFAULT;
 			goto out;
 		}
@@ -645,7 +645,7 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem *umem, u64 virt,
 
 			WARN_ON(!dma_addr);
 
-			ib_dma_unmap_page(dev, dma_addr, PAGE_SIZE,
+			dma_unmap_page(dev->dma_device, dma_addr, PAGE_SIZE,
 					  DMA_BIDIRECTIONAL);
 			if (dma & ODP_WRITE_ALLOWED_BIT) {
 				struct page *head_page = compound_head(page);
diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c
index 6a0fec357dae..138ac59655ab 100644
--- a/drivers/infiniband/hw/mlx4/cq.c
+++ b/drivers/infiniband/hw/mlx4/cq.c
@@ -584,7 +584,7 @@ static void use_tunnel_data(struct mlx4_ib_qp *qp, struct mlx4_ib_cq *cq, struct
 {
 	struct mlx4_ib_proxy_sqp_hdr *hdr;
 
-	ib_dma_sync_single_for_cpu(qp->ibqp.device,
+	dma_sync_single_for_cpu(qp->ibqp.device->dma_device,
 				   qp->sqp_proxy_rcv[tail].map,
 				   sizeof (struct mlx4_ib_proxy_sqp_hdr),
 				   DMA_FROM_DEVICE);
diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
index db564ccc0f92..221fe481e6f7 100644
--- a/drivers/infiniband/hw/mlx4/mad.c
+++ b/drivers/infiniband/hw/mlx4/mad.c
@@ -582,7 +582,7 @@ int mlx4_ib_send_to_slave(struct mlx4_ib_dev *dev, int slave, u8 port,
 	if (tun_qp->tx_ring[tun_tx_ix].ah)
 		ib_destroy_ah(tun_qp->tx_ring[tun_tx_ix].ah);
 	tun_qp->tx_ring[tun_tx_ix].ah = ah;
-	ib_dma_sync_single_for_cpu(&dev->ib_dev,
+	dma_sync_single_for_cpu(dev->ib_dev.dma_device,
 				   tun_qp->tx_ring[tun_tx_ix].buf.map,
 				   sizeof (struct mlx4_rcv_tunnel_mad),
 				   DMA_TO_DEVICE);
@@ -624,7 +624,7 @@ int mlx4_ib_send_to_slave(struct mlx4_ib_dev *dev, int slave, u8 port,
 		tun_mad->hdr.slid_mac_47_32 = cpu_to_be16(wc->slid);
 	}
 
-	ib_dma_sync_single_for_device(&dev->ib_dev,
+	dma_sync_single_for_device(dev->ib_dev.dma_device,
 				      tun_qp->tx_ring[tun_tx_ix].buf.map,
 				      sizeof (struct mlx4_rcv_tunnel_mad),
 				      DMA_TO_DEVICE);
@@ -1321,7 +1321,7 @@ static int mlx4_ib_post_pv_qp_buf(struct mlx4_ib_demux_pv_ctx *ctx,
 	recv_wr.num_sge = 1;
 	recv_wr.wr_id = (u64) index | MLX4_TUN_WRID_RECV |
 		MLX4_TUN_SET_WRID_QPN(tun_qp->proxy_qpt);
-	ib_dma_sync_single_for_device(ctx->ib_dev, tun_qp->ring[index].map,
+	dma_sync_single_for_device(ctx->ib_dev->dma_device, tun_qp->ring[index].map,
 				      size, DMA_FROM_DEVICE);
 	return ib_post_recv(tun_qp->qp, &recv_wr, &bad_recv_wr);
 }
@@ -1412,14 +1412,14 @@ int mlx4_ib_send_to_wire(struct mlx4_ib_dev *dev, int slave, u8 port,
 	if (sqp->tx_ring[wire_tx_ix].ah)
 		ib_destroy_ah(sqp->tx_ring[wire_tx_ix].ah);
 	sqp->tx_ring[wire_tx_ix].ah = ah;
-	ib_dma_sync_single_for_cpu(&dev->ib_dev,
+	dma_sync_single_for_cpu(dev->ib_dev.dma_device,
 				   sqp->tx_ring[wire_tx_ix].buf.map,
 				   sizeof (struct mlx4_mad_snd_buf),
 				   DMA_TO_DEVICE);
 
 	memcpy(&sqp_mad->payload, mad, sizeof *mad);
 
-	ib_dma_sync_single_for_device(&dev->ib_dev,
+	dma_sync_single_for_device(dev->ib_dev.dma_device,
 				      sqp->tx_ring[wire_tx_ix].buf.map,
 				      sizeof (struct mlx4_mad_snd_buf),
 				      DMA_TO_DEVICE);
@@ -1504,7 +1504,7 @@ static void mlx4_ib_multiplex_mad(struct mlx4_ib_demux_pv_ctx *ctx, struct ib_wc
 	}
 
 	/* Map transaction ID */
-	ib_dma_sync_single_for_cpu(ctx->ib_dev, tun_qp->ring[wr_ix].map,
+	dma_sync_single_for_cpu(ctx->ib_dev->dma_device, tun_qp->ring[wr_ix].map,
 				   sizeof (struct mlx4_tunnel_mad),
 				   DMA_FROM_DEVICE);
 	switch (tunnel->mad.mad_hdr.method) {
@@ -1627,11 +1627,11 @@ static int mlx4_ib_alloc_pv_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
 		tun_qp->ring[i].addr = kmalloc(rx_buf_size, GFP_KERNEL);
 		if (!tun_qp->ring[i].addr)
 			goto err;
-		tun_qp->ring[i].map = ib_dma_map_single(ctx->ib_dev,
+		tun_qp->ring[i].map = dma_map_single(ctx->ib_dev->dma_device,
 							tun_qp->ring[i].addr,
 							rx_buf_size,
 							DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(ctx->ib_dev, tun_qp->ring[i].map)) {
+		if (dma_mapping_error(ctx->ib_dev->dma_device, tun_qp->ring[i].map)) {
 			kfree(tun_qp->ring[i].addr);
 			goto err;
 		}
@@ -1643,11 +1643,11 @@ static int mlx4_ib_alloc_pv_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
 		if (!tun_qp->tx_ring[i].buf.addr)
 			goto tx_err;
 		tun_qp->tx_ring[i].buf.map =
-			ib_dma_map_single(ctx->ib_dev,
+			dma_map_single(ctx->ib_dev->dma_device,
 					  tun_qp->tx_ring[i].buf.addr,
 					  tx_buf_size,
 					  DMA_TO_DEVICE);
-		if (ib_dma_mapping_error(ctx->ib_dev,
+		if (dma_mapping_error(ctx->ib_dev->dma_device,
 					 tun_qp->tx_ring[i].buf.map)) {
 			kfree(tun_qp->tx_ring[i].buf.addr);
 			goto tx_err;
@@ -1664,7 +1664,7 @@ static int mlx4_ib_alloc_pv_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
 tx_err:
 	while (i > 0) {
 		--i;
-		ib_dma_unmap_single(ctx->ib_dev, tun_qp->tx_ring[i].buf.map,
+		dma_unmap_single(ctx->ib_dev->dma_device, tun_qp->tx_ring[i].buf.map,
 				    tx_buf_size, DMA_TO_DEVICE);
 		kfree(tun_qp->tx_ring[i].buf.addr);
 	}
@@ -1674,7 +1674,7 @@ static int mlx4_ib_alloc_pv_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
 err:
 	while (i > 0) {
 		--i;
-		ib_dma_unmap_single(ctx->ib_dev, tun_qp->ring[i].map,
+		dma_unmap_single(ctx->ib_dev->dma_device, tun_qp->ring[i].map,
 				    rx_buf_size, DMA_FROM_DEVICE);
 		kfree(tun_qp->ring[i].addr);
 	}
@@ -1704,13 +1704,13 @@ static void mlx4_ib_free_pv_qp_bufs(struct mlx4_ib_demux_pv_ctx *ctx,
 
 
 	for (i = 0; i < MLX4_NUM_TUNNEL_BUFS; i++) {
-		ib_dma_unmap_single(ctx->ib_dev, tun_qp->ring[i].map,
+		dma_unmap_single(ctx->ib_dev->dma_device, tun_qp->ring[i].map,
 				    rx_buf_size, DMA_FROM_DEVICE);
 		kfree(tun_qp->ring[i].addr);
 	}
 
 	for (i = 0; i < MLX4_NUM_TUNNEL_BUFS; i++) {
-		ib_dma_unmap_single(ctx->ib_dev, tun_qp->tx_ring[i].buf.map,
+		dma_unmap_single(ctx->ib_dev->dma_device, tun_qp->tx_ring[i].buf.map,
 				    tx_buf_size, DMA_TO_DEVICE);
 		kfree(tun_qp->tx_ring[i].buf.addr);
 		if (tun_qp->tx_ring[i].ah)
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index 5d73989d9771..89297b832a9f 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -538,12 +538,12 @@ int mlx4_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
 
 	mr->npages = 0;
 
-	ib_dma_sync_single_for_cpu(ibmr->device, mr->page_map,
+	dma_sync_single_for_cpu(ibmr->device->dma_device, mr->page_map,
 				   mr->page_map_size, DMA_TO_DEVICE);
 
 	rc = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, mlx4_set_page);
 
-	ib_dma_sync_single_for_device(ibmr->device, mr->page_map,
+	dma_sync_single_for_device(ibmr->device->dma_device, mr->page_map,
 				      mr->page_map_size, DMA_TO_DEVICE);
 
 	return rc;
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index c068add8838b..a8c4ef02aef1 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -570,10 +570,10 @@ static int alloc_proxy_bufs(struct ib_device *dev, struct mlx4_ib_qp *qp)
 		if (!qp->sqp_proxy_rcv[i].addr)
 			goto err;
 		qp->sqp_proxy_rcv[i].map =
-			ib_dma_map_single(dev, qp->sqp_proxy_rcv[i].addr,
+			dma_map_single(dev->dma_device, qp->sqp_proxy_rcv[i].addr,
 					  sizeof (struct mlx4_ib_proxy_sqp_hdr),
 					  DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(dev, qp->sqp_proxy_rcv[i].map)) {
+		if (dma_mapping_error(dev->dma_device, qp->sqp_proxy_rcv[i].map)) {
 			kfree(qp->sqp_proxy_rcv[i].addr);
 			goto err;
 		}
@@ -583,7 +583,7 @@ static int alloc_proxy_bufs(struct ib_device *dev, struct mlx4_ib_qp *qp)
 err:
 	while (i > 0) {
 		--i;
-		ib_dma_unmap_single(dev, qp->sqp_proxy_rcv[i].map,
+		dma_unmap_single(dev->dma_device, qp->sqp_proxy_rcv[i].map,
 				    sizeof (struct mlx4_ib_proxy_sqp_hdr),
 				    DMA_FROM_DEVICE);
 		kfree(qp->sqp_proxy_rcv[i].addr);
@@ -598,7 +598,7 @@ static void free_proxy_bufs(struct ib_device *dev, struct mlx4_ib_qp *qp)
 	int i;
 
 	for (i = 0; i < qp->rq.wqe_cnt; i++) {
-		ib_dma_unmap_single(dev, qp->sqp_proxy_rcv[i].map,
+		dma_unmap_single(dev->dma_device, qp->sqp_proxy_rcv[i].map,
 				    sizeof (struct mlx4_ib_proxy_sqp_hdr),
 				    DMA_FROM_DEVICE);
 		kfree(qp->sqp_proxy_rcv[i].addr);
@@ -3306,7 +3306,7 @@ int mlx4_ib_post_recv(struct ib_qp *ibqp, struct ib_recv_wr *wr,
 
 		if (qp->mlx4_ib_qp_type & (MLX4_IB_QPT_PROXY_SMI_OWNER |
 		    MLX4_IB_QPT_PROXY_SMI | MLX4_IB_QPT_PROXY_GSI)) {
-			ib_dma_sync_single_for_device(ibqp->device,
+			dma_sync_single_for_device(ibqp->device->dma_device,
 						      qp->sqp_proxy_rcv[ind].map,
 						      sizeof (struct mlx4_ib_proxy_sqp_hdr),
 						      DMA_FROM_DEVICE);
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 00175191fdc6..68655e994bfe 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1865,7 +1865,7 @@ int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
 
 	mr->ndescs = 0;
 
-	ib_dma_sync_single_for_cpu(ibmr->device, mr->desc_map,
+	dma_sync_single_for_cpu(ibmr->device->dma_device, mr->desc_map,
 				   mr->desc_size * mr->max_descs,
 				   DMA_TO_DEVICE);
 
@@ -1875,7 +1875,7 @@ int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
 		n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset,
 				mlx5_set_page);
 
-	ib_dma_sync_single_for_device(ibmr->device, mr->desc_map,
+	dma_sync_single_for_device(ibmr->device->dma_device, mr->desc_map,
 				      mr->desc_size * mr->max_descs,
 				      DMA_TO_DEVICE);
 
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index 096c4f6fbd65..0c6b0a0c607b 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -83,10 +83,10 @@ static void ipoib_cm_dma_unmap_rx(struct ipoib_dev_priv *priv, int frags,
 {
 	int i;
 
-	ib_dma_unmap_single(priv->ca, mapping[0], IPOIB_CM_HEAD_SIZE, DMA_FROM_DEVICE);
+	dma_unmap_single(priv->ca->dma_device, mapping[0], IPOIB_CM_HEAD_SIZE, DMA_FROM_DEVICE);
 
 	for (i = 0; i < frags; ++i)
-		ib_dma_unmap_page(priv->ca, mapping[i + 1], PAGE_SIZE, DMA_FROM_DEVICE);
+		dma_unmap_page(priv->ca->dma_device, mapping[i + 1], PAGE_SIZE, DMA_FROM_DEVICE);
 }
 
 static int ipoib_cm_post_receive_srq(struct net_device *dev, int id)
@@ -158,9 +158,9 @@ static struct sk_buff *ipoib_cm_alloc_rx_skb(struct net_device *dev,
 	 */
 	skb_reserve(skb, IPOIB_CM_RX_RESERVE);
 
-	mapping[0] = ib_dma_map_single(priv->ca, skb->data, IPOIB_CM_HEAD_SIZE,
+	mapping[0] = dma_map_single(priv->ca->dma_device, skb->data, IPOIB_CM_HEAD_SIZE,
 				       DMA_FROM_DEVICE);
-	if (unlikely(ib_dma_mapping_error(priv->ca, mapping[0]))) {
+	if (unlikely(dma_mapping_error(priv->ca->dma_device, mapping[0]))) {
 		dev_kfree_skb_any(skb);
 		return NULL;
 	}
@@ -172,9 +172,9 @@ static struct sk_buff *ipoib_cm_alloc_rx_skb(struct net_device *dev,
 			goto partial_error;
 		skb_fill_page_desc(skb, i, page, 0, PAGE_SIZE);
 
-		mapping[i + 1] = ib_dma_map_page(priv->ca, page,
+		mapping[i + 1] = dma_map_page(priv->ca->dma_device, page,
 						 0, PAGE_SIZE, DMA_FROM_DEVICE);
-		if (unlikely(ib_dma_mapping_error(priv->ca, mapping[i + 1])))
+		if (unlikely(dma_mapping_error(priv->ca->dma_device, mapping[i + 1])))
 			goto partial_error;
 	}
 
@@ -183,10 +183,10 @@ static struct sk_buff *ipoib_cm_alloc_rx_skb(struct net_device *dev,
 
 partial_error:
 
-	ib_dma_unmap_single(priv->ca, mapping[0], IPOIB_CM_HEAD_SIZE, DMA_FROM_DEVICE);
+	dma_unmap_single(priv->ca->dma_device, mapping[0], IPOIB_CM_HEAD_SIZE, DMA_FROM_DEVICE);
 
 	for (; i > 0; --i)
-		ib_dma_unmap_page(priv->ca, mapping[i], PAGE_SIZE, DMA_FROM_DEVICE);
+		dma_unmap_page(priv->ca->dma_device, mapping[i], PAGE_SIZE, DMA_FROM_DEVICE);
 
 	dev_kfree_skb_any(skb);
 	return NULL;
@@ -626,10 +626,10 @@ void ipoib_cm_handle_rx_wc(struct net_device *dev, struct ib_wc *wc)
 		small_skb = dev_alloc_skb(dlen + IPOIB_CM_RX_RESERVE);
 		if (small_skb) {
 			skb_reserve(small_skb, IPOIB_CM_RX_RESERVE);
-			ib_dma_sync_single_for_cpu(priv->ca, rx_ring[wr_id].mapping[0],
+			dma_sync_single_for_cpu(priv->ca->dma_device, rx_ring[wr_id].mapping[0],
 						   dlen, DMA_FROM_DEVICE);
 			skb_copy_from_linear_data(skb, small_skb->data, dlen);
-			ib_dma_sync_single_for_device(priv->ca, rx_ring[wr_id].mapping[0],
+			dma_sync_single_for_device(priv->ca->dma_device, rx_ring[wr_id].mapping[0],
 						      dlen, DMA_FROM_DEVICE);
 			skb_put(small_skb, dlen);
 			skb = small_skb;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index 5038f9d2d753..ccf540abedac 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -92,7 +92,7 @@ void ipoib_free_ah(struct kref *kref)
 static void ipoib_ud_dma_unmap_rx(struct ipoib_dev_priv *priv,
 				  u64 mapping[IPOIB_UD_RX_SG])
 {
-	ib_dma_unmap_single(priv->ca, mapping[0],
+	dma_unmap_single(priv->ca->dma_device, mapping[0],
 			    IPOIB_UD_BUF_SIZE(priv->max_ib_mtu),
 			    DMA_FROM_DEVICE);
 }
@@ -139,9 +139,9 @@ static struct sk_buff *ipoib_alloc_rx_skb(struct net_device *dev, int id)
 	skb_reserve(skb, sizeof(struct ipoib_pseudo_header));
 
 	mapping = priv->rx_ring[id].mapping;
-	mapping[0] = ib_dma_map_single(priv->ca, skb->data, buf_size,
+	mapping[0] = dma_map_single(priv->ca->dma_device, skb->data, buf_size,
 				       DMA_FROM_DEVICE);
-	if (unlikely(ib_dma_mapping_error(priv->ca, mapping[0])))
+	if (unlikely(dma_mapping_error(priv->ca->dma_device, mapping[0])))
 		goto error;
 
 	priv->rx_ring[id].skb = skb;
@@ -278,9 +278,9 @@ int ipoib_dma_map_tx(struct ib_device *ca, struct ipoib_tx_buf *tx_req)
 	int off;
 
 	if (skb_headlen(skb)) {
-		mapping[0] = ib_dma_map_single(ca, skb->data, skb_headlen(skb),
+		mapping[0] = dma_map_single(ca->dma_device, skb->data, skb_headlen(skb),
 					       DMA_TO_DEVICE);
-		if (unlikely(ib_dma_mapping_error(ca, mapping[0])))
+		if (unlikely(dma_mapping_error(ca->dma_device, mapping[0])))
 			return -EIO;
 
 		off = 1;
@@ -289,11 +289,11 @@ int ipoib_dma_map_tx(struct ib_device *ca, struct ipoib_tx_buf *tx_req)
 
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; ++i) {
 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
-		mapping[i + off] = ib_dma_map_page(ca,
+		mapping[i + off] = dma_map_page(ca->dma_device,
 						 skb_frag_page(frag),
 						 frag->page_offset, skb_frag_size(frag),
 						 DMA_TO_DEVICE);
-		if (unlikely(ib_dma_mapping_error(ca, mapping[i + off])))
+		if (unlikely(dma_mapping_error(ca->dma_device, mapping[i + off])))
 			goto partial_error;
 	}
 	return 0;
@@ -302,11 +302,11 @@ int ipoib_dma_map_tx(struct ib_device *ca, struct ipoib_tx_buf *tx_req)
 	for (; i > 0; --i) {
 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i - 1];
 
-		ib_dma_unmap_page(ca, mapping[i - !off], skb_frag_size(frag), DMA_TO_DEVICE);
+		dma_unmap_page(ca->dma_device, mapping[i - !off], skb_frag_size(frag), DMA_TO_DEVICE);
 	}
 
 	if (off)
-		ib_dma_unmap_single(ca, mapping[0], skb_headlen(skb), DMA_TO_DEVICE);
+		dma_unmap_single(ca->dma_device, mapping[0], skb_headlen(skb), DMA_TO_DEVICE);
 
 	return -EIO;
 }
@@ -320,7 +320,7 @@ void ipoib_dma_unmap_tx(struct ipoib_dev_priv *priv,
 	int off;
 
 	if (skb_headlen(skb)) {
-		ib_dma_unmap_single(priv->ca, mapping[0], skb_headlen(skb),
+		dma_unmap_single(priv->ca->dma_device, mapping[0], skb_headlen(skb),
 				    DMA_TO_DEVICE);
 		off = 1;
 	} else
@@ -329,7 +329,7 @@ void ipoib_dma_unmap_tx(struct ipoib_dev_priv *priv,
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; ++i) {
 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 
-		ib_dma_unmap_page(priv->ca, mapping[i + off],
+		dma_unmap_page(priv->ca->dma_device, mapping[i + off],
 				  skb_frag_size(frag), DMA_TO_DEVICE);
 	}
 }
diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
index 9104e6b8cac9..f2b2baccfe69 100644
--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
+++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
@@ -198,9 +198,9 @@ iser_initialize_task_headers(struct iscsi_task *task,
 		goto out;
 	}
 
-	dma_addr = ib_dma_map_single(device->ib_device, (void *)tx_desc,
+	dma_addr = dma_map_single(device->ib_device->dma_device, (void *)tx_desc,
 				ISER_HEADERS_LEN, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(device->ib_device, dma_addr)) {
+	if (dma_mapping_error(device->ib_device->dma_device, dma_addr)) {
 		ret = -ENOMEM;
 		goto out;
 	}
@@ -375,7 +375,7 @@ static void iscsi_iser_cleanup_task(struct iscsi_task *task)
 		return;
 
 	if (likely(tx_desc->mapped)) {
-		ib_dma_unmap_single(device->ib_device, tx_desc->dma_addr,
+		dma_unmap_single(device->ib_device->dma_device, tx_desc->dma_addr,
 				    ISER_HEADERS_LEN, DMA_TO_DEVICE);
 		tx_desc->mapped = false;
 	}
diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
index 81ae2e30dd12..2e65513833bd 100644
--- a/drivers/infiniband/ulp/iser/iser_initiator.c
+++ b/drivers/infiniband/ulp/iser/iser_initiator.c
@@ -164,7 +164,7 @@ static void iser_create_send_desc(struct iser_conn	*iser_conn,
 {
 	struct iser_device *device = iser_conn->ib_conn.device;
 
-	ib_dma_sync_single_for_cpu(device->ib_device,
+	dma_sync_single_for_cpu(device->ib_device->dma_device,
 		tx_desc->dma_addr, ISER_HEADERS_LEN, DMA_TO_DEVICE);
 
 	memset(&tx_desc->iser_header, 0, sizeof(struct iser_ctrl));
@@ -180,10 +180,10 @@ static void iser_free_login_buf(struct iser_conn *iser_conn)
 	if (!desc->req)
 		return;
 
-	ib_dma_unmap_single(device->ib_device, desc->req_dma,
+	dma_unmap_single(device->ib_device->dma_device, desc->req_dma,
 			    ISCSI_DEF_MAX_RECV_SEG_LEN, DMA_TO_DEVICE);
 
-	ib_dma_unmap_single(device->ib_device, desc->rsp_dma,
+	dma_unmap_single(device->ib_device->dma_device, desc->rsp_dma,
 			    ISER_RX_LOGIN_SIZE, DMA_FROM_DEVICE);
 
 	kfree(desc->req);
@@ -203,10 +203,10 @@ static int iser_alloc_login_buf(struct iser_conn *iser_conn)
 	if (!desc->req)
 		return -ENOMEM;
 
-	desc->req_dma = ib_dma_map_single(device->ib_device, desc->req,
+	desc->req_dma = dma_map_single(device->ib_device->dma_device, desc->req,
 					  ISCSI_DEF_MAX_RECV_SEG_LEN,
 					  DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(device->ib_device,
+	if (dma_mapping_error(device->ib_device->dma_device,
 				desc->req_dma))
 		goto free_req;
 
@@ -214,10 +214,10 @@ static int iser_alloc_login_buf(struct iser_conn *iser_conn)
 	if (!desc->rsp)
 		goto unmap_req;
 
-	desc->rsp_dma = ib_dma_map_single(device->ib_device, desc->rsp,
+	desc->rsp_dma = dma_map_single(device->ib_device->dma_device, desc->rsp,
 					   ISER_RX_LOGIN_SIZE,
 					   DMA_FROM_DEVICE);
-	if (ib_dma_mapping_error(device->ib_device,
+	if (dma_mapping_error(device->ib_device->dma_device,
 				desc->rsp_dma))
 		goto free_rsp;
 
@@ -226,7 +226,7 @@ static int iser_alloc_login_buf(struct iser_conn *iser_conn)
 free_rsp:
 	kfree(desc->rsp);
 unmap_req:
-	ib_dma_unmap_single(device->ib_device, desc->req_dma,
+	dma_unmap_single(device->ib_device->dma_device, desc->req_dma,
 			    ISCSI_DEF_MAX_RECV_SEG_LEN,
 			    DMA_TO_DEVICE);
 free_req:
@@ -265,9 +265,9 @@ int iser_alloc_rx_descriptors(struct iser_conn *iser_conn,
 	rx_desc = iser_conn->rx_descs;
 
 	for (i = 0; i < iser_conn->qp_max_recv_dtos; i++, rx_desc++)  {
-		dma_addr = ib_dma_map_single(device->ib_device, (void *)rx_desc,
+		dma_addr = dma_map_single(device->ib_device->dma_device, (void *)rx_desc,
 					ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(device->ib_device, dma_addr))
+		if (dma_mapping_error(device->ib_device->dma_device, dma_addr))
 			goto rx_desc_dma_map_failed;
 
 		rx_desc->dma_addr = dma_addr;
@@ -284,7 +284,7 @@ int iser_alloc_rx_descriptors(struct iser_conn *iser_conn,
 rx_desc_dma_map_failed:
 	rx_desc = iser_conn->rx_descs;
 	for (j = 0; j < i; j++, rx_desc++)
-		ib_dma_unmap_single(device->ib_device, rx_desc->dma_addr,
+		dma_unmap_single(device->ib_device->dma_device, rx_desc->dma_addr,
 				    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 	kfree(iser_conn->rx_descs);
 	iser_conn->rx_descs = NULL;
@@ -309,7 +309,7 @@ void iser_free_rx_descriptors(struct iser_conn *iser_conn)
 
 	rx_desc = iser_conn->rx_descs;
 	for (i = 0; i < iser_conn->qp_max_recv_dtos; i++, rx_desc++)
-		ib_dma_unmap_single(device->ib_device, rx_desc->dma_addr,
+		dma_unmap_single(device->ib_device->dma_device, rx_desc->dma_addr,
 				    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 	kfree(iser_conn->rx_descs);
 	/* make sure we never redo any unmapping */
@@ -522,12 +522,12 @@ int iser_send_control(struct iscsi_conn *conn,
 			goto send_control_error;
 		}
 
-		ib_dma_sync_single_for_cpu(device->ib_device, desc->req_dma,
+		dma_sync_single_for_cpu(device->ib_device->dma_device, desc->req_dma,
 					   task->data_count, DMA_TO_DEVICE);
 
 		memcpy(desc->req, task->data, task->data_count);
 
-		ib_dma_sync_single_for_device(device->ib_device, desc->req_dma,
+		dma_sync_single_for_device(device->ib_device->dma_device, desc->req_dma,
 					      task->data_count, DMA_TO_DEVICE);
 
 		tx_dsg->addr = desc->req_dma;
@@ -570,7 +570,7 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(ib_conn->device->ib_device,
+	dma_sync_single_for_cpu(ib_conn->device->ib_device->dma_device,
 				   desc->rsp_dma, ISER_RX_LOGIN_SIZE,
 				   DMA_FROM_DEVICE);
 
@@ -583,7 +583,7 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc)
 
 	iscsi_iser_recv(iser_conn->iscsi_conn, hdr, data, length);
 
-	ib_dma_sync_single_for_device(ib_conn->device->ib_device,
+	dma_sync_single_for_device(ib_conn->device->ib_device->dma_device,
 				      desc->rsp_dma, ISER_RX_LOGIN_SIZE,
 				      DMA_FROM_DEVICE);
 
@@ -655,7 +655,7 @@ void iser_task_rsp(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(ib_conn->device->ib_device,
+	dma_sync_single_for_cpu(ib_conn->device->ib_device->dma_device,
 				   desc->dma_addr, ISER_RX_PAYLOAD_SIZE,
 				   DMA_FROM_DEVICE);
 
@@ -673,7 +673,7 @@ void iser_task_rsp(struct ib_cq *cq, struct ib_wc *wc)
 
 	iscsi_iser_recv(iser_conn->iscsi_conn, hdr, desc->data, length);
 
-	ib_dma_sync_single_for_device(ib_conn->device->ib_device,
+	dma_sync_single_for_device(ib_conn->device->ib_device->dma_device,
 				      desc->dma_addr, ISER_RX_PAYLOAD_SIZE,
 				      DMA_FROM_DEVICE);
 
@@ -724,7 +724,7 @@ void iser_dataout_comp(struct ib_cq *cq, struct ib_wc *wc)
 	if (unlikely(wc->status != IB_WC_SUCCESS))
 		iser_err_comp(wc, "dataout");
 
-	ib_dma_unmap_single(device->ib_device, desc->dma_addr,
+	dma_unmap_single(device->ib_device->dma_device, desc->dma_addr,
 			    ISER_HEADERS_LEN, DMA_TO_DEVICE);
 	kmem_cache_free(ig.desc_cache, desc);
 }
diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c
index 9c3e9ab53a41..ae89be7d69a9 100644
--- a/drivers/infiniband/ulp/iser/iser_memory.c
+++ b/drivers/infiniband/ulp/iser/iser_memory.c
@@ -145,9 +145,9 @@ static void iser_data_buf_dump(struct iser_data_buf *data,
 	for_each_sg(data->sg, sg, data->dma_nents, i)
 		iser_dbg("sg[%d] dma_addr:0x%lX page:0x%p "
 			 "off:0x%x sz:0x%x dma_len:0x%x\n",
-			 i, (unsigned long)ib_sg_dma_address(ibdev, sg),
+			 i, (unsigned long)sg_dma_address(sg),
 			 sg_page(sg), sg->offset,
-			 sg->length, ib_sg_dma_len(ibdev, sg));
+			 sg->length, sg_dma_len(sg));
 }
 
 static void iser_dump_page_vec(struct iser_page_vec *page_vec)
@@ -170,7 +170,7 @@ int iser_dma_map_task_data(struct iscsi_iser_task *iser_task,
 	iser_task->dir[iser_dir] = 1;
 	dev = iser_task->iser_conn->ib_conn.device->ib_device;
 
-	data->dma_nents = ib_dma_map_sg(dev, data->sg, data->size, dma_dir);
+	data->dma_nents = dma_map_sg(dev->dma_device, data->sg, data->size, dma_dir);
 	if (data->dma_nents == 0) {
 		iser_err("dma_map_sg failed!!!\n");
 		return -EINVAL;
@@ -185,7 +185,7 @@ void iser_dma_unmap_task_data(struct iscsi_iser_task *iser_task,
 	struct ib_device *dev;
 
 	dev = iser_task->iser_conn->ib_conn.device->ib_device;
-	ib_dma_unmap_sg(dev, data->sg, data->size, dir);
+	dma_unmap_sg(dev->dma_device, data->sg, data->size, dir);
 }
 
 static int
@@ -204,8 +204,8 @@ iser_reg_dma(struct iser_device *device, struct iser_data_buf *mem,
 		reg->rkey = device->pd->unsafe_global_rkey;
 	else
 		reg->rkey = 0;
-	reg->sge.addr = ib_sg_dma_address(device->ib_device, &sg[0]);
-	reg->sge.length = ib_sg_dma_len(device->ib_device, &sg[0]);
+	reg->sge.addr = sg_dma_address(&sg[0]);
+	reg->sge.length = sg_dma_len(&sg[0]);
 
 	iser_dbg("Single DMA entry: lkey=0x%x, rkey=0x%x, addr=0x%llx,"
 		 " length=0x%x\n", reg->sge.lkey, reg->rkey,
diff --git a/drivers/infiniband/ulp/iser/iser_verbs.c b/drivers/infiniband/ulp/iser/iser_verbs.c
index 8ae7a3beddb7..632e57f9bd58 100644
--- a/drivers/infiniband/ulp/iser/iser_verbs.c
+++ b/drivers/infiniband/ulp/iser/iser_verbs.c
@@ -1077,7 +1077,7 @@ int iser_post_send(struct ib_conn *ib_conn, struct iser_tx_desc *tx_desc,
 	struct ib_send_wr *bad_wr, *wr = iser_tx_next_wr(tx_desc);
 	int ib_ret;
 
-	ib_dma_sync_single_for_device(ib_conn->device->ib_device,
+	dma_sync_single_for_device(ib_conn->device->ib_device->dma_device,
 				      tx_desc->dma_addr, ISER_HEADERS_LEN,
 				      DMA_TO_DEVICE);
 
diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index 314e95516068..e81c49d10d29 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -189,9 +189,9 @@ isert_alloc_rx_descriptors(struct isert_conn *isert_conn)
 	rx_desc = isert_conn->rx_descs;
 
 	for (i = 0; i < ISERT_QP_MAX_RECV_DTOS; i++, rx_desc++)  {
-		dma_addr = ib_dma_map_single(ib_dev, (void *)rx_desc,
+		dma_addr = dma_map_single(ib_dev->dma_device, (void *)rx_desc,
 					ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(ib_dev, dma_addr))
+		if (dma_mapping_error(ib_dev->dma_device, dma_addr))
 			goto dma_map_fail;
 
 		rx_desc->dma_addr = dma_addr;
@@ -208,7 +208,7 @@ isert_alloc_rx_descriptors(struct isert_conn *isert_conn)
 dma_map_fail:
 	rx_desc = isert_conn->rx_descs;
 	for (j = 0; j < i; j++, rx_desc++) {
-		ib_dma_unmap_single(ib_dev, rx_desc->dma_addr,
+		dma_unmap_single(ib_dev->dma_device, rx_desc->dma_addr,
 				    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 	}
 	kfree(isert_conn->rx_descs);
@@ -229,7 +229,7 @@ isert_free_rx_descriptors(struct isert_conn *isert_conn)
 
 	rx_desc = isert_conn->rx_descs;
 	for (i = 0; i < ISERT_QP_MAX_RECV_DTOS; i++, rx_desc++)  {
-		ib_dma_unmap_single(ib_dev, rx_desc->dma_addr,
+		dma_unmap_single(ib_dev->dma_device, rx_desc->dma_addr,
 				    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 	}
 
@@ -410,11 +410,11 @@ isert_free_login_buf(struct isert_conn *isert_conn)
 {
 	struct ib_device *ib_dev = isert_conn->device->ib_device;
 
-	ib_dma_unmap_single(ib_dev, isert_conn->login_rsp_dma,
+	dma_unmap_single(ib_dev->dma_device, isert_conn->login_rsp_dma,
 			    ISER_RX_PAYLOAD_SIZE, DMA_TO_DEVICE);
 	kfree(isert_conn->login_rsp_buf);
 
-	ib_dma_unmap_single(ib_dev, isert_conn->login_req_dma,
+	dma_unmap_single(ib_dev->dma_device, isert_conn->login_req_dma,
 			    ISER_RX_PAYLOAD_SIZE,
 			    DMA_FROM_DEVICE);
 	kfree(isert_conn->login_req_buf);
@@ -431,10 +431,10 @@ isert_alloc_login_buf(struct isert_conn *isert_conn,
 	if (!isert_conn->login_req_buf)
 		return -ENOMEM;
 
-	isert_conn->login_req_dma = ib_dma_map_single(ib_dev,
+	isert_conn->login_req_dma = dma_map_single(ib_dev->dma_device,
 				isert_conn->login_req_buf,
 				ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
-	ret = ib_dma_mapping_error(ib_dev, isert_conn->login_req_dma);
+	ret = dma_mapping_error(ib_dev->dma_device, isert_conn->login_req_dma);
 	if (ret) {
 		isert_err("login_req_dma mapping error: %d\n", ret);
 		isert_conn->login_req_dma = 0;
@@ -447,10 +447,10 @@ isert_alloc_login_buf(struct isert_conn *isert_conn,
 		goto out_unmap_login_req_buf;
 	}
 
-	isert_conn->login_rsp_dma = ib_dma_map_single(ib_dev,
+	isert_conn->login_rsp_dma = dma_map_single(ib_dev->dma_device,
 					isert_conn->login_rsp_buf,
 					ISER_RX_PAYLOAD_SIZE, DMA_TO_DEVICE);
-	ret = ib_dma_mapping_error(ib_dev, isert_conn->login_rsp_dma);
+	ret = dma_mapping_error(ib_dev->dma_device, isert_conn->login_rsp_dma);
 	if (ret) {
 		isert_err("login_rsp_dma mapping error: %d\n", ret);
 		isert_conn->login_rsp_dma = 0;
@@ -462,7 +462,7 @@ isert_alloc_login_buf(struct isert_conn *isert_conn,
 out_free_login_rsp_buf:
 	kfree(isert_conn->login_rsp_buf);
 out_unmap_login_req_buf:
-	ib_dma_unmap_single(ib_dev, isert_conn->login_req_dma,
+	dma_unmap_single(ib_dev->dma_device, isert_conn->login_req_dma,
 			    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 out_free_login_req_buf:
 	kfree(isert_conn->login_req_buf);
@@ -854,7 +854,7 @@ isert_login_post_send(struct isert_conn *isert_conn, struct iser_tx_desc *tx_des
 	struct ib_send_wr send_wr, *send_wr_failed;
 	int ret;
 
-	ib_dma_sync_single_for_device(ib_dev, tx_desc->dma_addr,
+	dma_sync_single_for_device(ib_dev->dma_device, tx_desc->dma_addr,
 				      ISER_HEADERS_LEN, DMA_TO_DEVICE);
 
 	tx_desc->tx_cqe.done = isert_login_send_done;
@@ -881,7 +881,7 @@ isert_create_send_desc(struct isert_conn *isert_conn,
 	struct isert_device *device = isert_conn->device;
 	struct ib_device *ib_dev = device->ib_device;
 
-	ib_dma_sync_single_for_cpu(ib_dev, tx_desc->dma_addr,
+	dma_sync_single_for_cpu(ib_dev->dma_device, tx_desc->dma_addr,
 				   ISER_HEADERS_LEN, DMA_TO_DEVICE);
 
 	memset(&tx_desc->iser_header, 0, sizeof(struct iser_ctrl));
@@ -903,10 +903,10 @@ isert_init_tx_hdrs(struct isert_conn *isert_conn,
 	struct ib_device *ib_dev = device->ib_device;
 	u64 dma_addr;
 
-	dma_addr = ib_dma_map_single(ib_dev, (void *)tx_desc,
+	dma_addr = dma_map_single(ib_dev->dma_device, (void *)tx_desc,
 			ISER_HEADERS_LEN, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(ib_dev, dma_addr)) {
-		isert_err("ib_dma_mapping_error() failed\n");
+	if (dma_mapping_error(ib_dev->dma_device, dma_addr)) {
+		isert_err("dma_mapping_error() failed\n");
 		return -ENOMEM;
 	}
 
@@ -992,12 +992,12 @@ isert_put_login_tx(struct iscsi_conn *conn, struct iscsi_login *login,
 	if (length > 0) {
 		struct ib_sge *tx_dsg = &tx_desc->tx_sg[1];
 
-		ib_dma_sync_single_for_cpu(ib_dev, isert_conn->login_rsp_dma,
+		dma_sync_single_for_cpu(ib_dev->dma_device, isert_conn->login_rsp_dma,
 					   length, DMA_TO_DEVICE);
 
 		memcpy(isert_conn->login_rsp_buf, login->rsp_buf, length);
 
-		ib_dma_sync_single_for_device(ib_dev, isert_conn->login_rsp_dma,
+		dma_sync_single_for_device(ib_dev->dma_device, isert_conn->login_rsp_dma,
 					      length, DMA_TO_DEVICE);
 
 		tx_dsg->addr	= isert_conn->login_rsp_dma;
@@ -1397,7 +1397,7 @@ isert_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(ib_dev, rx_desc->dma_addr,
+	dma_sync_single_for_cpu(ib_dev->dma_device, rx_desc->dma_addr,
 			ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 
 	isert_dbg("DMA: 0x%llx, iSCSI opcode: 0x%02x, ITT: 0x%08x, flags: 0x%02x dlen: %d\n",
@@ -1432,7 +1432,7 @@ isert_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	isert_rx_opcode(isert_conn, rx_desc,
 			read_stag, read_va, write_stag, write_va);
 
-	ib_dma_sync_single_for_device(ib_dev, rx_desc->dma_addr,
+	dma_sync_single_for_device(ib_dev->dma_device, rx_desc->dma_addr,
 			ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 }
 
@@ -1447,7 +1447,7 @@ isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(ib_dev, isert_conn->login_req_dma,
+	dma_sync_single_for_cpu(ib_dev->dma_device, isert_conn->login_req_dma,
 			ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 
 	isert_conn->login_req_len = wc->byte_len - ISER_HEADERS_LEN;
@@ -1463,7 +1463,7 @@ isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	complete(&isert_conn->login_req_comp);
 	mutex_unlock(&isert_conn->mutex);
 
-	ib_dma_sync_single_for_device(ib_dev, isert_conn->login_req_dma,
+	dma_sync_single_for_device(ib_dev->dma_device, isert_conn->login_req_dma,
 				ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
 }
 
@@ -1571,7 +1571,7 @@ isert_unmap_tx_desc(struct iser_tx_desc *tx_desc, struct ib_device *ib_dev)
 {
 	if (tx_desc->dma_addr != 0) {
 		isert_dbg("unmap single for tx_desc->dma_addr\n");
-		ib_dma_unmap_single(ib_dev, tx_desc->dma_addr,
+		dma_unmap_single(ib_dev->dma_device, tx_desc->dma_addr,
 				    ISER_HEADERS_LEN, DMA_TO_DEVICE);
 		tx_desc->dma_addr = 0;
 	}
@@ -1583,7 +1583,7 @@ isert_completion_put(struct iser_tx_desc *tx_desc, struct isert_cmd *isert_cmd,
 {
 	if (isert_cmd->pdu_buf_dma != 0) {
 		isert_dbg("unmap single for isert_cmd->pdu_buf_dma\n");
-		ib_dma_unmap_single(ib_dev, isert_cmd->pdu_buf_dma,
+		dma_unmap_single(ib_dev->dma_device, isert_cmd->pdu_buf_dma,
 				    isert_cmd->pdu_buf_len, DMA_TO_DEVICE);
 		isert_cmd->pdu_buf_dma = 0;
 	}
@@ -1841,10 +1841,10 @@ isert_put_response(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
 		hton24(hdr->dlength, (u32)cmd->se_cmd.scsi_sense_length);
 		pdu_len = cmd->se_cmd.scsi_sense_length + padding;
 
-		isert_cmd->pdu_buf_dma = ib_dma_map_single(ib_dev,
+		isert_cmd->pdu_buf_dma = dma_map_single(ib_dev->dma_device,
 				(void *)cmd->sense_buffer, pdu_len,
 				DMA_TO_DEVICE);
-		if (ib_dma_mapping_error(ib_dev, isert_cmd->pdu_buf_dma))
+		if (dma_mapping_error(ib_dev->dma_device, isert_cmd->pdu_buf_dma))
 			return -ENOMEM;
 
 		isert_cmd->pdu_buf_len = pdu_len;
@@ -1970,10 +1970,10 @@ isert_put_reject(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
 	isert_init_tx_hdrs(isert_conn, &isert_cmd->tx_desc);
 
 	hton24(hdr->dlength, ISCSI_HDR_LEN);
-	isert_cmd->pdu_buf_dma = ib_dma_map_single(ib_dev,
+	isert_cmd->pdu_buf_dma = dma_map_single(ib_dev->dma_device,
 			(void *)cmd->buf_ptr, ISCSI_HDR_LEN,
 			DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(ib_dev, isert_cmd->pdu_buf_dma))
+	if (dma_mapping_error(ib_dev->dma_device, isert_cmd->pdu_buf_dma))
 		return -ENOMEM;
 	isert_cmd->pdu_buf_len = ISCSI_HDR_LEN;
 	tx_dsg->addr	= isert_cmd->pdu_buf_dma;
@@ -2013,9 +2013,9 @@ isert_put_text_rsp(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
 		struct ib_sge *tx_dsg = &isert_cmd->tx_desc.tx_sg[1];
 		void *txt_rsp_buf = cmd->buf_ptr;
 
-		isert_cmd->pdu_buf_dma = ib_dma_map_single(ib_dev,
+		isert_cmd->pdu_buf_dma = dma_map_single(ib_dev->dma_device,
 				txt_rsp_buf, txt_rsp_len, DMA_TO_DEVICE);
-		if (ib_dma_mapping_error(ib_dev, isert_cmd->pdu_buf_dma))
+		if (dma_mapping_error(ib_dev->dma_device, isert_cmd->pdu_buf_dma))
 			return -ENOMEM;
 
 		isert_cmd->pdu_buf_len = txt_rsp_len;
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index cd150c19d0d2..751865f0429d 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -233,9 +233,9 @@ static struct srp_iu *srp_alloc_iu(struct srp_host *host, size_t size,
 	if (!iu->buf)
 		goto out_free_iu;
 
-	iu->dma = ib_dma_map_single(host->srp_dev->dev, iu->buf, size,
+	iu->dma = dma_map_single(host->srp_dev->dev->dma_device, iu->buf, size,
 				    direction);
-	if (ib_dma_mapping_error(host->srp_dev->dev, iu->dma))
+	if (dma_mapping_error(host->srp_dev->dev->dma_device, iu->dma))
 		goto out_free_buf;
 
 	iu->size      = size;
@@ -256,7 +256,7 @@ static void srp_free_iu(struct srp_host *host, struct srp_iu *iu)
 	if (!iu)
 		return;
 
-	ib_dma_unmap_single(host->srp_dev->dev, iu->dma, iu->size,
+	dma_unmap_single(host->srp_dev->dev->dma_device, iu->dma, iu->size,
 			    iu->direction);
 	kfree(iu->buf);
 	kfree(iu);
@@ -843,7 +843,7 @@ static void srp_free_req_data(struct srp_target_port *target,
 			kfree(req->map_page);
 		}
 		if (req->indirect_dma_addr) {
-			ib_dma_unmap_single(ibdev, req->indirect_dma_addr,
+			dma_unmap_single(ibdev->dma_device, req->indirect_dma_addr,
 					    target->indirect_size,
 					    DMA_TO_DEVICE);
 		}
@@ -888,10 +888,10 @@ static int srp_alloc_req_data(struct srp_rdma_ch *ch)
 		if (!req->indirect_desc)
 			goto out;
 
-		dma_addr = ib_dma_map_single(ibdev, req->indirect_desc,
+		dma_addr = dma_map_single(ibdev->dma_device, req->indirect_desc,
 					     target->indirect_size,
 					     DMA_TO_DEVICE);
-		if (ib_dma_mapping_error(ibdev, dma_addr))
+		if (dma_mapping_error(ibdev->dma_device, dma_addr))
 			goto out;
 
 		req->indirect_dma_addr = dma_addr;
@@ -1096,7 +1096,7 @@ static void srp_unmap_data(struct scsi_cmnd *scmnd,
 			ib_fmr_pool_unmap(*pfmr);
 	}
 
-	ib_dma_unmap_sg(ibdev, scsi_sglist(scmnd), scsi_sg_count(scmnd),
+	dma_unmap_sg(ibdev->dma_device, scsi_sglist(scmnd), scsi_sg_count(scmnd),
 			scmnd->sc_data_direction);
 }
 
@@ -1429,9 +1429,8 @@ static int srp_map_sg_entry(struct srp_map_state *state,
 {
 	struct srp_target_port *target = ch->target;
 	struct srp_device *dev = target->srp_host->srp_dev;
-	struct ib_device *ibdev = dev->dev;
-	dma_addr_t dma_addr = ib_sg_dma_address(ibdev, sg);
-	unsigned int dma_len = ib_sg_dma_len(ibdev, sg);
+	dma_addr_t dma_addr = sg_dma_address(sg);
+	unsigned int dma_len = sg_dma_len(sg);
 	unsigned int len = 0;
 	int ret;
 
@@ -1525,13 +1524,12 @@ static int srp_map_sg_dma(struct srp_map_state *state, struct srp_rdma_ch *ch,
 			  int count)
 {
 	struct srp_target_port *target = ch->target;
-	struct srp_device *dev = target->srp_host->srp_dev;
 	struct scatterlist *sg;
 	int i;
 
 	for_each_sg(scat, sg, count, i) {
-		srp_map_desc(state, ib_sg_dma_address(dev->dev, sg),
-			     ib_sg_dma_len(dev->dev, sg),
+		srp_map_desc(state, sg_dma_address(sg),
+			     sg_dma_len(sg),
 			     target->pd->unsafe_global_rkey);
 	}
 
@@ -1659,7 +1657,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 	dev = target->srp_host->srp_dev;
 	ibdev = dev->dev;
 
-	count = ib_dma_map_sg(ibdev, scat, nents, scmnd->sc_data_direction);
+	count = dma_map_sg(ibdev->dma_device, scat, nents, scmnd->sc_data_direction);
 	if (unlikely(count == 0))
 		return -EIO;
 
@@ -1691,9 +1689,9 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 		 */
 		struct srp_direct_buf *buf = (void *) cmd->add_data;
 
-		buf->va  = cpu_to_be64(ib_sg_dma_address(ibdev, scat));
+		buf->va  = cpu_to_be64(sg_dma_address(scat));
 		buf->key = cpu_to_be32(pd->unsafe_global_rkey);
-		buf->len = cpu_to_be32(ib_sg_dma_len(ibdev, scat));
+		buf->len = cpu_to_be32(sg_dma_len(scat));
 
 		req->nmdesc = 0;
 		/* Debugging help. */
@@ -1707,7 +1705,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 	 */
 	indirect_hdr = (void *) cmd->add_data;
 
-	ib_dma_sync_single_for_cpu(ibdev, req->indirect_dma_addr,
+	dma_sync_single_for_cpu(ibdev->dma_device, req->indirect_dma_addr,
 				   target->indirect_size, DMA_TO_DEVICE);
 
 	memset(&state, 0, sizeof(state));
@@ -1789,7 +1787,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 	else
 		cmd->data_in_desc_cnt = count;
 
-	ib_dma_sync_single_for_device(ibdev, req->indirect_dma_addr, table_len,
+	dma_sync_single_for_device(ibdev->dma_device, req->indirect_dma_addr, table_len,
 				      DMA_TO_DEVICE);
 
 map_complete:
@@ -2084,9 +2082,9 @@ static int srp_response_common(struct srp_rdma_ch *ch, s32 req_delta,
 		return 1;
 	}
 
-	ib_dma_sync_single_for_cpu(dev, iu->dma, len, DMA_TO_DEVICE);
+	dma_sync_single_for_cpu(dev->dma_device, iu->dma, len, DMA_TO_DEVICE);
 	memcpy(iu->buf, rsp, len);
-	ib_dma_sync_single_for_device(dev, iu->dma, len, DMA_TO_DEVICE);
+	dma_sync_single_for_device(dev->dma_device, iu->dma, len, DMA_TO_DEVICE);
 
 	err = srp_post_send(ch, iu, len);
 	if (err) {
@@ -2144,7 +2142,7 @@ static void srp_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(dev, iu->dma, ch->max_ti_iu_len,
+	dma_sync_single_for_cpu(dev->dma_device, iu->dma, ch->max_ti_iu_len,
 				   DMA_FROM_DEVICE);
 
 	opcode = *(u8 *) iu->buf;
@@ -2181,7 +2179,7 @@ static void srp_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		break;
 	}
 
-	ib_dma_sync_single_for_device(dev, iu->dma, ch->max_ti_iu_len,
+	dma_sync_single_for_device(dev->dma_device, iu->dma, ch->max_ti_iu_len,
 				      DMA_FROM_DEVICE);
 
 	res = srp_post_recv(ch, iu);
@@ -2267,7 +2265,7 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
 
 	req = &ch->req_ring[idx];
 	dev = target->srp_host->srp_dev->dev;
-	ib_dma_sync_single_for_cpu(dev, iu->dma, target->max_iu_len,
+	dma_sync_single_for_cpu(dev->dma_device, iu->dma, target->max_iu_len,
 				   DMA_TO_DEVICE);
 
 	scmnd->host_scribble = (void *) req;
@@ -2302,7 +2300,7 @@ static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
 		goto err_iu;
 	}
 
-	ib_dma_sync_single_for_device(dev, iu->dma, target->max_iu_len,
+	dma_sync_single_for_device(dev->dma_device, iu->dma, target->max_iu_len,
 				      DMA_TO_DEVICE);
 
 	if (srp_post_send(ch, iu, len)) {
@@ -2689,7 +2687,7 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
 		return -1;
 	}
 
-	ib_dma_sync_single_for_cpu(dev, iu->dma, sizeof *tsk_mgmt,
+	dma_sync_single_for_cpu(dev->dma_device, iu->dma, sizeof *tsk_mgmt,
 				   DMA_TO_DEVICE);
 	tsk_mgmt = iu->buf;
 	memset(tsk_mgmt, 0, sizeof *tsk_mgmt);
@@ -2700,7 +2698,7 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
 	tsk_mgmt->tsk_mgmt_func = func;
 	tsk_mgmt->task_tag	= req_tag;
 
-	ib_dma_sync_single_for_device(dev, iu->dma, sizeof *tsk_mgmt,
+	dma_sync_single_for_device(dev->dma_device, iu->dma, sizeof *tsk_mgmt,
 				      DMA_TO_DEVICE);
 	if (srp_post_send(ch, iu, sizeof(*tsk_mgmt))) {
 		srp_put_tx_iu(ch, iu, SRP_IU_TSK_MGMT);
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 20444a7d867d..a1c2d602a4fa 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -626,8 +626,8 @@ static struct srpt_ioctx *srpt_alloc_ioctx(struct srpt_device *sdev,
 	if (!ioctx->buf)
 		goto err_free_ioctx;
 
-	ioctx->dma = ib_dma_map_single(sdev->device, ioctx->buf, dma_size, dir);
-	if (ib_dma_mapping_error(sdev->device, ioctx->dma))
+	ioctx->dma = dma_map_single(sdev->device->dma_device, ioctx->buf, dma_size, dir);
+	if (dma_mapping_error(sdev->device->dma_device, ioctx->dma))
 		goto err_free_buf;
 
 	return ioctx;
@@ -649,7 +649,7 @@ static void srpt_free_ioctx(struct srpt_device *sdev, struct srpt_ioctx *ioctx,
 	if (!ioctx)
 		return;
 
-	ib_dma_unmap_single(sdev->device, ioctx->dma, dma_size, dir);
+	dma_unmap_single(sdev->device->dma_device, ioctx->dma, dma_size, dir);
 	kfree(ioctx->buf);
 	kfree(ioctx);
 }
@@ -1492,7 +1492,7 @@ static void srpt_handle_new_iu(struct srpt_rdma_ch *ch,
 	BUG_ON(!ch);
 	BUG_ON(!recv_ioctx);
 
-	ib_dma_sync_single_for_cpu(ch->sport->sdev->device,
+	dma_sync_single_for_cpu(ch->sport->sdev->device->dma_device,
 				   recv_ioctx->ioctx.dma, srp_max_req_size,
 				   DMA_FROM_DEVICE);
 
@@ -2385,7 +2385,7 @@ static void srpt_queue_response(struct se_cmd *cmd)
 		goto out;
 	}
 
-	ib_dma_sync_single_for_device(sdev->device, ioctx->ioctx.dma, resp_len,
+	dma_sync_single_for_device(sdev->device->dma_device, ioctx->ioctx.dma, resp_len,
 				      DMA_TO_DEVICE);
 
 	sge.addr = ioctx->ioctx.dma;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 208b6a08781c..de8156847327 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -207,7 +207,7 @@ static inline size_t nvme_rdma_inline_data_size(struct nvme_rdma_queue *queue)
 static void nvme_rdma_free_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
 		size_t capsule_size, enum dma_data_direction dir)
 {
-	ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
+	dma_unmap_single(ibdev->dma_device, qe->dma, capsule_size, dir);
 	kfree(qe->data);
 }
 
@@ -218,8 +218,8 @@ static int nvme_rdma_alloc_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
 	if (!qe->data)
 		return -ENOMEM;
 
-	qe->dma = ib_dma_map_single(ibdev, qe->data, capsule_size, dir);
-	if (ib_dma_mapping_error(ibdev, qe->dma)) {
+	qe->dma = dma_map_single(ibdev->dma_device, qe->data, capsule_size, dir);
+	if (dma_mapping_error(ibdev->dma_device, qe->dma)) {
 		kfree(qe->data);
 		return -ENOMEM;
 	}
@@ -895,7 +895,7 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 		}
 	}
 
-	ib_dma_unmap_sg(ibdev, req->sg_table.sgl,
+	dma_unmap_sg(ibdev->dma_device, req->sg_table.sgl,
 			req->nents, rq_data_dir(rq) ==
 				    WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
 
@@ -1008,7 +1008,7 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 
 	req->nents = blk_rq_map_sg(rq->q, rq, req->sg_table.sgl);
 
-	count = ib_dma_map_sg(ibdev, req->sg_table.sgl, req->nents,
+	count = dma_map_sg(ibdev->dma_device, req->sg_table.sgl, req->nents,
 		    rq_data_dir(rq) == WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
 	if (unlikely(count <= 0)) {
 		sg_free_table_chained(&req->sg_table, true);
@@ -1135,7 +1135,7 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg, int aer_idx)
 	if (WARN_ON_ONCE(aer_idx != 0))
 		return;
 
-	ib_dma_sync_single_for_cpu(dev, sqe->dma, sizeof(*cmd), DMA_TO_DEVICE);
+	dma_sync_single_for_cpu(dev->dma_device, sqe->dma, sizeof(*cmd), DMA_TO_DEVICE);
 
 	memset(cmd, 0, sizeof(*cmd));
 	cmd->common.opcode = nvme_admin_async_event;
@@ -1143,7 +1143,7 @@ static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg, int aer_idx)
 	cmd->common.flags |= NVME_CMD_SGL_METABUF;
 	nvme_rdma_set_sg_null(cmd);
 
-	ib_dma_sync_single_for_device(dev, sqe->dma, sizeof(*cmd),
+	dma_sync_single_for_device(dev->dma_device, sqe->dma, sizeof(*cmd),
 			DMA_TO_DEVICE);
 
 	ret = nvme_rdma_post_send(queue, sqe, &sge, 1, NULL, false);
@@ -1194,7 +1194,7 @@ static int __nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc, int tag)
 		return 0;
 	}
 
-	ib_dma_sync_single_for_cpu(ibdev, qe->dma, len, DMA_FROM_DEVICE);
+	dma_sync_single_for_cpu(ibdev->dma_device, qe->dma, len, DMA_FROM_DEVICE);
 	/*
 	 * AEN requests are special as they don't time out and can
 	 * survive any kind of queue freeze and often don't respond to
@@ -1207,7 +1207,7 @@ static int __nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc, int tag)
 				&cqe->result);
 	else
 		ret = nvme_rdma_process_nvme_rsp(queue, cqe, wc, tag);
-	ib_dma_sync_single_for_device(ibdev, qe->dma, len, DMA_FROM_DEVICE);
+	dma_sync_single_for_device(ibdev->dma_device, qe->dma, len, DMA_FROM_DEVICE);
 
 	nvme_rdma_post_recv(queue, qe);
 	return ret;
@@ -1455,7 +1455,7 @@ static int nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 		return BLK_MQ_RQ_QUEUE_BUSY;
 
 	dev = queue->device->dev;
-	ib_dma_sync_single_for_cpu(dev, sqe->dma,
+	dma_sync_single_for_cpu(dev->dma_device, sqe->dma,
 			sizeof(struct nvme_command), DMA_TO_DEVICE);
 
 	ret = nvme_setup_cmd(ns, rq, c);
@@ -1473,7 +1473,7 @@ static int nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 		goto err;
 	}
 
-	ib_dma_sync_single_for_device(dev, sqe->dma,
+	dma_sync_single_for_device(dev->dma_device, sqe->dma,
 			sizeof(struct nvme_command), DMA_TO_DEVICE);
 
 	if (rq->cmd_type == REQ_TYPE_FS && req_op(rq) == REQ_OP_FLUSH)
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 8c3760a78ac0..447ba53a9c0b 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -246,9 +246,9 @@ static int nvmet_rdma_alloc_cmd(struct nvmet_rdma_device *ndev,
 	if (!c->nvme_cmd)
 		goto out;
 
-	c->sge[0].addr = ib_dma_map_single(ndev->device, c->nvme_cmd,
+	c->sge[0].addr = dma_map_single(ndev->device->dma_device, c->nvme_cmd,
 			sizeof(*c->nvme_cmd), DMA_FROM_DEVICE);
-	if (ib_dma_mapping_error(ndev->device, c->sge[0].addr))
+	if (dma_mapping_error(ndev->device->dma_device, c->sge[0].addr))
 		goto out_free_cmd;
 
 	c->sge[0].length = sizeof(*c->nvme_cmd);
@@ -259,10 +259,10 @@ static int nvmet_rdma_alloc_cmd(struct nvmet_rdma_device *ndev,
 				get_order(NVMET_RDMA_INLINE_DATA_SIZE));
 		if (!c->inline_page)
 			goto out_unmap_cmd;
-		c->sge[1].addr = ib_dma_map_page(ndev->device,
+		c->sge[1].addr = dma_map_page(ndev->device->dma_device,
 				c->inline_page, 0, NVMET_RDMA_INLINE_DATA_SIZE,
 				DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(ndev->device, c->sge[1].addr))
+		if (dma_mapping_error(ndev->device->dma_device, c->sge[1].addr))
 			goto out_free_inline_page;
 		c->sge[1].length = NVMET_RDMA_INLINE_DATA_SIZE;
 		c->sge[1].lkey = ndev->pd->local_dma_lkey;
@@ -282,7 +282,7 @@ static int nvmet_rdma_alloc_cmd(struct nvmet_rdma_device *ndev,
 				get_order(NVMET_RDMA_INLINE_DATA_SIZE));
 	}
 out_unmap_cmd:
-	ib_dma_unmap_single(ndev->device, c->sge[0].addr,
+	dma_unmap_single(ndev->device->dma_device, c->sge[0].addr,
 			sizeof(*c->nvme_cmd), DMA_FROM_DEVICE);
 out_free_cmd:
 	kfree(c->nvme_cmd);
@@ -295,12 +295,12 @@ static void nvmet_rdma_free_cmd(struct nvmet_rdma_device *ndev,
 		struct nvmet_rdma_cmd *c, bool admin)
 {
 	if (!admin) {
-		ib_dma_unmap_page(ndev->device, c->sge[1].addr,
+		dma_unmap_page(ndev->device->dma_device, c->sge[1].addr,
 				NVMET_RDMA_INLINE_DATA_SIZE, DMA_FROM_DEVICE);
 		__free_pages(c->inline_page,
 				get_order(NVMET_RDMA_INLINE_DATA_SIZE));
 	}
-	ib_dma_unmap_single(ndev->device, c->sge[0].addr,
+	dma_unmap_single(ndev->device->dma_device, c->sge[0].addr,
 				sizeof(*c->nvme_cmd), DMA_FROM_DEVICE);
 	kfree(c->nvme_cmd);
 }
@@ -350,9 +350,9 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
 	if (!r->req.rsp)
 		goto out;
 
-	r->send_sge.addr = ib_dma_map_single(ndev->device, r->req.rsp,
+	r->send_sge.addr = dma_map_single(ndev->device->dma_device, r->req.rsp,
 			sizeof(*r->req.rsp), DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(ndev->device, r->send_sge.addr))
+	if (dma_mapping_error(ndev->device->dma_device, r->send_sge.addr))
 		goto out_free_rsp;
 
 	r->send_sge.length = sizeof(*r->req.rsp);
@@ -378,7 +378,7 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
 static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev,
 		struct nvmet_rdma_rsp *r)
 {
-	ib_dma_unmap_single(ndev->device, r->send_sge.addr,
+	dma_unmap_single(ndev->device->dma_device, r->send_sge.addr,
 				sizeof(*r->req.rsp), DMA_TO_DEVICE);
 	kfree(r->req.rsp);
 }
diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
index 14576977200f..f2c0c60eee1d 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
@@ -925,21 +925,21 @@ kiblnd_rd_msg_size(struct kib_rdma_desc *rd, int msgtype, int n)
 static inline __u64
 kiblnd_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
 {
-	return ib_dma_mapping_error(dev, dma_addr);
+	return dma_mapping_error(dev->dma_device, dma_addr);
 }
 
 static inline __u64 kiblnd_dma_map_single(struct ib_device *dev,
 					  void *msg, size_t size,
 					  enum dma_data_direction direction)
 {
-	return ib_dma_map_single(dev, msg, size, direction);
+	return dma_map_single(dev->dma_device, msg, size, direction);
 }
 
 static inline void kiblnd_dma_unmap_single(struct ib_device *dev,
 					   __u64 addr, size_t size,
 					  enum dma_data_direction direction)
 {
-	ib_dma_unmap_single(dev, addr, size, direction);
+	dma_unmap_single(dev->dma_device, addr, size, direction);
 }
 
 #define KIBLND_UNMAP_ADDR_SET(p, m, a)  do {} while (0)
@@ -949,26 +949,26 @@ static inline int kiblnd_dma_map_sg(struct ib_device *dev,
 				    struct scatterlist *sg, int nents,
 				    enum dma_data_direction direction)
 {
-	return ib_dma_map_sg(dev, sg, nents, direction);
+	return dma_map_sg(dev->dma_device, sg, nents, direction);
 }
 
 static inline void kiblnd_dma_unmap_sg(struct ib_device *dev,
 				       struct scatterlist *sg, int nents,
 				       enum dma_data_direction direction)
 {
-	ib_dma_unmap_sg(dev, sg, nents, direction);
+	dma_unmap_sg(dev->dma_device, sg, nents, direction);
 }
 
 static inline __u64 kiblnd_sg_dma_address(struct ib_device *dev,
 					  struct scatterlist *sg)
 {
-	return ib_sg_dma_address(dev, sg);
+	return sg_dma_address(sg);
 }
 
 static inline unsigned int kiblnd_sg_dma_len(struct ib_device *dev,
 					     struct scatterlist *sg)
 {
-	return ib_sg_dma_len(dev, sg);
+	return sg_dma_len(sg);
 }
 
 /* XXX We use KIBLND_CONN_PARAM(e) as writable buffer, it's not strictly */
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index de8dfb61d2b6..b4b83603b5df 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2912,224 +2912,6 @@ static inline int ib_req_ncomp_notif(struct ib_cq *cq, int wc_cnt)
 }
 
 /**
- * ib_dma_mapping_error - check a DMA addr for error
- * @dev: The device for which the dma_addr was created
- * @dma_addr: The DMA address to check
- */
-static inline int ib_dma_mapping_error(struct ib_device *dev, u64 dma_addr)
-{
-	return dma_mapping_error(dev->dma_device, dma_addr);
-}
-
-/**
- * ib_dma_map_single - Map a kernel virtual address to DMA address
- * @dev: The device for which the dma_addr is to be created
- * @cpu_addr: The kernel virtual address
- * @size: The size of the region in bytes
- * @direction: The direction of the DMA
- */
-static inline u64 ib_dma_map_single(struct ib_device *dev,
-				    void *cpu_addr, size_t size,
-				    enum dma_data_direction direction)
-{
-	return dma_map_single(dev->dma_device, cpu_addr, size, direction);
-}
-
-/**
- * ib_dma_unmap_single - Destroy a mapping created by ib_dma_map_single()
- * @dev: The device for which the DMA address was created
- * @addr: The DMA address
- * @size: The size of the region in bytes
- * @direction: The direction of the DMA
- */
-static inline void ib_dma_unmap_single(struct ib_device *dev,
-				       u64 addr, size_t size,
-				       enum dma_data_direction direction)
-{
-	dma_unmap_single(dev->dma_device, addr, size, direction);
-}
-
-static inline u64 ib_dma_map_single_attrs(struct ib_device *dev,
-					  void *cpu_addr, size_t size,
-					  enum dma_data_direction direction,
-					  unsigned long dma_attrs)
-{
-	return dma_map_single_attrs(dev->dma_device, cpu_addr, size,
-				    direction, dma_attrs);
-}
-
-static inline void ib_dma_unmap_single_attrs(struct ib_device *dev,
-					     u64 addr, size_t size,
-					     enum dma_data_direction direction,
-					     unsigned long dma_attrs)
-{
-	return dma_unmap_single_attrs(dev->dma_device, addr, size,
-				      direction, dma_attrs);
-}
-
-/**
- * ib_dma_map_page - Map a physical page to DMA address
- * @dev: The device for which the dma_addr is to be created
- * @page: The page to be mapped
- * @offset: The offset within the page
- * @size: The size of the region in bytes
- * @direction: The direction of the DMA
- */
-static inline u64 ib_dma_map_page(struct ib_device *dev,
-				  struct page *page,
-				  unsigned long offset,
-				  size_t size,
-					 enum dma_data_direction direction)
-{
-	return dma_map_page(dev->dma_device, page, offset, size, direction);
-}
-
-/**
- * ib_dma_unmap_page - Destroy a mapping created by ib_dma_map_page()
- * @dev: The device for which the DMA address was created
- * @addr: The DMA address
- * @size: The size of the region in bytes
- * @direction: The direction of the DMA
- */
-static inline void ib_dma_unmap_page(struct ib_device *dev,
-				     u64 addr, size_t size,
-				     enum dma_data_direction direction)
-{
-	dma_unmap_page(dev->dma_device, addr, size, direction);
-}
-
-/**
- * ib_dma_map_sg - Map a scatter/gather list to DMA addresses
- * @dev: The device for which the DMA addresses are to be created
- * @sg: The array of scatter/gather entries
- * @nents: The number of scatter/gather entries
- * @direction: The direction of the DMA
- */
-static inline int ib_dma_map_sg(struct ib_device *dev,
-				struct scatterlist *sg, int nents,
-				enum dma_data_direction direction)
-{
-	return dma_map_sg(dev->dma_device, sg, nents, direction);
-}
-
-/**
- * ib_dma_unmap_sg - Unmap a scatter/gather list of DMA addresses
- * @dev: The device for which the DMA addresses were created
- * @sg: The array of scatter/gather entries
- * @nents: The number of scatter/gather entries
- * @direction: The direction of the DMA
- */
-static inline void ib_dma_unmap_sg(struct ib_device *dev,
-				   struct scatterlist *sg, int nents,
-				   enum dma_data_direction direction)
-{
-	dma_unmap_sg(dev->dma_device, sg, nents, direction);
-}
-
-static inline int ib_dma_map_sg_attrs(struct ib_device *dev,
-				      struct scatterlist *sg, int nents,
-				      enum dma_data_direction direction,
-				      unsigned long dma_attrs)
-{
-	return dma_map_sg_attrs(dev->dma_device, sg, nents, direction,
-				dma_attrs);
-}
-
-static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
-					 struct scatterlist *sg, int nents,
-					 enum dma_data_direction direction,
-					 unsigned long dma_attrs)
-{
-	dma_unmap_sg_attrs(dev->dma_device, sg, nents, direction, dma_attrs);
-}
-/**
- * ib_sg_dma_address - Return the DMA address from a scatter/gather entry
- * @dev: The device for which the DMA addresses were created
- * @sg: The scatter/gather entry
- *
- * Note: this function is obsolete. To do: change all occurrences of
- * ib_sg_dma_address() into sg_dma_address().
- */
-static inline u64 ib_sg_dma_address(struct ib_device *dev,
-				    struct scatterlist *sg)
-{
-	return sg_dma_address(sg);
-}
-
-/**
- * ib_sg_dma_len - Return the DMA length from a scatter/gather entry
- * @dev: The device for which the DMA addresses were created
- * @sg: The scatter/gather entry
- *
- * Note: this function is obsolete. To do: change all occurrences of
- * ib_sg_dma_len() into sg_dma_len().
- */
-static inline unsigned int ib_sg_dma_len(struct ib_device *dev,
-					 struct scatterlist *sg)
-{
-	return sg_dma_len(sg);
-}
-
-/**
- * ib_dma_sync_single_for_cpu - Prepare DMA region to be accessed by CPU
- * @dev: The device for which the DMA address was created
- * @addr: The DMA address
- * @size: The size of the region in bytes
- * @dir: The direction of the DMA
- */
-static inline void ib_dma_sync_single_for_cpu(struct ib_device *dev,
-					      u64 addr,
-					      size_t size,
-					      enum dma_data_direction dir)
-{
-	dma_sync_single_for_cpu(dev->dma_device, addr, size, dir);
-}
-
-/**
- * ib_dma_sync_single_for_device - Prepare DMA region to be accessed by device
- * @dev: The device for which the DMA address was created
- * @addr: The DMA address
- * @size: The size of the region in bytes
- * @dir: The direction of the DMA
- */
-static inline void ib_dma_sync_single_for_device(struct ib_device *dev,
-						 u64 addr,
-						 size_t size,
-						 enum dma_data_direction dir)
-{
-	dma_sync_single_for_device(dev->dma_device, addr, size, dir);
-}
-
-/**
- * ib_dma_alloc_coherent - Allocate memory and map it for DMA
- * @dev: The device for which the DMA address is requested
- * @size: The size of the region to allocate in bytes
- * @dma_handle: A pointer for returning the DMA address of the region
- * @flag: memory allocator flags
- */
-static inline void *ib_dma_alloc_coherent(struct ib_device *dev,
-					   size_t size,
-					   dma_addr_t *dma_handle,
-					   gfp_t flag)
-{
-	return dma_alloc_coherent(dev->dma_device, size, dma_handle, flag);
-}
-
-/**
- * ib_dma_free_coherent - Free memory allocated by ib_dma_alloc_coherent()
- * @dev: The device for which the DMA addresses were allocated
- * @size: The size of the region
- * @cpu_addr: the address returned by ib_dma_alloc_coherent()
- * @dma_handle: the DMA address returned by ib_dma_alloc_coherent()
- */
-static inline void ib_dma_free_coherent(struct ib_device *dev,
-					size_t size, void *cpu_addr,
-					dma_addr_t dma_handle)
-{
-	dma_free_coherent(dev->dma_device, size, cpu_addr, dma_handle);
-}
-
-/**
  * ib_dereg_mr - Deregisters a memory region and removes it from the
  *   HCA translation table.
  * @mr: The memory region to deregister.
diff --git a/net/9p/trans_rdma.c b/net/9p/trans_rdma.c
index 553ed4ecb6a0..38b45a37ed18 100644
--- a/net/9p/trans_rdma.c
+++ b/net/9p/trans_rdma.c
@@ -294,7 +294,7 @@ recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	int16_t tag;
 
 	req = NULL;
-	ib_dma_unmap_single(rdma->cm_id->device, c->busa, client->msize,
+	dma_unmap_single(rdma->cm_id->device->dma_device, c->busa, client->msize,
 							 DMA_FROM_DEVICE);
 
 	if (wc->status != IB_WC_SUCCESS)
@@ -339,7 +339,7 @@ send_done(struct ib_cq *cq, struct ib_wc *wc)
 	struct p9_rdma_context *c =
 		container_of(wc->wr_cqe, struct p9_rdma_context, cqe);
 
-	ib_dma_unmap_single(rdma->cm_id->device,
+	dma_unmap_single(rdma->cm_id->device->dma_device,
 			    c->busa, c->req->tc->size,
 			    DMA_TO_DEVICE);
 	up(&rdma->sq_sem);
@@ -379,10 +379,10 @@ post_recv(struct p9_client *client, struct p9_rdma_context *c)
 	struct ib_recv_wr wr, *bad_wr;
 	struct ib_sge sge;
 
-	c->busa = ib_dma_map_single(rdma->cm_id->device,
+	c->busa = dma_map_single(rdma->cm_id->device->dma_device,
 				    c->rc->sdata, client->msize,
 				    DMA_FROM_DEVICE);
-	if (ib_dma_mapping_error(rdma->cm_id->device, c->busa))
+	if (dma_mapping_error(rdma->cm_id->device->dma_device, c->busa))
 		goto error;
 
 	c->cqe.done = recv_done;
@@ -469,10 +469,10 @@ static int rdma_request(struct p9_client *client, struct p9_req_t *req)
 	}
 	c->req = req;
 
-	c->busa = ib_dma_map_single(rdma->cm_id->device,
+	c->busa = dma_map_single(rdma->cm_id->device->dma_device,
 				    c->req->tc->sdata, c->req->tc->size,
 				    DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(rdma->cm_id->device, c->busa)) {
+	if (dma_mapping_error(rdma->cm_id->device->dma_device, c->busa)) {
 		err = -EIO;
 		goto send_error;
 	}
diff --git a/net/rds/ib.h b/net/rds/ib.h
index d21ca88ab628..02e5fe8d6af8 100644
--- a/net/rds/ib.h
+++ b/net/rds/ib.h
@@ -275,45 +275,6 @@ struct rds_ib_statistics {
 
 extern struct workqueue_struct *rds_ib_wq;
 
-/*
- * Fake ib_dma_sync_sg_for_{cpu,device} as long as ib_verbs.h
- * doesn't define it.
- */
-static inline void rds_ib_dma_sync_sg_for_cpu(struct ib_device *dev,
-					      struct scatterlist *sglist,
-					      unsigned int sg_dma_len,
-					      int direction)
-{
-	struct scatterlist *sg;
-	unsigned int i;
-
-	for_each_sg(sglist, sg, sg_dma_len, i) {
-		ib_dma_sync_single_for_cpu(dev,
-				ib_sg_dma_address(dev, sg),
-				ib_sg_dma_len(dev, sg),
-				direction);
-	}
-}
-#define ib_dma_sync_sg_for_cpu	rds_ib_dma_sync_sg_for_cpu
-
-static inline void rds_ib_dma_sync_sg_for_device(struct ib_device *dev,
-						 struct scatterlist *sglist,
-						 unsigned int sg_dma_len,
-						 int direction)
-{
-	struct scatterlist *sg;
-	unsigned int i;
-
-	for_each_sg(sglist, sg, sg_dma_len, i) {
-		ib_dma_sync_single_for_device(dev,
-				ib_sg_dma_address(dev, sg),
-				ib_sg_dma_len(dev, sg),
-				direction);
-	}
-}
-#define ib_dma_sync_sg_for_device	rds_ib_dma_sync_sg_for_device
-
-
 /* ib.c */
 extern struct rds_transport rds_ib_transport;
 struct rds_ib_device *rds_ib_get_client_data(struct ib_device *device);
diff --git a/net/rds/ib_cm.c b/net/rds/ib_cm.c
index 5b2ab95afa07..7bd69104e8ba 100644
--- a/net/rds/ib_cm.c
+++ b/net/rds/ib_cm.c
@@ -456,31 +456,31 @@ static int rds_ib_setup_qp(struct rds_connection *conn)
 		goto out;
 	}
 
-	ic->i_send_hdrs = ib_dma_alloc_coherent(dev,
+	ic->i_send_hdrs = dma_alloc_coherent(dev->dma_device,
 					   ic->i_send_ring.w_nr *
 						sizeof(struct rds_header),
 					   &ic->i_send_hdrs_dma, GFP_KERNEL);
 	if (!ic->i_send_hdrs) {
 		ret = -ENOMEM;
-		rdsdebug("ib_dma_alloc_coherent send failed\n");
+		rdsdebug("dma_alloc_coherent send failed\n");
 		goto out;
 	}
 
-	ic->i_recv_hdrs = ib_dma_alloc_coherent(dev,
+	ic->i_recv_hdrs = dma_alloc_coherent(dev->dma_device,
 					   ic->i_recv_ring.w_nr *
 						sizeof(struct rds_header),
 					   &ic->i_recv_hdrs_dma, GFP_KERNEL);
 	if (!ic->i_recv_hdrs) {
 		ret = -ENOMEM;
-		rdsdebug("ib_dma_alloc_coherent recv failed\n");
+		rdsdebug("dma_alloc_coherent recv failed\n");
 		goto out;
 	}
 
-	ic->i_ack = ib_dma_alloc_coherent(dev, sizeof(struct rds_header),
+	ic->i_ack = dma_alloc_coherent(dev->dma_device, sizeof(struct rds_header),
 				       &ic->i_ack_dma, GFP_KERNEL);
 	if (!ic->i_ack) {
 		ret = -ENOMEM;
-		rdsdebug("ib_dma_alloc_coherent ack failed\n");
+		rdsdebug("dma_alloc_coherent ack failed\n");
 		goto out;
 	}
 
@@ -781,21 +781,21 @@ void rds_ib_conn_path_shutdown(struct rds_conn_path *cp)
 
 		/* then free the resources that ib callbacks use */
 		if (ic->i_send_hdrs)
-			ib_dma_free_coherent(dev,
+			dma_free_coherent(dev->dma_device,
 					   ic->i_send_ring.w_nr *
 						sizeof(struct rds_header),
 					   ic->i_send_hdrs,
 					   ic->i_send_hdrs_dma);
 
 		if (ic->i_recv_hdrs)
-			ib_dma_free_coherent(dev,
+			dma_free_coherent(dev->dma_device,
 					   ic->i_recv_ring.w_nr *
 						sizeof(struct rds_header),
 					   ic->i_recv_hdrs,
 					   ic->i_recv_hdrs_dma);
 
 		if (ic->i_ack)
-			ib_dma_free_coherent(dev, sizeof(struct rds_header),
+			dma_free_coherent(dev->dma_device, sizeof(struct rds_header),
 					     ic->i_ack, ic->i_ack_dma);
 
 		if (ic->i_sends)
diff --git a/net/rds/ib_fmr.c b/net/rds/ib_fmr.c
index 4fe8f4fec4ee..150e8f756bd9 100644
--- a/net/rds/ib_fmr.c
+++ b/net/rds/ib_fmr.c
@@ -100,7 +100,7 @@ int rds_ib_map_fmr(struct rds_ib_device *rds_ibdev, struct rds_ib_mr *ibmr,
 	int i, j;
 	int ret;
 
-	sg_dma_len = ib_dma_map_sg(dev, sg, nents, DMA_BIDIRECTIONAL);
+	sg_dma_len = dma_map_sg(dev->dma_device, sg, nents, DMA_BIDIRECTIONAL);
 	if (unlikely(!sg_dma_len)) {
 		pr_warn("RDS/IB: %s failed!\n", __func__);
 		return -EBUSY;
@@ -110,8 +110,8 @@ int rds_ib_map_fmr(struct rds_ib_device *rds_ibdev, struct rds_ib_mr *ibmr,
 	page_cnt = 0;
 
 	for (i = 0; i < sg_dma_len; ++i) {
-		unsigned int dma_len = ib_sg_dma_len(dev, &scat[i]);
-		u64 dma_addr = ib_sg_dma_address(dev, &scat[i]);
+		unsigned int dma_len = sg_dma_len(&scat[i]);
+		u64 dma_addr = sg_dma_address(&scat[i]);
 
 		if (dma_addr & ~PAGE_MASK) {
 			if (i > 0)
@@ -140,8 +140,8 @@ int rds_ib_map_fmr(struct rds_ib_device *rds_ibdev, struct rds_ib_mr *ibmr,
 
 	page_cnt = 0;
 	for (i = 0; i < sg_dma_len; ++i) {
-		unsigned int dma_len = ib_sg_dma_len(dev, &scat[i]);
-		u64 dma_addr = ib_sg_dma_address(dev, &scat[i]);
+		unsigned int dma_len = sg_dma_len(&scat[i]);
+		u64 dma_addr = sg_dma_address(&scat[i]);
 
 		for (j = 0; j < dma_len; j += PAGE_SIZE)
 			dma_pages[page_cnt++] =
diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
index d921adc62765..e30a8c6f003f 100644
--- a/net/rds/ib_frmr.c
+++ b/net/rds/ib_frmr.c
@@ -169,7 +169,7 @@ static int rds_ib_map_frmr(struct rds_ib_device *rds_ibdev,
 	ibmr->sg_dma_len = 0;
 	frmr->sg_byte_len = 0;
 	WARN_ON(ibmr->sg_dma_len);
-	ibmr->sg_dma_len = ib_dma_map_sg(dev, ibmr->sg, ibmr->sg_len,
+	ibmr->sg_dma_len = dma_map_sg(dev->dma_device, ibmr->sg, ibmr->sg_len,
 					 DMA_BIDIRECTIONAL);
 	if (unlikely(!ibmr->sg_dma_len)) {
 		pr_warn("RDS/IB: %s failed!\n", __func__);
@@ -182,8 +182,8 @@ static int rds_ib_map_frmr(struct rds_ib_device *rds_ibdev,
 
 	ret = -EINVAL;
 	for (i = 0; i < ibmr->sg_dma_len; ++i) {
-		unsigned int dma_len = ib_sg_dma_len(dev, &ibmr->sg[i]);
-		u64 dma_addr = ib_sg_dma_address(dev, &ibmr->sg[i]);
+		unsigned int dma_len = sg_dma_len(&ibmr->sg[i]);
+		u64 dma_addr = sg_dma_address(&ibmr->sg[i]);
 
 		frmr->sg_byte_len += dma_len;
 		if (dma_addr & ~PAGE_MASK) {
@@ -221,7 +221,7 @@ static int rds_ib_map_frmr(struct rds_ib_device *rds_ibdev,
 	return ret;
 
 out_unmap:
-	ib_dma_unmap_sg(rds_ibdev->dev, ibmr->sg, ibmr->sg_len,
+	dma_unmap_sg(rds_ibdev->dev->dma_device, ibmr->sg, ibmr->sg_len,
 			DMA_BIDIRECTIONAL);
 	ibmr->sg_dma_len = 0;
 	return ret;
diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
index 977f69886c00..b8276915ab3d 100644
--- a/net/rds/ib_rdma.c
+++ b/net/rds/ib_rdma.c
@@ -221,11 +221,11 @@ void rds_ib_sync_mr(void *trans_private, int direction)
 
 	switch (direction) {
 	case DMA_FROM_DEVICE:
-		ib_dma_sync_sg_for_cpu(rds_ibdev->dev, ibmr->sg,
+		dma_sync_sg_for_cpu(rds_ibdev->dev->dma_device, ibmr->sg,
 			ibmr->sg_dma_len, DMA_BIDIRECTIONAL);
 		break;
 	case DMA_TO_DEVICE:
-		ib_dma_sync_sg_for_device(rds_ibdev->dev, ibmr->sg,
+		dma_sync_sg_for_device(rds_ibdev->dev->dma_device, ibmr->sg,
 			ibmr->sg_dma_len, DMA_BIDIRECTIONAL);
 		break;
 	}
@@ -236,7 +236,7 @@ void __rds_ib_teardown_mr(struct rds_ib_mr *ibmr)
 	struct rds_ib_device *rds_ibdev = ibmr->device;
 
 	if (ibmr->sg_dma_len) {
-		ib_dma_unmap_sg(rds_ibdev->dev,
+		dma_unmap_sg(rds_ibdev->dev->dma_device,
 				ibmr->sg, ibmr->sg_len,
 				DMA_BIDIRECTIONAL);
 		ibmr->sg_dma_len = 0;
diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
index 606a11f681d2..374a4e038b24 100644
--- a/net/rds/ib_recv.c
+++ b/net/rds/ib_recv.c
@@ -225,7 +225,7 @@ static void rds_ib_recv_clear_one(struct rds_ib_connection *ic,
 		recv->r_ibinc = NULL;
 	}
 	if (recv->r_frag) {
-		ib_dma_unmap_sg(ic->i_cm_id->device, &recv->r_frag->f_sg, 1, DMA_FROM_DEVICE);
+		dma_unmap_sg(ic->i_cm_id->device->dma_device, &recv->r_frag->f_sg, 1, DMA_FROM_DEVICE);
 		rds_ib_frag_free(ic, recv->r_frag);
 		recv->r_frag = NULL;
 	}
@@ -331,7 +331,7 @@ static int rds_ib_recv_refill_one(struct rds_connection *conn,
 	if (!recv->r_frag)
 		goto out;
 
-	ret = ib_dma_map_sg(ic->i_cm_id->device, &recv->r_frag->f_sg,
+	ret = dma_map_sg(ic->i_cm_id->device->dma_device, &recv->r_frag->f_sg,
 			    1, DMA_FROM_DEVICE);
 	WARN_ON(ret != 1);
 
@@ -340,8 +340,8 @@ static int rds_ib_recv_refill_one(struct rds_connection *conn,
 	sge->length = sizeof(struct rds_header);
 
 	sge = &recv->r_sge[1];
-	sge->addr = ib_sg_dma_address(ic->i_cm_id->device, &recv->r_frag->f_sg);
-	sge->length = ib_sg_dma_len(ic->i_cm_id->device, &recv->r_frag->f_sg);
+	sge->addr = sg_dma_address(&recv->r_frag->f_sg);
+	sge->length = sg_dma_len(&recv->r_frag->f_sg);
 
 	ret = 0;
 out:
@@ -408,9 +408,7 @@ void rds_ib_recv_refill(struct rds_connection *conn, int prefill, gfp_t gfp)
 		ret = ib_post_recv(ic->i_cm_id->qp, &recv->r_wr, &failed_wr);
 		rdsdebug("recv %p ibinc %p page %p addr %lu ret %d\n", recv,
 			 recv->r_ibinc, sg_page(&recv->r_frag->f_sg),
-			 (long) ib_sg_dma_address(
-				ic->i_cm_id->device,
-				&recv->r_frag->f_sg),
+			 (long) sg_dma_address(&recv->r_frag->f_sg),
 			ret);
 		if (ret) {
 			rds_ib_conn_error(conn, "recv post on "
@@ -968,7 +966,7 @@ void rds_ib_recv_cqe_handler(struct rds_ib_connection *ic,
 
 	rds_ib_stats_inc(s_ib_rx_cq_event);
 	recv = &ic->i_recvs[rds_ib_ring_oldest(&ic->i_recv_ring)];
-	ib_dma_unmap_sg(ic->i_cm_id->device, &recv->r_frag->f_sg, 1,
+	dma_unmap_sg(ic->i_cm_id->device->dma_device, &recv->r_frag->f_sg, 1,
 			DMA_FROM_DEVICE);
 
 	/* Also process recvs in connecting state because it is possible
diff --git a/net/rds/ib_send.c b/net/rds/ib_send.c
index 84d90c97332f..bd23633e5c7a 100644
--- a/net/rds/ib_send.c
+++ b/net/rds/ib_send.c
@@ -74,7 +74,7 @@ static void rds_ib_send_unmap_data(struct rds_ib_connection *ic,
 				   int wc_status)
 {
 	if (op->op_nents)
-		ib_dma_unmap_sg(ic->i_cm_id->device,
+		dma_unmap_sg(ic->i_cm_id->device->dma_device,
 				op->op_sg, op->op_nents,
 				DMA_TO_DEVICE);
 }
@@ -84,7 +84,7 @@ static void rds_ib_send_unmap_rdma(struct rds_ib_connection *ic,
 				   int wc_status)
 {
 	if (op->op_mapped) {
-		ib_dma_unmap_sg(ic->i_cm_id->device,
+		dma_unmap_sg(ic->i_cm_id->device->dma_device,
 				op->op_sg, op->op_nents,
 				op->op_write ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
 		op->op_mapped = 0;
@@ -106,7 +106,7 @@ static void rds_ib_send_unmap_rdma(struct rds_ib_connection *ic,
 	 * handling in the ACK processing code.
 	 *
 	 * Note: There's no need to explicitly sync any RDMA buffers using
-	 * ib_dma_sync_sg_for_cpu - the completion for the RDMA
+	 * dma_sync_sg_for_cpu - the completion for the RDMA
 	 * operation itself unmapped the RDMA buffers, which takes care
 	 * of synching.
 	 */
@@ -125,7 +125,7 @@ static void rds_ib_send_unmap_atomic(struct rds_ib_connection *ic,
 {
 	/* unmap atomic recvbuf */
 	if (op->op_mapped) {
-		ib_dma_unmap_sg(ic->i_cm_id->device, op->op_sg, 1,
+		dma_unmap_sg(ic->i_cm_id->device->dma_device, op->op_sg, 1,
 				DMA_FROM_DEVICE);
 		op->op_mapped = 0;
 	}
@@ -546,7 +546,7 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
 	/* map the message the first time we see it */
 	if (!ic->i_data_op) {
 		if (rm->data.op_nents) {
-			rm->data.op_count = ib_dma_map_sg(dev,
+			rm->data.op_count = dma_map_sg(dev->dma_device,
 							  rm->data.op_sg,
 							  rm->data.op_nents,
 							  DMA_TO_DEVICE);
@@ -640,16 +640,16 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
 		if (i < work_alloc
 		    && scat != &rm->data.op_sg[rm->data.op_count]) {
 			len = min(RDS_FRAG_SIZE,
-				ib_sg_dma_len(dev, scat) - rm->data.op_dmaoff);
+				sg_dma_len(scat) - rm->data.op_dmaoff);
 			send->s_wr.num_sge = 2;
 
-			send->s_sge[1].addr = ib_sg_dma_address(dev, scat);
+			send->s_sge[1].addr = sg_dma_address(scat);
 			send->s_sge[1].addr += rm->data.op_dmaoff;
 			send->s_sge[1].length = len;
 
 			bytes_sent += len;
 			rm->data.op_dmaoff += len;
-			if (rm->data.op_dmaoff == ib_sg_dma_len(dev, scat)) {
+			if (rm->data.op_dmaoff == sg_dma_len(scat)) {
 				scat++;
 				rm->data.op_dmasg++;
 				rm->data.op_dmaoff = 0;
@@ -797,7 +797,7 @@ int rds_ib_xmit_atomic(struct rds_connection *conn, struct rm_atomic_op *op)
 	rds_message_addref(container_of(send->s_op, struct rds_message, atomic));
 
 	/* map 8 byte retval buffer to the device */
-	ret = ib_dma_map_sg(ic->i_cm_id->device, op->op_sg, 1, DMA_FROM_DEVICE);
+	ret = dma_map_sg(ic->i_cm_id->device->dma_device, op->op_sg, 1, DMA_FROM_DEVICE);
 	rdsdebug("ic %p mapping atomic op %p. mapped %d pg\n", ic, op, ret);
 	if (ret != 1) {
 		rds_ib_ring_unalloc(&ic->i_send_ring, work_alloc);
@@ -807,8 +807,8 @@ int rds_ib_xmit_atomic(struct rds_connection *conn, struct rm_atomic_op *op)
 	}
 
 	/* Convert our struct scatterlist to struct ib_sge */
-	send->s_sge[0].addr = ib_sg_dma_address(ic->i_cm_id->device, op->op_sg);
-	send->s_sge[0].length = ib_sg_dma_len(ic->i_cm_id->device, op->op_sg);
+	send->s_sge[0].addr = sg_dma_address(op->op_sg);
+	send->s_sge[0].length = sg_dma_len(op->op_sg);
 	send->s_sge[0].lkey = ic->i_pd->local_dma_lkey;
 
 	rdsdebug("rva %Lx rpa %Lx len %u\n", op->op_remote_addr,
@@ -861,7 +861,7 @@ int rds_ib_xmit_rdma(struct rds_connection *conn, struct rm_rdma_op *op)
 
 	/* map the op the first time we see it */
 	if (!op->op_mapped) {
-		op->op_count = ib_dma_map_sg(ic->i_cm_id->device,
+		op->op_count = dma_map_sg(ic->i_cm_id->device->dma_device,
 					     op->op_sg, op->op_nents, (op->op_write) ?
 					     DMA_TO_DEVICE : DMA_FROM_DEVICE);
 		rdsdebug("ic %p mapping op %p: %d\n", ic, op, op->op_count);
@@ -920,9 +920,9 @@ int rds_ib_xmit_rdma(struct rds_connection *conn, struct rm_rdma_op *op)
 
 		for (j = 0; j < send->s_rdma_wr.wr.num_sge &&
 		     scat != &op->op_sg[op->op_count]; j++) {
-			len = ib_sg_dma_len(ic->i_cm_id->device, scat);
+			len = sg_dma_len(scat);
 			send->s_sge[j].addr =
-				 ib_sg_dma_address(ic->i_cm_id->device, scat);
+				 sg_dma_address(scat);
 			send->s_sge[j].length = len;
 			send->s_sge[j].lkey = ic->i_pd->local_dma_lkey;
 
diff --git a/net/sunrpc/xprtrdma/fmr_ops.c b/net/sunrpc/xprtrdma/fmr_ops.c
index 1ebb09e1ac4f..e47bf78ec478 100644
--- a/net/sunrpc/xprtrdma/fmr_ops.c
+++ b/net/sunrpc/xprtrdma/fmr_ops.c
@@ -136,7 +136,7 @@ fmr_op_recover_mr(struct rpcrdma_mw *mw)
 	rc = __fmr_unmap(mw);
 
 	/* ORDER: then DMA unmap */
-	ib_dma_unmap_sg(r_xprt->rx_ia.ri_device,
+	dma_unmap_sg(r_xprt->rx_ia.ri_device->dma_device,
 			mw->mw_sg, mw->mw_nents, mw->mw_dir);
 	if (rc)
 		goto out_release;
@@ -218,7 +218,7 @@ fmr_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
 	if (i == 0)
 		goto out_dmamap_err;
 
-	if (!ib_dma_map_sg(r_xprt->rx_ia.ri_device,
+	if (!dma_map_sg(r_xprt->rx_ia.ri_device->dma_device,
 			   mw->mw_sg, mw->mw_nents, mw->mw_dir))
 		goto out_dmamap_err;
 
@@ -284,7 +284,7 @@ fmr_op_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
 	list_for_each_entry_safe(mw, tmp, &req->rl_registered, mw_list) {
 		list_del_init(&mw->mw_list);
 		list_del_init(&mw->fmr.fm_mr->list);
-		ib_dma_unmap_sg(r_xprt->rx_ia.ri_device,
+		dma_unmap_sg(r_xprt->rx_ia.ri_device->dma_device,
 				mw->mw_sg, mw->mw_nents, mw->mw_dir);
 		rpcrdma_put_mw(r_xprt, mw);
 	}
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 47bed5333c7f..65863a52a070 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -182,7 +182,7 @@ frwr_op_recover_mr(struct rpcrdma_mw *mw)
 
 	rc = __frwr_reset_mr(ia, mw);
 	if (state != FRMR_FLUSHED_LI)
-		ib_dma_unmap_sg(ia->ri_device,
+		dma_unmap_sg(ia->ri_device->dma_device,
 				mw->mw_sg, mw->mw_nents, mw->mw_dir);
 	if (rc)
 		goto out_release;
@@ -396,7 +396,7 @@ frwr_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
 	if (i == 0)
 		goto out_dmamap_err;
 
-	dma_nents = ib_dma_map_sg(ia->ri_device,
+	dma_nents = dma_map_sg(ia->ri_device->dma_device,
 				  mw->mw_sg, mw->mw_nents, mw->mw_dir);
 	if (!dma_nents)
 		goto out_dmamap_err;
@@ -538,7 +538,7 @@ frwr_op_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
 		dprintk("RPC:       %s: DMA unmapping frmr %p\n",
 			__func__, &mw->frmr);
 		list_del_init(&mw->mw_list);
-		ib_dma_unmap_sg(ia->ri_device,
+		dma_unmap_sg(ia->ri_device->dma_device,
 				mw->mw_sg, mw->mw_nents, mw->mw_dir);
 		rpcrdma_put_mw(r_xprt, mw);
 	}
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index c52e0f2ffe52..8dbc42bd0080 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -476,7 +476,7 @@ rpcrdma_prepare_hdr_sge(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
 	}
 	sge->length = len;
 
-	ib_dma_sync_single_for_device(ia->ri_device, sge->addr,
+	dma_sync_single_for_device(ia->ri_device->dma_device, sge->addr,
 				      sge->length, DMA_TO_DEVICE);
 	req->rl_send_wr.num_sge++;
 	return true;
@@ -505,7 +505,7 @@ rpcrdma_prepare_msg_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
 	sge[sge_no].addr = rdmab_addr(rb);
 	sge[sge_no].length = xdr->head[0].iov_len;
 	sge[sge_no].lkey = rdmab_lkey(rb);
-	ib_dma_sync_single_for_device(device, sge[sge_no].addr,
+	dma_sync_single_for_device(device->dma_device, sge[sge_no].addr,
 				      sge[sge_no].length, DMA_TO_DEVICE);
 
 	/* If there is a Read chunk, the page list is being handled
@@ -547,10 +547,10 @@ rpcrdma_prepare_msg_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
 				goto out_mapping_overflow;
 
 			len = min_t(u32, PAGE_SIZE - page_base, remaining);
-			sge[sge_no].addr = ib_dma_map_page(device, *ppages,
+			sge[sge_no].addr = dma_map_page(device->dma_device, *ppages,
 							   page_base, len,
 							   DMA_TO_DEVICE);
-			if (ib_dma_mapping_error(device, sge[sge_no].addr))
+			if (dma_mapping_error(device->dma_device, sge[sge_no].addr))
 				goto out_mapping_err;
 			sge[sge_no].length = len;
 			sge[sge_no].lkey = lkey;
@@ -574,10 +574,10 @@ rpcrdma_prepare_msg_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
 
 map_tail:
 		sge_no++;
-		sge[sge_no].addr = ib_dma_map_page(device, page,
+		sge[sge_no].addr = dma_map_page(device->dma_device, page,
 						   page_base, len,
 						   DMA_TO_DEVICE);
-		if (ib_dma_mapping_error(device, sge[sge_no].addr))
+		if (dma_mapping_error(device->dma_device, sge[sge_no].addr))
 			goto out_mapping_err;
 		sge[sge_no].length = len;
 		sge[sge_no].lkey = lkey;
@@ -628,7 +628,7 @@ rpcrdma_unmap_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req)
 
 	sge = &req->rl_send_sge[2];
 	for (count = req->rl_mapped_sges; count--; sge++)
-		ib_dma_unmap_page(device, sge->addr, sge->length,
+		dma_unmap_page(device->dma_device, sge->addr, sge->length,
 				  DMA_TO_DEVICE);
 	req->rl_mapped_sges = 0;
 }
diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
index 288e35c2d8f4..73f33fe09bbf 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
@@ -123,9 +123,9 @@ static int svc_rdma_bc_sendto(struct svcxprt_rdma *rdma,
 	ctxt->sge[0].lkey = rdma->sc_pd->local_dma_lkey;
 	ctxt->sge[0].length = sndbuf->len;
 	ctxt->sge[0].addr =
-	    ib_dma_map_page(rdma->sc_cm_id->device, ctxt->pages[0], 0,
+	    dma_map_page(rdma->sc_cm_id->device->dma_device, ctxt->pages[0], 0,
 			    sndbuf->len, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(rdma->sc_cm_id->device, ctxt->sge[0].addr)) {
+	if (dma_mapping_error(rdma->sc_cm_id->device->dma_device, ctxt->sge[0].addr)) {
 		ret = -EIO;
 		goto out_unmap;
 	}
diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index 57d35fbb1c28..28768d900258 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -151,11 +151,11 @@ int rdma_read_chunk_lcl(struct svcxprt_rdma *xprt,
 		rqstp->rq_respages = &rqstp->rq_arg.pages[pg_no+1];
 		rqstp->rq_next_page = rqstp->rq_respages + 1;
 		ctxt->sge[pno].addr =
-			ib_dma_map_page(xprt->sc_cm_id->device,
+			dma_map_page(xprt->sc_cm_id->device->dma_device,
 					head->arg.pages[pg_no], pg_off,
 					PAGE_SIZE - pg_off,
 					DMA_FROM_DEVICE);
-		ret = ib_dma_mapping_error(xprt->sc_cm_id->device,
+		ret = dma_mapping_error(xprt->sc_cm_id->device->dma_device,
 					   ctxt->sge[pno].addr);
 		if (ret)
 			goto err;
@@ -271,7 +271,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 	else
 		clear_bit(RDMACTXT_F_LAST_CTXT, &ctxt->flags);
 
-	dma_nents = ib_dma_map_sg(xprt->sc_cm_id->device,
+	dma_nents = dma_map_sg(xprt->sc_cm_id->device->dma_device,
 				  frmr->sg, frmr->sg_nents,
 				  frmr->direction);
 	if (!dma_nents) {
@@ -347,7 +347,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 	atomic_inc(&rdma_stat_read);
 	return ret;
  err:
-	ib_dma_unmap_sg(xprt->sc_cm_id->device,
+	dma_unmap_sg(xprt->sc_cm_id->device->dma_device,
 			frmr->sg, frmr->sg_nents, frmr->direction);
 	svc_rdma_put_context(ctxt, 0);
 	svc_rdma_put_frmr(xprt, frmr);
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index ad4d286a83c5..356c31e9468e 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -148,7 +148,7 @@ static dma_addr_t dma_map_xdr(struct svcxprt_rdma *xprt,
 			page = virt_to_page(xdr->tail[0].iov_base);
 		}
 	}
-	dma_addr = ib_dma_map_page(xprt->sc_cm_id->device, page, xdr_off,
+	dma_addr = dma_map_page(xprt->sc_cm_id->device->dma_device, page, xdr_off,
 				   min_t(size_t, PAGE_SIZE, len), dir);
 	return dma_addr;
 }
@@ -269,7 +269,7 @@ static int send_write(struct svcxprt_rdma *xprt, struct svc_rqst *rqstp,
 			dma_map_xdr(xprt, &rqstp->rq_res, xdr_off,
 				    sge_bytes, DMA_TO_DEVICE);
 		xdr_off += sge_bytes;
-		if (ib_dma_mapping_error(xprt->sc_cm_id->device,
+		if (dma_mapping_error(xprt->sc_cm_id->device->dma_device,
 					 sge[sge_no].addr))
 			goto err;
 		svc_rdma_count_mappings(xprt, ctxt);
@@ -478,9 +478,9 @@ static int send_reply(struct svcxprt_rdma *rdma,
 	ctxt->sge[0].lkey = rdma->sc_pd->local_dma_lkey;
 	ctxt->sge[0].length = svc_rdma_xdr_get_reply_hdr_len(rdma_resp);
 	ctxt->sge[0].addr =
-	    ib_dma_map_page(rdma->sc_cm_id->device, page, 0,
+	    dma_map_page(rdma->sc_cm_id->device->dma_device, page, 0,
 			    ctxt->sge[0].length, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(rdma->sc_cm_id->device, ctxt->sge[0].addr))
+	if (dma_mapping_error(rdma->sc_cm_id->device->dma_device, ctxt->sge[0].addr))
 		goto err;
 	svc_rdma_count_mappings(rdma, ctxt);
 
@@ -495,7 +495,7 @@ static int send_reply(struct svcxprt_rdma *rdma,
 			dma_map_xdr(rdma, &rqstp->rq_res, xdr_off,
 				    sge_bytes, DMA_TO_DEVICE);
 		xdr_off += sge_bytes;
-		if (ib_dma_mapping_error(rdma->sc_cm_id->device,
+		if (dma_mapping_error(rdma->sc_cm_id->device->dma_device,
 					 ctxt->sge[sge_no].addr))
 			goto err;
 		svc_rdma_count_mappings(rdma, ctxt);
@@ -677,9 +677,9 @@ void svc_rdma_send_error(struct svcxprt_rdma *xprt, struct rpcrdma_msg *rmsgp,
 	/* Prepare SGE for local address */
 	ctxt->sge[0].lkey = xprt->sc_pd->local_dma_lkey;
 	ctxt->sge[0].length = length;
-	ctxt->sge[0].addr = ib_dma_map_page(xprt->sc_cm_id->device,
+	ctxt->sge[0].addr = dma_map_page(xprt->sc_cm_id->device->dma_device,
 					    p, 0, length, DMA_TO_DEVICE);
-	if (ib_dma_mapping_error(xprt->sc_cm_id->device, ctxt->sge[0].addr)) {
+	if (dma_mapping_error(xprt->sc_cm_id->device->dma_device, ctxt->sge[0].addr)) {
 		dprintk("svcrdma: Error mapping buffer for protocol error\n");
 		svc_rdma_put_context(ctxt, 1);
 		return;
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index ca2799af05a6..b9e86e3f0fad 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -237,7 +237,7 @@ void svc_rdma_unmap_dma(struct svc_rdma_op_ctxt *ctxt)
 		 * last WR that uses it completes.
 		 */
 		if (ctxt->sge[i].lkey == lkey)
-			ib_dma_unmap_page(device,
+			dma_unmap_page(device->dma_device,
 					    ctxt->sge[i].addr,
 					    ctxt->sge[i].length,
 					    ctxt->direction);
@@ -600,10 +600,10 @@ int svc_rdma_post_recv(struct svcxprt_rdma *xprt, gfp_t flags)
 		if (!page)
 			goto err_put_ctxt;
 		ctxt->pages[sge_no] = page;
-		pa = ib_dma_map_page(xprt->sc_cm_id->device,
+		pa = dma_map_page(xprt->sc_cm_id->device->dma_device,
 				     page, 0, PAGE_SIZE,
 				     DMA_FROM_DEVICE);
-		if (ib_dma_mapping_error(xprt->sc_cm_id->device, pa))
+		if (dma_mapping_error(xprt->sc_cm_id->device->dma_device, pa))
 			goto err_put_ctxt;
 		svc_rdma_count_mappings(xprt, ctxt);
 		ctxt->sge[sge_no].addr = pa;
@@ -941,7 +941,7 @@ void svc_rdma_put_frmr(struct svcxprt_rdma *rdma,
 		       struct svc_rdma_fastreg_mr *frmr)
 {
 	if (frmr) {
-		ib_dma_unmap_sg(rdma->sc_cm_id->device,
+		dma_unmap_sg(rdma->sc_cm_id->device->dma_device,
 				frmr->sg, frmr->sg_nents, frmr->direction);
 		spin_lock_bh(&rdma->sc_frmr_q_lock);
 		WARN_ON_ONCE(!list_empty(&frmr->frmr_list));
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 11d07748f699..613040464ccc 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -179,7 +179,7 @@ rpcrdma_wc_receive(struct ib_cq *cq, struct ib_wc *wc)
 	rep->rr_wc_flags = wc->wc_flags;
 	rep->rr_inv_rkey = wc->ex.invalidate_rkey;
 
-	ib_dma_sync_single_for_cpu(rep->rr_device,
+	dma_sync_single_for_cpu(rep->rr_device->dma_device,
 				   rdmab_addr(rep->rr_rdmabuf),
 				   rep->rr_len, DMA_FROM_DEVICE);
 
@@ -1259,11 +1259,11 @@ __rpcrdma_dma_map_regbuf(struct rpcrdma_ia *ia, struct rpcrdma_regbuf *rb)
 	if (rb->rg_direction == DMA_NONE)
 		return false;
 
-	rb->rg_iov.addr = ib_dma_map_single(ia->ri_device,
+	rb->rg_iov.addr = dma_map_single(ia->ri_device->dma_device,
 					    (void *)rb->rg_base,
 					    rdmab_length(rb),
 					    rb->rg_direction);
-	if (ib_dma_mapping_error(ia->ri_device, rdmab_addr(rb)))
+	if (dma_mapping_error(ia->ri_device->dma_device, rdmab_addr(rb)))
 		return false;
 
 	rb->rg_device = ia->ri_device;
@@ -1277,7 +1277,7 @@ rpcrdma_dma_unmap_regbuf(struct rpcrdma_regbuf *rb)
 	if (!rpcrdma_regbuf_is_mapped(rb))
 		return;
 
-	ib_dma_unmap_single(rb->rg_device, rdmab_addr(rb),
+	dma_unmap_single(rb->rg_device->dma_device, rdmab_addr(rb),
 			    rdmab_length(rb), rb->rg_direction);
 	rb->rg_device = NULL;
 }
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH 7/9] RDS: IB: Remove an unused structure member
  2017-01-11  0:56 ` [PATCH 7/9] RDS: IB: Remove an unused structure member Bart Van Assche
@ 2017-01-11  1:21   ` santosh.shilimkar
  0 siblings, 0 replies; 17+ messages in thread
From: santosh.shilimkar @ 2017-01-11  1:21 UTC (permalink / raw)
  To: Bart Van Assche, Doug Ledford
  Cc: linux-rdma, linux-kernel, Santosh Shilimkar, David S . Miller,
	netdev, rds-devel

On 1/10/17 4:56 PM, Bart Van Assche wrote:
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/9] IB: Optimize DMA mapping
  2017-01-11  0:56 [PATCH 0/9] IB: Optimize DMA mapping Bart Van Assche
                   ` (6 preceding siblings ...)
  2017-01-11  0:56 ` [PATCH 9/9] treewide: Inline ib_dma_map_*() functions Bart Van Assche
@ 2017-01-11  1:28 ` santosh.shilimkar
  7 siblings, 0 replies; 17+ messages in thread
From: santosh.shilimkar @ 2017-01-11  1:28 UTC (permalink / raw)
  To: Bart Van Assche, Doug Ledford; +Cc: linux-rdma, linux-kernel

On 1/10/17 4:56 PM, Bart Van Assche wrote:
> Hello Doug,
>
> As you know there are two sets of DMA mapping operations in the Linux
> kernel:
> - One set of DMA mapping operations that is used by most drivers.
> - Another set of DMA mapping operations that is only used by the RDMA
>   drivers.
> Having two sets of DMA mapping operations is not only a source of
> confusion but also a source of unnecessary overhead. The DMA mapping
> operations are in the hot path so it is important that the overhead
> of these operations is as low as possible. Hence this patch series
> that converts the RDMA code to the standard DMA mapping API and
> thereby eliminates the if (dev->dma_ops) test from the hot path. An
> additional benefit is that the size of HW and SW drivers that do not
> use DMA is reduced by switching to dma_virt_ops.
>
This is a really good series. I was always wondering why the extra
indirection was added in the first place on the streaming APIs.
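
(For context, the indirection in question is the old ib_dma_*() wrapper
pattern in include/rdma/ib_verbs.h that this series removes. Roughly, as
a reconstructed sketch rather than verbatim kernel code:

	static inline u64 ib_dma_map_single(struct ib_device *dev,
					    void *cpu_addr, size_t size,
					    enum dma_data_direction direction)
	{
		/* per-device override: an extra branch on every mapping */
		if (dev->dma_ops)
			return dev->dma_ops->map_single(dev, cpu_addr,
							size, direction);
		return dma_map_single(dev->dma_device, cpu_addr, size,
				      direction);
	}

Once dma_ops lives in struct device, the generic dma_map_single() already
dispatches per device, so the wrapper and its branch become redundant.)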

Regards,
Santosh

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/9] dma: Add dma_virt_ops
  2017-01-11  0:56 ` [PATCH 3/9] dma: Add dma_virt_ops Bart Van Assche
@ 2017-01-11  8:56   ` Christoph Hellwig
  2017-01-12  0:07     ` Bart Van Assche
  0 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2017-01-11  8:56 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, linux-rdma, linux-kernel, Christian Borntraeger,
	Joerg Roedel, Andy Lutomirski, Michael S . Tsirkin

> +lib-$(CONFIG_HAS_DMA) += dma-virt.o

There probably should be a config option for it for two reasons:

 - do not bloat kernels that don't need it.
 - the feature can only work for 32-bit architectures or for
   64-bit architectures that set ARCH_DMA_ADDR_T_64BIT.
   Alternatively this option would have to force
   ARCH_DMA_ADDR_T_64BIT when not yet set for 64-bit architectures.

And yes, this is currently broken already, but we'd better fix it.
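
A minimal sketch of what such an option could look like (the symbol name
is illustrative; the dependency expresses that dma_addr_t must be wide
enough to hold a kernel virtual address):

	config DMA_VIRT_OPS
		bool
		depends on HAS_DMA && (!64BIT || ARCH_DMA_ADDR_T_64BIT)

with lib/Makefile building it only when selected:

	lib-$(CONFIG_DMA_VIRT_OPS) += dma-virt.o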

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/9] dma: Add dma_virt_ops
  2017-01-11  8:56   ` Christoph Hellwig
@ 2017-01-12  0:07     ` Bart Van Assche
  0 siblings, 0 replies; 17+ messages in thread
From: Bart Van Assche @ 2017-01-12  0:07 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Doug Ledford, linux-rdma, linux-kernel, Christian Borntraeger,
	Joerg Roedel, Andy Lutomirski, Michael S . Tsirkin

On 01/11/2017 12:56 AM, Christoph Hellwig wrote:
>> +lib-$(CONFIG_HAS_DMA) += dma-virt.o
> 
> There probably should be a config option for it for two reasons:
> 
>  - do not bloat kernels that don't need it.
>  - the feature can only work for 32-bit architectures or for
>    64-bit architectures that set ARCH_DMA_ADDR_T_64BIT.
>    Alternatively this option would have to force
>    ARCH_DMA_ADDR_T_64BIT when not yet set for 64-bit architectures.
> 
> And yes, this is currently broken already, but we'd better fix it.

Hello Christoph,

That sounds like a good idea to me. I will make sure that both
dma_noop_ops and dma_virt_ops are only built if needed.
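
Concretely, something along these lines (a sketch; the final symbol names
may differ), giving each file its own default-off symbol in lib/Makefile
that gets selected only by the code that needs it:

	lib-$(CONFIG_DMA_NOOP_OPS) += dma-noop.o
	lib-$(CONFIG_DMA_VIRT_OPS) += dma-virt.o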

Bart.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 9/9] treewide: Inline ib_dma_map_*() functions
  2017-01-11  0:56 ` [PATCH 9/9] treewide: Inline ib_dma_map_*() functions Bart Van Assche
@ 2017-01-12 11:45   ` Sagi Grimberg
  2017-01-12 13:09   ` Leon Romanovsky
  1 sibling, 0 replies; 17+ messages in thread
From: Sagi Grimberg @ 2017-01-12 11:45 UTC (permalink / raw)
  To: Bart Van Assche, Doug Ledford
  Cc: Latchesar Ionkov, devel, linux-nfs, Andreas Dilger, linux-rdma,
	netdev, Trond Myklebust, linux-kernel, linux-nvme,
	Anna Schumaker, Oleg Drokin, Eric Van Hensbergen, target-devel,
	Ron Minnich, James Simmons, v9fs-developer, rds-devel,
	David S . Miller, lustre-devel

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 9/9] treewide: Inline ib_dma_map_*() functions
  2017-01-11  0:56 ` [PATCH 9/9] treewide: Inline ib_dma_map_*() functions Bart Van Assche
  2017-01-12 11:45   ` Sagi Grimberg
@ 2017-01-12 13:09   ` Leon Romanovsky
  1 sibling, 0 replies; 17+ messages in thread
From: Leon Romanovsky @ 2017-01-12 13:09 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, Latchesar Ionkov, devel, linux-nfs, Andreas Dilger,
	linux-rdma, netdev, Trond Myklebust, linux-kernel, linux-nvme,
	Anna Schumaker, Oleg Drokin, Eric Van Hensbergen, target-devel,
	Ron Minnich, James Simmons, v9fs-developer, rds-devel,
	David S . Miller, lustre-devel

On Tue, Jan 10, 2017 at 04:56:48PM -0800, Bart Van Assche wrote:
> Almost all changes in this patch except the removal of local variables
> that became superfluous and the actual removal of the ib_dma_map_*()
> functions have been generated as follows:
>
> git grep -lE 'ib_(sg_|)dma_' |
>   xargs -d\\n \
>     sed -i -e 's/\([^[:alnum:]_]\)ib_dma_\([^(]*\)(\&\([^,]\+\),/\1dma_\2(\3.dma_device,/g' \
>            -e 's/\([^[:alnum:]_]\)ib_dma_\([^(]*\)(\([^,]\+\),/\1dma_\2(\3->dma_device,/g' \
> 	   -e 's/ib_sg_dma_\(len\|address\)(\([^,]\+\), /sg_dma_\1(/g'
>
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Cc: Andreas Dilger <andreas.dilger@intel.com>
> Cc: Anna Schumaker <anna.schumaker@netapp.com>
> Cc: David S. Miller <davem@davemloft.net>
> Cc: Eric Van Hensbergen <ericvh@gmail.com>
> Cc: James Simmons <jsimmons@infradead.org>
> Cc: Latchesar Ionkov <lucho@ionkov.net>
> Cc: Oleg Drokin <oleg.drokin@intel.com>
> Cc: Ron Minnich <rminnich@sandia.gov>
> Cc: Trond Myklebust <trond.myklebust@primarydata.com>
> Cc: devel@driverdev.osuosl.org
> Cc: linux-nfs@vger.kernel.org
> Cc: linux-nvme@lists.infradead.org
> Cc: linux-rdma@vger.kernel.org
> Cc: lustre-devel@lists.lustre.org
> Cc: netdev@vger.kernel.org
> Cc: rds-devel@oss.oracle.com
> Cc: target-devel@vger.kernel.org
> Cc: v9fs-developer@lists.sourceforge.net
> ---
>  drivers/infiniband/core/mad.c                      |  28 +--
>  drivers/infiniband/core/rw.c                       |  30 ++-
>  drivers/infiniband/core/umem.c                     |   4 +-
>  drivers/infiniband/core/umem_odp.c                 |   6 +-
>  drivers/infiniband/hw/mlx4/cq.c                    |   2 +-
>  drivers/infiniband/hw/mlx4/mad.c                   |  28 +--
>  drivers/infiniband/hw/mlx4/mr.c                    |   4 +-
>  drivers/infiniband/hw/mlx4/qp.c                    |  10 +-
>  drivers/infiniband/hw/mlx5/mr.c                    |   4 +-

For mlx5 and mlx4 parts.
Acked-by: Leon Romanovsky <leonro@mellanox.com>

Thanks
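
To illustrate the effect of the sed script in the quoted commit
message, a hypothetical call site would be rewritten roughly as
follows; the function and variable names here are invented for the
example:

  /* Before: through the ib_dma_*() wrappers (returns u64). */
  static u64 map_tx_buf_old(struct ib_device *ib_dev, void *buf, size_t len)
  {
          return ib_dma_map_single(ib_dev, buf, len, DMA_TO_DEVICE);
  }

  /* After the rewrite: direct calls into the common DMA mapping API. */
  static dma_addr_t map_tx_buf_new(struct ib_device *ib_dev, void *buf,
                                   size_t len)
  {
          return dma_map_single(ib_dev->dma_device, buf, len, DMA_TO_DEVICE);
  }

The third sed expression similarly turns ib_sg_dma_len(ib_dev, sg)
into sg_dma_len(sg), dropping the now-unused device argument.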

* Re: [PATCH 8/9] IB: Convert ib_dma_*_coherent() argument type from u64 into dma_addr_t
  2017-01-11  0:56 ` [PATCH 8/9] IB: Convert ib_dma_*_coherent() argument type from u64 into dma_addr_t Bart Van Assche
@ 2017-01-12 13:12   ` Leon Romanovsky
  0 siblings, 0 replies; 17+ messages in thread
From: Leon Romanovsky @ 2017-01-12 13:12 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, linux-rdma, linux-kernel, David S . Miller, netdev,
	rds-devel

On Tue, Jan 10, 2017 at 04:56:47PM -0800, Bart Van Assche wrote:
> This patch does not change any functionality.
>
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: David S. Miller <davem@davemloft.net>
> Cc: linux-rdma@vger.kernel.org
> Cc: netdev@vger.kernel.org
> Cc: rds-devel@oss.oracle.com
> ---
>  include/rdma/ib_verbs.h | 11 +++--------
>  net/rds/ib.h            |  6 +++---
>  2 files changed, 6 insertions(+), 11 deletions(-)
>

Thanks,
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
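
For context, a hedged sketch of what this conversion does to the
coherent-allocation wrapper; the "before" version is simplified and
assumes the dev->dma_ops branch has already been removed by patch 6:

  /* Before: u64 handle, bounced through a local dma_addr_t. */
  static inline void *ib_dma_alloc_coherent(struct ib_device *dev,
                                            size_t size, u64 *dma_handle,
                                            gfp_t flag)
  {
          dma_addr_t handle;
          void *ret;

          ret = dma_alloc_coherent(dev->dma_device, size, &handle, flag);
          *dma_handle = handle;
          return ret;
  }

  /* After: dma_addr_t end to end, no intermediate copy. */
  static inline void *ib_dma_alloc_coherent(struct ib_device *dev,
                                            size_t size,
                                            dma_addr_t *dma_handle,
                                            gfp_t flag)
  {
          return dma_alloc_coherent(dev->dma_device, size, dma_handle, flag);
  }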

* Re: [PATCH 5/9] IB/qib: Remove DMA mapping code
  2017-01-11  0:56 ` [PATCH 5/9] IB/qib: " Bart Van Assche
@ 2017-01-12 13:15   ` Leon Romanovsky
  0 siblings, 0 replies; 17+ messages in thread
From: Leon Romanovsky @ 2017-01-12 13:15 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, linux-rdma, linux-kernel, Mike Marciniszyn,
	Dennis Dalessandro

On Tue, Jan 10, 2017 at 04:56:44PM -0800, Bart Van Assche wrote:
> The qib DMA mapping code is no longer built since commit eb636ac0e49e
> ("IB/qib: Remove dma.c and use rdmavt version of dma functions"). Hence
> remove it.
>
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
> ---
>  drivers/infiniband/hw/qib/qib_dma.c  | 169 -----------------------------------
>  drivers/infiniband/hw/qib/qib_keys.c |   5 +-
>  2 files changed, 1 insertion(+), 173 deletions(-)
>  delete mode 100644 drivers/infiniband/hw/qib/qib_dma.c
>

Nice diff stat.

* Re: [PATCH 6/9] IB: Use dma_virt_ops instead of duplicating it
  2017-01-11  0:56 ` [PATCH 6/9] IB: Use dma_virt_ops instead of duplicating it Bart Van Assche
@ 2017-01-12 13:17   ` Leon Romanovsky
  0 siblings, 0 replies; 17+ messages in thread
From: Leon Romanovsky @ 2017-01-12 13:17 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Doug Ledford, linux-rdma, linux-kernel, Christoph Hellwig,
	Andrew Boyer, Dennis Dalessandro, Jonathan Toppins, Alex Estrin

On Tue, Jan 10, 2017 at 04:56:45PM -0800, Bart Van Assche wrote:
> Additionally, switch from struct ib_dma_mapping_ops to struct
> dma_mapping_ops. Update the comments that referred to the source
> files removed by this patch.
>
> This patch eliminates one branch from every ib_dma_map_*() call.
>
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Andrew Boyer <andrew.boyer@dell.com>
> Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
> Cc: Jonathan Toppins <jtoppins@redhat.com>
> Cc: Alex Estrin <alex.estrin@intel.com>
> ---
>  drivers/infiniband/sw/rdmavt/Makefile |   2 +-
>  drivers/infiniband/sw/rdmavt/dma.c    | 198 ----------------------------------
>  drivers/infiniband/sw/rdmavt/dma.h    |  53 ---------
>  drivers/infiniband/sw/rdmavt/mr.c     |   8 +-
>  drivers/infiniband/sw/rdmavt/vt.c     |   5 +-
>  drivers/infiniband/sw/rdmavt/vt.h     |   1 -
>  drivers/infiniband/sw/rxe/Makefile    |   1 -
>  drivers/infiniband/sw/rxe/rxe_dma.c   | 183 -------------------------------
>  drivers/infiniband/sw/rxe/rxe_loc.h   |   2 -
>  drivers/infiniband/sw/rxe/rxe_verbs.c |   3 +-
>  include/rdma/ib_verbs.h               | 117 +++-----------------
>  11 files changed, 25 insertions(+), 548 deletions(-)
>  delete mode 100644 drivers/infiniband/sw/rdmavt/dma.c
>  delete mode 100644 drivers/infiniband/sw/rdmavt/dma.h
>  delete mode 100644 drivers/infiniband/sw/rxe/rxe_dma.c
>

Thanks,
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
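
A hedged sketch of what the software drivers end up doing after this
patch: rather than carrying a private ops table, they attach the
generic dma_virt_ops to their struct device. The helper name below is
invented for the example; the real assignments live in the rdmavt and
rxe setup paths:

  #include <linux/dma-mapping.h>

  extern const struct dma_map_ops dma_virt_ops;

  /* Hypothetical helper: opt a software-only device in to dma_virt_ops. */
  static void sw_rdma_use_virt_dma(struct device *dev)
  {
          dev->dma_ops = &dma_virt_ops;   /* dev->dma_ops exists after patch 2 */
  }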

end of thread

Thread overview: 17+ messages
2017-01-11  0:56 [PATCH 0/9] IB: Optimize DMA mapping Bart Van Assche
2017-01-11  0:56 ` [PATCH 3/9] dma: Add dma_virt_ops Bart Van Assche
2017-01-11  8:56   ` Christoph Hellwig
2017-01-12  0:07     ` Bart Van Assche
2017-01-11  0:56 ` [PATCH 4/9] IB/hf1: Remove DMA mapping code Bart Van Assche
2017-01-11  0:56 ` [PATCH 5/9] IB/qib: " Bart Van Assche
2017-01-12 13:15   ` Leon Romanovsky
2017-01-11  0:56 ` [PATCH 6/9] IB: Use dma_virt_ops instead of duplicating it Bart Van Assche
2017-01-12 13:17   ` Leon Romanovsky
2017-01-11  0:56 ` [PATCH 7/9] RDS: IB: Remove an unused structure member Bart Van Assche
2017-01-11  1:21   ` santosh.shilimkar
2017-01-11  0:56 ` [PATCH 8/9] IB: Convert ib_dma_*_coherent() argument type from u64 into dma_addr_t Bart Van Assche
2017-01-12 13:12   ` Leon Romanovsky
2017-01-11  0:56 ` [PATCH 9/9] treewide: Inline ib_dma_map_*() functions Bart Van Assche
2017-01-12 11:45   ` Sagi Grimberg
2017-01-12 13:09   ` Leon Romanovsky
2017-01-11  1:28 ` [PATCH 0/9] IB: Optimize DMA mapping santosh.shilimkar
