* memory registration updates
@ 2015-11-22 17:46 Christoph Hellwig
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

This series removes huge chunks of code related to old memory
registration methods that we don't use anymore, and then simplifies
the current memory registration API.

This expects my "IB: merge struct ib_device_attr into struct ib_device"
patch to already be applied.

Also available as a git tree:

	http://git.infradead.org/users/hch/rdma.git/shortlog/refs/heads/rdma-mr
	git://git.infradead.org/users/hch/rdma.git rdma-mr

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* [PATCH 01/11] IB: start documenting device capabilities
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
@ 2015-11-22 17:46   ` Christoph Hellwig
  2015-11-22 17:46   ` [PATCH 02/11] IB: remove ib_query_mr Christoph Hellwig
                     ` (11 subsequent siblings)
  12 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Just IB_DEVICE_LOCAL_DMA_LKEY and IB_DEVICE_MEM_MGT_EXTENSIONS for now,
as I'm most familiar with those.

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
Reviewed-by: Sagi Grimberg <sagig-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Jason Gunthorpe <jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
---
 include/rdma/ib_verbs.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 45ce36e..6034f92 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -120,6 +120,14 @@ enum ib_device_cap_flags {
 	IB_DEVICE_RC_RNR_NAK_GEN	= (1<<12),
 	IB_DEVICE_SRQ_RESIZE		= (1<<13),
 	IB_DEVICE_N_NOTIFY_CQ		= (1<<14),
+
+	/*
+	 * This device supports a per-device lkey or stag that can be
+	 * used without performing a memory registration for the local
+	 * memory.  Note that ULPs should never check this flag, but
+	 * instead use the local_dma_lkey field in the ib_pd structure,
+	 * which will always contain a usable lkey.
+	 */
 	IB_DEVICE_LOCAL_DMA_LKEY	= (1<<15),
 	IB_DEVICE_RESERVED		= (1<<16), /* old SEND_W_INV */
 	IB_DEVICE_MEM_WINDOW		= (1<<17),
@@ -133,6 +141,16 @@ enum ib_device_cap_flags {
 	IB_DEVICE_UD_IP_CSUM		= (1<<18),
 	IB_DEVICE_UD_TSO		= (1<<19),
 	IB_DEVICE_XRC			= (1<<20),
+
+	/*
+	 * This device supports the IB "base memory management extension",
+	 * which includes support for fast registrations (IB_WR_REG_MR,
+	 * IB_WR_LOCAL_INV and IB_WR_SEND_WITH_INV verbs).  This flag should
+	 * also be set by any iWarp device which must support FRs to comply
+	 * with the iWarp verbs spec.  iWarp devices also support the
+	 * IB_WR_RDMA_READ_WITH_INV verb for RDMA READs that invalidate the
+	 * stag.
+	 */
 	IB_DEVICE_MEM_MGT_EXTENSIONS	= (1<<21),
 	IB_DEVICE_BLOCK_MULTICAST_LOOPBACK = (1<<22),
 	IB_DEVICE_MEM_WINDOW_TYPE_2A	= (1<<23),
-- 
1.9.1
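The recommendation in the new comment -- take the lkey from the PD rather
than checking IB_DEVICE_LOCAL_DMA_LKEY -- would look roughly like this in a
consumer. A minimal sketch only: the kernel structures are reduced to
illustrative stand-ins, and fill_local_sge is a hypothetical helper, not an
API from this series.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel structures: the field names mirror
 * include/rdma/ib_verbs.h, but these definitions are illustrative only. */
struct ib_pd {
	uint32_t local_dma_lkey;
};

struct ib_sge {
	uint64_t addr;
	uint32_t length;
	uint32_t lkey;
};

/* Fill a local SGE the way the comment recommends: take the lkey from the
 * PD instead of testing the device capability flag, since the PD's
 * local_dma_lkey is always usable regardless of whether the device
 * supports IB_DEVICE_LOCAL_DMA_LKEY natively. */
static void fill_local_sge(struct ib_sge *sge, const struct ib_pd *pd,
			   uint64_t dma_addr, uint32_t len)
{
	sge->addr = dma_addr;
	sge->length = len;
	sge->lkey = pd->local_dma_lkey;
}
```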



* [PATCH 02/11] IB: remove ib_query_mr
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
  2015-11-22 17:46   ` [PATCH 01/11] IB: start documenting device capabilities Christoph Hellwig
@ 2015-11-22 17:46   ` Christoph Hellwig
  2015-11-22 17:46   ` [PATCH 03/11] IB: remove support for phys MRs Christoph Hellwig
                     ` (10 subsequent siblings)
  12 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

This functionality has no users and was only supported by the staged-out
EHCA driver.

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
 drivers/infiniband/core/verbs.c         |  7 -----
 drivers/staging/rdma/ehca/ehca_iverbs.h |  2 --
 drivers/staging/rdma/ehca/ehca_main.c   |  1 -
 drivers/staging/rdma/ehca/ehca_mrmw.c   | 49 ---------------------------------
 include/rdma/ib_verbs.h                 | 18 ------------
 5 files changed, 77 deletions(-)

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 45c197a..0e21367 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -1216,13 +1216,6 @@ struct ib_mr *ib_get_dma_mr(struct ib_pd *pd, int mr_access_flags)
 }
 EXPORT_SYMBOL(ib_get_dma_mr);
 
-int ib_query_mr(struct ib_mr *mr, struct ib_mr_attr *mr_attr)
-{
-	return mr->device->query_mr ?
-		mr->device->query_mr(mr, mr_attr) : -ENOSYS;
-}
-EXPORT_SYMBOL(ib_query_mr);
-
 int ib_dereg_mr(struct ib_mr *mr)
 {
 	struct ib_pd *pd;
diff --git a/drivers/staging/rdma/ehca/ehca_iverbs.h b/drivers/staging/rdma/ehca/ehca_iverbs.h
index 75c9876..4a45ca3 100644
--- a/drivers/staging/rdma/ehca/ehca_iverbs.h
+++ b/drivers/staging/rdma/ehca/ehca_iverbs.h
@@ -94,8 +94,6 @@ int ehca_rereg_phys_mr(struct ib_mr *mr,
 		       struct ib_phys_buf *phys_buf_array,
 		       int num_phys_buf, int mr_access_flags, u64 *iova_start);
 
-int ehca_query_mr(struct ib_mr *mr, struct ib_mr_attr *mr_attr);
-
 int ehca_dereg_mr(struct ib_mr *mr);
 
 struct ib_mw *ehca_alloc_mw(struct ib_pd *pd, enum ib_mw_type type);
diff --git a/drivers/staging/rdma/ehca/ehca_main.c b/drivers/staging/rdma/ehca/ehca_main.c
index 285e560..0be7959 100644
--- a/drivers/staging/rdma/ehca/ehca_main.c
+++ b/drivers/staging/rdma/ehca/ehca_main.c
@@ -513,7 +513,6 @@ static int ehca_init_device(struct ehca_shca *shca)
 	shca->ib_device.get_dma_mr	    = ehca_get_dma_mr;
 	shca->ib_device.reg_phys_mr	    = ehca_reg_phys_mr;
 	shca->ib_device.reg_user_mr	    = ehca_reg_user_mr;
-	shca->ib_device.query_mr	    = ehca_query_mr;
 	shca->ib_device.dereg_mr	    = ehca_dereg_mr;
 	shca->ib_device.rereg_phys_mr	    = ehca_rereg_phys_mr;
 	shca->ib_device.alloc_mw	    = ehca_alloc_mw;
diff --git a/drivers/staging/rdma/ehca/ehca_mrmw.c b/drivers/staging/rdma/ehca/ehca_mrmw.c
index f914b30..eb274c1 100644
--- a/drivers/staging/rdma/ehca/ehca_mrmw.c
+++ b/drivers/staging/rdma/ehca/ehca_mrmw.c
@@ -589,55 +589,6 @@ rereg_phys_mr_exit0:
 	return ret;
 } /* end ehca_rereg_phys_mr() */
 
-/*----------------------------------------------------------------------*/
-
-int ehca_query_mr(struct ib_mr *mr, struct ib_mr_attr *mr_attr)
-{
-	int ret = 0;
-	u64 h_ret;
-	struct ehca_shca *shca =
-		container_of(mr->device, struct ehca_shca, ib_device);
-	struct ehca_mr *e_mr = container_of(mr, struct ehca_mr, ib.ib_mr);
-	unsigned long sl_flags;
-	struct ehca_mr_hipzout_parms hipzout;
-
-	if ((e_mr->flags & EHCA_MR_FLAG_FMR)) {
-		ehca_err(mr->device, "not supported for FMR, mr=%p e_mr=%p "
-			 "e_mr->flags=%x", mr, e_mr, e_mr->flags);
-		ret = -EINVAL;
-		goto query_mr_exit0;
-	}
-
-	memset(mr_attr, 0, sizeof(struct ib_mr_attr));
-	spin_lock_irqsave(&e_mr->mrlock, sl_flags);
-
-	h_ret = hipz_h_query_mr(shca->ipz_hca_handle, e_mr, &hipzout);
-	if (h_ret != H_SUCCESS) {
-		ehca_err(mr->device, "hipz_mr_query failed, h_ret=%lli mr=%p "
-			 "hca_hndl=%llx mr_hndl=%llx lkey=%x",
-			 h_ret, mr, shca->ipz_hca_handle.handle,
-			 e_mr->ipz_mr_handle.handle, mr->lkey);
-		ret = ehca2ib_return_code(h_ret);
-		goto query_mr_exit1;
-	}
-	mr_attr->pd = mr->pd;
-	mr_attr->device_virt_addr = hipzout.vaddr;
-	mr_attr->size = hipzout.len;
-	mr_attr->lkey = hipzout.lkey;
-	mr_attr->rkey = hipzout.rkey;
-	ehca_mrmw_reverse_map_acl(&hipzout.acl, &mr_attr->mr_access_flags);
-
-query_mr_exit1:
-	spin_unlock_irqrestore(&e_mr->mrlock, sl_flags);
-query_mr_exit0:
-	if (ret)
-		ehca_err(mr->device, "ret=%i mr=%p mr_attr=%p",
-			 ret, mr, mr_attr);
-	return ret;
-} /* end ehca_query_mr() */
-
-/*----------------------------------------------------------------------*/
-
 int ehca_dereg_mr(struct ib_mr *mr)
 {
 	int ret = 0;
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 6034f92..83d6ee8 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1165,15 +1165,6 @@ struct ib_phys_buf {
 	u64      size;
 };
 
-struct ib_mr_attr {
-	struct ib_pd	*pd;
-	u64		device_virt_addr;
-	u64		size;
-	int		mr_access_flags;
-	u32		lkey;
-	u32		rkey;
-};
-
 enum ib_mr_rereg_flags {
 	IB_MR_REREG_TRANS	= 1,
 	IB_MR_REREG_PD		= (1<<1),
@@ -1709,8 +1700,6 @@ struct ib_device {
 						    int mr_access_flags,
 						    struct ib_pd *pd,
 						    struct ib_udata *udata);
-	int                        (*query_mr)(struct ib_mr *mr,
-					       struct ib_mr_attr *mr_attr);
 	int                        (*dereg_mr)(struct ib_mr *mr);
 	struct ib_mr *		   (*alloc_mr)(struct ib_pd *pd,
 					       enum ib_mr_type mr_type,
@@ -2850,13 +2839,6 @@ static inline void ib_dma_free_coherent(struct ib_device *dev,
 }
 
 /**
- * ib_query_mr - Retrieves information about a specific memory region.
- * @mr: The memory region to retrieve information about.
- * @mr_attr: The attributes of the specified memory region.
- */
-int ib_query_mr(struct ib_mr *mr, struct ib_mr_attr *mr_attr);
-
-/**
  * ib_dereg_mr - Deregisters a memory region and removes it from the
  *   HCA translation table.
  * @mr: The memory region to deregister.
-- 
1.9.1
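With ib_query_mr() gone, a consumer that wants the MR attributes simply
keeps the lkey/rkey the core filled in at registration time instead of
querying them back later. A minimal sketch under stated assumptions:
struct ib_mr is reduced to the two fields that matter here, and buf_desc
and cache_mr_keys are hypothetical names for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for struct ib_mr: the core sets lkey/rkey when the
 * MR is created, so nothing remains for a later query call to report. */
struct ib_mr {
	uint32_t lkey;
	uint32_t rkey;
};

/* Hypothetical per-buffer descriptor a ULP might keep around. */
struct buf_desc {
	uint32_t lkey;
	uint32_t rkey;
};

/* Record the keys once at registration time rather than calling the
 * removed ib_query_mr() later. */
static void cache_mr_keys(struct buf_desc *d, const struct ib_mr *mr)
{
	d->lkey = mr->lkey;
	d->rkey = mr->rkey;
}
```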



* [PATCH 03/11] IB: remove support for phys MRs
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
  2015-11-22 17:46   ` [PATCH 01/11] IB: start documenting device capabilities Christoph Hellwig
  2015-11-22 17:46   ` [PATCH 02/11] IB: remove ib_query_mr Christoph Hellwig
@ 2015-11-22 17:46   ` Christoph Hellwig
       [not found]     ` <1448214409-7729-4-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
  2015-11-22 17:46   ` [PATCH 04/11] IB: remove in-kernel support for memory windows Christoph Hellwig
                     ` (9 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

We stopped using phys MRs in the kernel a while ago, so let's
remove all the cruft used to implement them.

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
Reviewed-by: Steve Wise <swise-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW@public.gmane.org>	[cxgb3, cxgb4]
---
 drivers/infiniband/hw/cxgb3/iwch_mem.c       |  31 ---
 drivers/infiniband/hw/cxgb3/iwch_provider.c  |  69 ------
 drivers/infiniband/hw/cxgb3/iwch_provider.h  |   4 -
 drivers/infiniband/hw/cxgb4/iw_cxgb4.h       |  11 -
 drivers/infiniband/hw/cxgb4/mem.c            | 248 ---------------------
 drivers/infiniband/hw/cxgb4/provider.c       |   2 -
 drivers/infiniband/hw/mthca/mthca_provider.c |  84 -------
 drivers/infiniband/hw/nes/nes_cm.c           |   7 +-
 drivers/infiniband/hw/nes/nes_verbs.c        |   3 +-
 drivers/infiniband/hw/nes/nes_verbs.h        |   5 +
 drivers/infiniband/hw/ocrdma/ocrdma_main.c   |   1 -
 drivers/infiniband/hw/ocrdma/ocrdma_verbs.c  | 163 --------------
 drivers/infiniband/hw/ocrdma/ocrdma_verbs.h  |   3 -
 drivers/infiniband/hw/qib/qib_mr.c           |  51 +----
 drivers/infiniband/hw/qib/qib_verbs.c        |   1 -
 drivers/infiniband/hw/qib/qib_verbs.h        |   4 -
 drivers/staging/rdma/amso1100/c2_provider.c  |   1 -
 drivers/staging/rdma/ehca/ehca_iverbs.h      |  11 -
 drivers/staging/rdma/ehca/ehca_main.c        |   2 -
 drivers/staging/rdma/ehca/ehca_mrmw.c        | 321 ---------------------------
 drivers/staging/rdma/ehca/ehca_mrmw.h        |   5 -
 drivers/staging/rdma/hfi1/mr.c               |  51 +----
 drivers/staging/rdma/hfi1/verbs.c            |   1 -
 drivers/staging/rdma/hfi1/verbs.h            |   4 -
 drivers/staging/rdma/ipath/ipath_mr.c        |  55 -----
 drivers/staging/rdma/ipath/ipath_verbs.c     |   1 -
 drivers/staging/rdma/ipath/ipath_verbs.h     |   4 -
 include/rdma/ib_verbs.h                      |  16 +-
 28 files changed, 15 insertions(+), 1144 deletions(-)

diff --git a/drivers/infiniband/hw/cxgb3/iwch_mem.c b/drivers/infiniband/hw/cxgb3/iwch_mem.c
index 5c36ee2..3a5e27d 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_mem.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_mem.c
@@ -75,37 +75,6 @@ int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php,
 	return ret;
 }
 
-int iwch_reregister_mem(struct iwch_dev *rhp, struct iwch_pd *php,
-					struct iwch_mr *mhp,
-					int shift,
-					int npages)
-{
-	u32 stag;
-	int ret;
-
-	/* We could support this... */
-	if (npages > mhp->attr.pbl_size)
-		return -ENOMEM;
-
-	stag = mhp->attr.stag;
-	if (cxio_reregister_phys_mem(&rhp->rdev,
-				   &stag, mhp->attr.pdid,
-				   mhp->attr.perms,
-				   mhp->attr.zbva,
-				   mhp->attr.va_fbo,
-				   mhp->attr.len,
-				   shift - 12,
-				   mhp->attr.pbl_size, mhp->attr.pbl_addr))
-		return -ENOMEM;
-
-	ret = iwch_finish_mem_reg(mhp, stag);
-	if (ret)
-		cxio_dereg_mem(&rhp->rdev, mhp->attr.stag, mhp->attr.pbl_size,
-		       mhp->attr.pbl_addr);
-
-	return ret;
-}
-
 int iwch_alloc_pbl(struct iwch_mr *mhp, int npages)
 {
 	mhp->attr.pbl_addr = cxio_hal_pblpool_alloc(&mhp->rhp->rdev,
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index 1567b5b..9576e15 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -556,73 +556,6 @@ err:
 
 }
 
-static int iwch_reregister_phys_mem(struct ib_mr *mr,
-				     int mr_rereg_mask,
-				     struct ib_pd *pd,
-	                             struct ib_phys_buf *buffer_list,
-	                             int num_phys_buf,
-	                             int acc, u64 * iova_start)
-{
-
-	struct iwch_mr mh, *mhp;
-	struct iwch_pd *php;
-	struct iwch_dev *rhp;
-	__be64 *page_list = NULL;
-	int shift = 0;
-	u64 total_size;
-	int npages = 0;
-	int ret;
-
-	PDBG("%s ib_mr %p ib_pd %p\n", __func__, mr, pd);
-
-	/* There can be no memory windows */
-	if (atomic_read(&mr->usecnt))
-		return -EINVAL;
-
-	mhp = to_iwch_mr(mr);
-	rhp = mhp->rhp;
-	php = to_iwch_pd(mr->pd);
-
-	/* make sure we are on the same adapter */
-	if (rhp != php->rhp)
-		return -EINVAL;
-
-	memcpy(&mh, mhp, sizeof *mhp);
-
-	if (mr_rereg_mask & IB_MR_REREG_PD)
-		php = to_iwch_pd(pd);
-	if (mr_rereg_mask & IB_MR_REREG_ACCESS)
-		mh.attr.perms = iwch_ib_to_tpt_access(acc);
-	if (mr_rereg_mask & IB_MR_REREG_TRANS) {
-		ret = build_phys_page_list(buffer_list, num_phys_buf,
-					   iova_start,
-					   &total_size, &npages,
-					   &shift, &page_list);
-		if (ret)
-			return ret;
-	}
-
-	ret = iwch_reregister_mem(rhp, php, &mh, shift, npages);
-	kfree(page_list);
-	if (ret) {
-		return ret;
-	}
-	if (mr_rereg_mask & IB_MR_REREG_PD)
-		mhp->attr.pdid = php->pdid;
-	if (mr_rereg_mask & IB_MR_REREG_ACCESS)
-		mhp->attr.perms = iwch_ib_to_tpt_access(acc);
-	if (mr_rereg_mask & IB_MR_REREG_TRANS) {
-		mhp->attr.zbva = 0;
-		mhp->attr.va_fbo = *iova_start;
-		mhp->attr.page_size = shift - 12;
-		mhp->attr.len = (u32) total_size;
-		mhp->attr.pbl_size = npages;
-	}
-
-	return 0;
-}
-
-
 static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 				      u64 virt, int acc, struct ib_udata *udata)
 {
@@ -1414,8 +1347,6 @@ int iwch_register_device(struct iwch_dev *dev)
 	dev->ibdev.resize_cq = iwch_resize_cq;
 	dev->ibdev.poll_cq = iwch_poll_cq;
 	dev->ibdev.get_dma_mr = iwch_get_dma_mr;
-	dev->ibdev.reg_phys_mr = iwch_register_phys_mem;
-	dev->ibdev.rereg_phys_mr = iwch_reregister_phys_mem;
 	dev->ibdev.reg_user_mr = iwch_reg_user_mr;
 	dev->ibdev.dereg_mr = iwch_dereg_mr;
 	dev->ibdev.alloc_mw = iwch_alloc_mw;
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.h b/drivers/infiniband/hw/cxgb3/iwch_provider.h
index 2ac85b8..f4fa6d6 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.h
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.h
@@ -341,10 +341,6 @@ void iwch_unregister_device(struct iwch_dev *dev);
 void stop_read_rep_timer(struct iwch_qp *qhp);
 int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php,
 		      struct iwch_mr *mhp, int shift);
-int iwch_reregister_mem(struct iwch_dev *rhp, struct iwch_pd *php,
-					struct iwch_mr *mhp,
-					int shift,
-					int npages);
 int iwch_alloc_pbl(struct iwch_mr *mhp, int npages);
 void iwch_free_pbl(struct iwch_mr *mhp);
 int iwch_write_pbl(struct iwch_mr *mhp, __be64 *pages, int npages, int offset);
diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
index 00e55fa..dd00cf2 100644
--- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
+++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
@@ -968,17 +968,6 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start,
 					   u64 length, u64 virt, int acc,
 					   struct ib_udata *udata);
 struct ib_mr *c4iw_get_dma_mr(struct ib_pd *pd, int acc);
-struct ib_mr *c4iw_register_phys_mem(struct ib_pd *pd,
-					struct ib_phys_buf *buffer_list,
-					int num_phys_buf,
-					int acc,
-					u64 *iova_start);
-int c4iw_reregister_phys_mem(struct ib_mr *mr,
-				     int mr_rereg_mask,
-				     struct ib_pd *pd,
-				     struct ib_phys_buf *buffer_list,
-				     int num_phys_buf,
-				     int acc, u64 *iova_start);
 int c4iw_dereg_mr(struct ib_mr *ib_mr);
 int c4iw_destroy_cq(struct ib_cq *ib_cq);
 struct ib_cq *c4iw_create_cq(struct ib_device *ibdev,
diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
index e1629ab..1eb833a 100644
--- a/drivers/infiniband/hw/cxgb4/mem.c
+++ b/drivers/infiniband/hw/cxgb4/mem.c
@@ -392,32 +392,6 @@ static int register_mem(struct c4iw_dev *rhp, struct c4iw_pd *php,
 	return ret;
 }
 
-static int reregister_mem(struct c4iw_dev *rhp, struct c4iw_pd *php,
-			  struct c4iw_mr *mhp, int shift, int npages)
-{
-	u32 stag;
-	int ret;
-
-	if (npages > mhp->attr.pbl_size)
-		return -ENOMEM;
-
-	stag = mhp->attr.stag;
-	ret = write_tpt_entry(&rhp->rdev, 0, &stag, 1, mhp->attr.pdid,
-			      FW_RI_STAG_NSMR, mhp->attr.perms,
-			      mhp->attr.mw_bind_enable, mhp->attr.zbva,
-			      mhp->attr.va_fbo, mhp->attr.len, shift - 12,
-			      mhp->attr.pbl_size, mhp->attr.pbl_addr);
-	if (ret)
-		return ret;
-
-	ret = finish_mem_reg(mhp, stag);
-	if (ret)
-		dereg_mem(&rhp->rdev, mhp->attr.stag, mhp->attr.pbl_size,
-		       mhp->attr.pbl_addr);
-
-	return ret;
-}
-
 static int alloc_pbl(struct c4iw_mr *mhp, int npages)
 {
 	mhp->attr.pbl_addr = c4iw_pblpool_alloc(&mhp->rhp->rdev,
@@ -431,228 +405,6 @@ static int alloc_pbl(struct c4iw_mr *mhp, int npages)
 	return 0;
 }
 
-static int build_phys_page_list(struct ib_phys_buf *buffer_list,
-				int num_phys_buf, u64 *iova_start,
-				u64 *total_size, int *npages,
-				int *shift, __be64 **page_list)
-{
-	u64 mask;
-	int i, j, n;
-
-	mask = 0;
-	*total_size = 0;
-	for (i = 0; i < num_phys_buf; ++i) {
-		if (i != 0 && buffer_list[i].addr & ~PAGE_MASK)
-			return -EINVAL;
-		if (i != 0 && i != num_phys_buf - 1 &&
-		    (buffer_list[i].size & ~PAGE_MASK))
-			return -EINVAL;
-		*total_size += buffer_list[i].size;
-		if (i > 0)
-			mask |= buffer_list[i].addr;
-		else
-			mask |= buffer_list[i].addr & PAGE_MASK;
-		if (i != num_phys_buf - 1)
-			mask |= buffer_list[i].addr + buffer_list[i].size;
-		else
-			mask |= (buffer_list[i].addr + buffer_list[i].size +
-				PAGE_SIZE - 1) & PAGE_MASK;
-	}
-
-	if (*total_size > 0xFFFFFFFFULL)
-		return -ENOMEM;
-
-	/* Find largest page shift we can use to cover buffers */
-	for (*shift = PAGE_SHIFT; *shift < 27; ++(*shift))
-		if ((1ULL << *shift) & mask)
-			break;
-
-	buffer_list[0].size += buffer_list[0].addr & ((1ULL << *shift) - 1);
-	buffer_list[0].addr &= ~0ull << *shift;
-
-	*npages = 0;
-	for (i = 0; i < num_phys_buf; ++i)
-		*npages += (buffer_list[i].size +
-			(1ULL << *shift) - 1) >> *shift;
-
-	if (!*npages)
-		return -EINVAL;
-
-	*page_list = kmalloc(sizeof(u64) * *npages, GFP_KERNEL);
-	if (!*page_list)
-		return -ENOMEM;
-
-	n = 0;
-	for (i = 0; i < num_phys_buf; ++i)
-		for (j = 0;
-		     j < (buffer_list[i].size + (1ULL << *shift) - 1) >> *shift;
-		     ++j)
-			(*page_list)[n++] = cpu_to_be64(buffer_list[i].addr +
-			    ((u64) j << *shift));
-
-	PDBG("%s va 0x%llx mask 0x%llx shift %d len %lld pbl_size %d\n",
-	     __func__, (unsigned long long)*iova_start,
-	     (unsigned long long)mask, *shift, (unsigned long long)*total_size,
-	     *npages);
-
-	return 0;
-
-}
-
-int c4iw_reregister_phys_mem(struct ib_mr *mr, int mr_rereg_mask,
-			     struct ib_pd *pd, struct ib_phys_buf *buffer_list,
-			     int num_phys_buf, int acc, u64 *iova_start)
-{
-
-	struct c4iw_mr mh, *mhp;
-	struct c4iw_pd *php;
-	struct c4iw_dev *rhp;
-	__be64 *page_list = NULL;
-	int shift = 0;
-	u64 total_size;
-	int npages;
-	int ret;
-
-	PDBG("%s ib_mr %p ib_pd %p\n", __func__, mr, pd);
-
-	/* There can be no memory windows */
-	if (atomic_read(&mr->usecnt))
-		return -EINVAL;
-
-	mhp = to_c4iw_mr(mr);
-	rhp = mhp->rhp;
-	php = to_c4iw_pd(mr->pd);
-
-	/* make sure we are on the same adapter */
-	if (rhp != php->rhp)
-		return -EINVAL;
-
-	memcpy(&mh, mhp, sizeof *mhp);
-
-	if (mr_rereg_mask & IB_MR_REREG_PD)
-		php = to_c4iw_pd(pd);
-	if (mr_rereg_mask & IB_MR_REREG_ACCESS) {
-		mh.attr.perms = c4iw_ib_to_tpt_access(acc);
-		mh.attr.mw_bind_enable = (acc & IB_ACCESS_MW_BIND) ==
-					 IB_ACCESS_MW_BIND;
-	}
-	if (mr_rereg_mask & IB_MR_REREG_TRANS) {
-		ret = build_phys_page_list(buffer_list, num_phys_buf,
-						iova_start,
-						&total_size, &npages,
-						&shift, &page_list);
-		if (ret)
-			return ret;
-	}
-
-	if (mr_exceeds_hw_limits(rhp, total_size)) {
-		kfree(page_list);
-		return -EINVAL;
-	}
-
-	ret = reregister_mem(rhp, php, &mh, shift, npages);
-	kfree(page_list);
-	if (ret)
-		return ret;
-	if (mr_rereg_mask & IB_MR_REREG_PD)
-		mhp->attr.pdid = php->pdid;
-	if (mr_rereg_mask & IB_MR_REREG_ACCESS)
-		mhp->attr.perms = c4iw_ib_to_tpt_access(acc);
-	if (mr_rereg_mask & IB_MR_REREG_TRANS) {
-		mhp->attr.zbva = 0;
-		mhp->attr.va_fbo = *iova_start;
-		mhp->attr.page_size = shift - 12;
-		mhp->attr.len = (u32) total_size;
-		mhp->attr.pbl_size = npages;
-	}
-
-	return 0;
-}
-
-struct ib_mr *c4iw_register_phys_mem(struct ib_pd *pd,
-				     struct ib_phys_buf *buffer_list,
-				     int num_phys_buf, int acc, u64 *iova_start)
-{
-	__be64 *page_list;
-	int shift;
-	u64 total_size;
-	int npages;
-	struct c4iw_dev *rhp;
-	struct c4iw_pd *php;
-	struct c4iw_mr *mhp;
-	int ret;
-
-	PDBG("%s ib_pd %p\n", __func__, pd);
-	php = to_c4iw_pd(pd);
-	rhp = php->rhp;
-
-	mhp = kzalloc(sizeof(*mhp), GFP_KERNEL);
-	if (!mhp)
-		return ERR_PTR(-ENOMEM);
-
-	mhp->rhp = rhp;
-
-	/* First check that we have enough alignment */
-	if ((*iova_start & ~PAGE_MASK) != (buffer_list[0].addr & ~PAGE_MASK)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	if (num_phys_buf > 1 &&
-	    ((buffer_list[0].addr + buffer_list[0].size) & ~PAGE_MASK)) {
-		ret = -EINVAL;
-		goto err;
-	}
-
-	ret = build_phys_page_list(buffer_list, num_phys_buf, iova_start,
-					&total_size, &npages, &shift,
-					&page_list);
-	if (ret)
-		goto err;
-
-	if (mr_exceeds_hw_limits(rhp, total_size)) {
-		kfree(page_list);
-		ret = -EINVAL;
-		goto err;
-	}
-
-	ret = alloc_pbl(mhp, npages);
-	if (ret) {
-		kfree(page_list);
-		goto err;
-	}
-
-	ret = write_pbl(&mhp->rhp->rdev, page_list, mhp->attr.pbl_addr,
-			     npages);
-	kfree(page_list);
-	if (ret)
-		goto err_pbl;
-
-	mhp->attr.pdid = php->pdid;
-	mhp->attr.zbva = 0;
-
-	mhp->attr.perms = c4iw_ib_to_tpt_access(acc);
-	mhp->attr.va_fbo = *iova_start;
-	mhp->attr.page_size = shift - 12;
-
-	mhp->attr.len = (u32) total_size;
-	mhp->attr.pbl_size = npages;
-	ret = register_mem(rhp, php, mhp, shift);
-	if (ret)
-		goto err_pbl;
-
-	return &mhp->ibmr;
-
-err_pbl:
-	c4iw_pblpool_free(&mhp->rhp->rdev, mhp->attr.pbl_addr,
-			      mhp->attr.pbl_size << 3);
-
-err:
-	kfree(mhp);
-	return ERR_PTR(ret);
-
-}
-
 struct ib_mr *c4iw_get_dma_mr(struct ib_pd *pd, int acc)
 {
 	struct c4iw_dev *rhp;
diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
index b7703bc..186319e 100644
--- a/drivers/infiniband/hw/cxgb4/provider.c
+++ b/drivers/infiniband/hw/cxgb4/provider.c
@@ -509,8 +509,6 @@ int c4iw_register_device(struct c4iw_dev *dev)
 	dev->ibdev.resize_cq = c4iw_resize_cq;
 	dev->ibdev.poll_cq = c4iw_poll_cq;
 	dev->ibdev.get_dma_mr = c4iw_get_dma_mr;
-	dev->ibdev.reg_phys_mr = c4iw_register_phys_mem;
-	dev->ibdev.rereg_phys_mr = c4iw_reregister_phys_mem;
 	dev->ibdev.reg_user_mr = c4iw_reg_user_mr;
 	dev->ibdev.dereg_mr = c4iw_dereg_mr;
 	dev->ibdev.alloc_mw = c4iw_alloc_mw;
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
index 28d7a8b..6d0a1db 100644
--- a/drivers/infiniband/hw/mthca/mthca_provider.c
+++ b/drivers/infiniband/hw/mthca/mthca_provider.c
@@ -892,89 +892,6 @@ static struct ib_mr *mthca_get_dma_mr(struct ib_pd *pd, int acc)
 	return &mr->ibmr;
 }
 
-static struct ib_mr *mthca_reg_phys_mr(struct ib_pd       *pd,
-				       struct ib_phys_buf *buffer_list,
-				       int                 num_phys_buf,
-				       int                 acc,
-				       u64                *iova_start)
-{
-	struct mthca_mr *mr;
-	u64 *page_list;
-	u64 total_size;
-	unsigned long mask;
-	int shift;
-	int npages;
-	int err;
-	int i, j, n;
-
-	mask = buffer_list[0].addr ^ *iova_start;
-	total_size = 0;
-	for (i = 0; i < num_phys_buf; ++i) {
-		if (i != 0)
-			mask |= buffer_list[i].addr;
-		if (i != num_phys_buf - 1)
-			mask |= buffer_list[i].addr + buffer_list[i].size;
-
-		total_size += buffer_list[i].size;
-	}
-
-	if (mask & ~PAGE_MASK)
-		return ERR_PTR(-EINVAL);
-
-	shift = __ffs(mask | 1 << 31);
-
-	buffer_list[0].size += buffer_list[0].addr & ((1ULL << shift) - 1);
-	buffer_list[0].addr &= ~0ull << shift;
-
-	mr = kmalloc(sizeof *mr, GFP_KERNEL);
-	if (!mr)
-		return ERR_PTR(-ENOMEM);
-
-	npages = 0;
-	for (i = 0; i < num_phys_buf; ++i)
-		npages += (buffer_list[i].size + (1ULL << shift) - 1) >> shift;
-
-	if (!npages)
-		return &mr->ibmr;
-
-	page_list = kmalloc(npages * sizeof *page_list, GFP_KERNEL);
-	if (!page_list) {
-		kfree(mr);
-		return ERR_PTR(-ENOMEM);
-	}
-
-	n = 0;
-	for (i = 0; i < num_phys_buf; ++i)
-		for (j = 0;
-		     j < (buffer_list[i].size + (1ULL << shift) - 1) >> shift;
-		     ++j)
-			page_list[n++] = buffer_list[i].addr + ((u64) j << shift);
-
-	mthca_dbg(to_mdev(pd->device), "Registering memory at %llx (iova %llx) "
-		  "in PD %x; shift %d, npages %d.\n",
-		  (unsigned long long) buffer_list[0].addr,
-		  (unsigned long long) *iova_start,
-		  to_mpd(pd)->pd_num,
-		  shift, npages);
-
-	err = mthca_mr_alloc_phys(to_mdev(pd->device),
-				  to_mpd(pd)->pd_num,
-				  page_list, shift, npages,
-				  *iova_start, total_size,
-				  convert_access(acc), mr);
-
-	if (err) {
-		kfree(page_list);
-		kfree(mr);
-		return ERR_PTR(err);
-	}
-
-	kfree(page_list);
-	mr->umem = NULL;
-
-	return &mr->ibmr;
-}
-
 static struct ib_mr *mthca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 				       u64 virt, int acc, struct ib_udata *udata)
 {
@@ -1339,7 +1256,6 @@ int mthca_register_device(struct mthca_dev *dev)
 	dev->ib_dev.destroy_cq           = mthca_destroy_cq;
 	dev->ib_dev.poll_cq              = mthca_poll_cq;
 	dev->ib_dev.get_dma_mr           = mthca_get_dma_mr;
-	dev->ib_dev.reg_phys_mr          = mthca_reg_phys_mr;
 	dev->ib_dev.reg_user_mr          = mthca_reg_user_mr;
 	dev->ib_dev.dereg_mr             = mthca_dereg_mr;
 	dev->ib_dev.get_port_immutable   = mthca_port_immutable;
diff --git a/drivers/infiniband/hw/nes/nes_cm.c b/drivers/infiniband/hw/nes/nes_cm.c
index 8a3ad17..242c87d 100644
--- a/drivers/infiniband/hw/nes/nes_cm.c
+++ b/drivers/infiniband/hw/nes/nes_cm.c
@@ -3319,10 +3319,9 @@ int nes_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 		ibphysbuf.addr = nesqp->ietf_frame_pbase + mpa_frame_offset;
 		ibphysbuf.size = buff_len;
 		tagged_offset = (u64)(unsigned long)*start_buff;
-		ibmr = nesibdev->ibdev.reg_phys_mr((struct ib_pd *)nespd,
-						   &ibphysbuf, 1,
-						   IB_ACCESS_LOCAL_WRITE,
-						   &tagged_offset);
+		ibmr = nes_reg_phys_mr(&nespd->ibpd, &ibphysbuf, 1,
+					IB_ACCESS_LOCAL_WRITE,
+					&tagged_offset);
 		if (!ibmr) {
 			nes_debug(NES_DBG_CM, "Unable to register memory region"
 				  "for lSMM for cm_node = %p \n",
diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c
index 2bad036..453ebc2 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.c
+++ b/drivers/infiniband/hw/nes/nes_verbs.c
@@ -2019,7 +2019,7 @@ static int nes_reg_mr(struct nes_device *nesdev, struct nes_pd *nespd,
 /**
  * nes_reg_phys_mr
  */
-static struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
+struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
 		struct ib_phys_buf *buffer_list, int num_phys_buf, int acc,
 		u64 * iova_start)
 {
@@ -3832,7 +3832,6 @@ struct nes_ib_device *nes_init_ofa_device(struct net_device *netdev)
 	nesibdev->ibdev.destroy_cq = nes_destroy_cq;
 	nesibdev->ibdev.poll_cq = nes_poll_cq;
 	nesibdev->ibdev.get_dma_mr = nes_get_dma_mr;
-	nesibdev->ibdev.reg_phys_mr = nes_reg_phys_mr;
 	nesibdev->ibdev.reg_user_mr = nes_reg_user_mr;
 	nesibdev->ibdev.dereg_mr = nes_dereg_mr;
 	nesibdev->ibdev.alloc_mw = nes_alloc_mw;
diff --git a/drivers/infiniband/hw/nes/nes_verbs.h b/drivers/infiniband/hw/nes/nes_verbs.h
index a204b67..38e38cf 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.h
+++ b/drivers/infiniband/hw/nes/nes_verbs.h
@@ -190,4 +190,9 @@ struct nes_qp {
 	u8                    pau_state;
 	__u64                 nesuqp_addr;
 };
+
+struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
+		struct ib_phys_buf *buffer_list, int num_phys_buf, int acc,
+		u64 * iova_start);
+
 #endif			/* NES_VERBS_H */
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_main.c b/drivers/infiniband/hw/ocrdma/ocrdma_main.c
index 963be66..f91131f 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_main.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_main.c
@@ -174,7 +174,6 @@ static int ocrdma_register_device(struct ocrdma_dev *dev)
 	dev->ibdev.req_notify_cq = ocrdma_arm_cq;
 
 	dev->ibdev.get_dma_mr = ocrdma_get_dma_mr;
-	dev->ibdev.reg_phys_mr = ocrdma_reg_kernel_mr;
 	dev->ibdev.dereg_mr = ocrdma_dereg_mr;
 	dev->ibdev.reg_user_mr = ocrdma_reg_user_mr;
 
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index 74cf67a..3cffab7 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -3017,169 +3017,6 @@ pl_err:
 	return ERR_PTR(-ENOMEM);
 }
 
-#define MAX_KERNEL_PBE_SIZE 65536
-static inline int count_kernel_pbes(struct ib_phys_buf *buf_list,
-				    int buf_cnt, u32 *pbe_size)
-{
-	u64 total_size = 0;
-	u64 buf_size = 0;
-	int i;
-	*pbe_size = roundup(buf_list[0].size, PAGE_SIZE);
-	*pbe_size = roundup_pow_of_two(*pbe_size);
-
-	/* find the smallest PBE size that we can have */
-	for (i = 0; i < buf_cnt; i++) {
-		/* first addr may not be page aligned, so ignore checking */
-		if ((i != 0) && ((buf_list[i].addr & ~PAGE_MASK) ||
-				 (buf_list[i].size & ~PAGE_MASK))) {
-			return 0;
-		}
-
-		/* if configured PBE size is greater then the chosen one,
-		 * reduce the PBE size.
-		 */
-		buf_size = roundup(buf_list[i].size, PAGE_SIZE);
-		/* pbe_size has to be even multiple of 4K 1,2,4,8...*/
-		buf_size = roundup_pow_of_two(buf_size);
-		if (*pbe_size > buf_size)
-			*pbe_size = buf_size;
-
-		total_size += buf_size;
-	}
-	*pbe_size = *pbe_size > MAX_KERNEL_PBE_SIZE ?
-	    (MAX_KERNEL_PBE_SIZE) : (*pbe_size);
-
-	/* num_pbes = total_size / (*pbe_size);  this is implemented below. */
-
-	return total_size >> ilog2(*pbe_size);
-}
-
-static void build_kernel_pbes(struct ib_phys_buf *buf_list, int ib_buf_cnt,
-			      u32 pbe_size, struct ocrdma_pbl *pbl_tbl,
-			      struct ocrdma_hw_mr *hwmr)
-{
-	int i;
-	int idx;
-	int pbes_per_buf = 0;
-	u64 buf_addr = 0;
-	int num_pbes;
-	struct ocrdma_pbe *pbe;
-	int total_num_pbes = 0;
-
-	if (!hwmr->num_pbes)
-		return;
-
-	pbe = (struct ocrdma_pbe *)pbl_tbl->va;
-	num_pbes = 0;
-
-	/* go through the OS phy regions & fill hw pbe entries into pbls. */
-	for (i = 0; i < ib_buf_cnt; i++) {
-		buf_addr = buf_list[i].addr;
-		pbes_per_buf =
-		    roundup_pow_of_two(roundup(buf_list[i].size, PAGE_SIZE)) /
-		    pbe_size;
-		hwmr->len += buf_list[i].size;
-		/* number of pbes can be more for one OS buf, when
-		 * buffers are of different sizes.
-		 * split the ib_buf to one or more pbes.
-		 */
-		for (idx = 0; idx < pbes_per_buf; idx++) {
-			/* we program always page aligned addresses,
-			 * first unaligned address is taken care by fbo.
-			 */
-			if (i == 0) {
-				/* for non zero fbo, assign the
-				 * start of the page.
-				 */
-				pbe->pa_lo =
-				    cpu_to_le32((u32) (buf_addr & PAGE_MASK));
-				pbe->pa_hi =
-				    cpu_to_le32((u32) upper_32_bits(buf_addr));
-			} else {
-				pbe->pa_lo =
-				    cpu_to_le32((u32) (buf_addr & 0xffffffff));
-				pbe->pa_hi =
-				    cpu_to_le32((u32) upper_32_bits(buf_addr));
-			}
-			buf_addr += pbe_size;
-			num_pbes += 1;
-			total_num_pbes += 1;
-			pbe++;
-
-			if (total_num_pbes == hwmr->num_pbes)
-				goto mr_tbl_done;
-			/* if the pbl is full storing the pbes,
-			 * move to next pbl.
-			 */
-			if (num_pbes == (hwmr->pbl_size/sizeof(u64))) {
-				pbl_tbl++;
-				pbe = (struct ocrdma_pbe *)pbl_tbl->va;
-				num_pbes = 0;
-			}
-		}
-	}
-mr_tbl_done:
-	return;
-}
-
-struct ib_mr *ocrdma_reg_kernel_mr(struct ib_pd *ibpd,
-				   struct ib_phys_buf *buf_list,
-				   int buf_cnt, int acc, u64 *iova_start)
-{
-	int status = -ENOMEM;
-	struct ocrdma_mr *mr;
-	struct ocrdma_pd *pd = get_ocrdma_pd(ibpd);
-	struct ocrdma_dev *dev = get_ocrdma_dev(ibpd->device);
-	u32 num_pbes;
-	u32 pbe_size = 0;
-
-	if ((acc & IB_ACCESS_REMOTE_WRITE) && !(acc & IB_ACCESS_LOCAL_WRITE))
-		return ERR_PTR(-EINVAL);
-
-	mr = kzalloc(sizeof(*mr), GFP_KERNEL);
-	if (!mr)
-		return ERR_PTR(status);
-
-	num_pbes = count_kernel_pbes(buf_list, buf_cnt, &pbe_size);
-	if (num_pbes == 0) {
-		status = -EINVAL;
-		goto pbl_err;
-	}
-	status = ocrdma_get_pbl_info(dev, mr, num_pbes);
-	if (status)
-		goto pbl_err;
-
-	mr->hwmr.pbe_size = pbe_size;
-	mr->hwmr.fbo = *iova_start - (buf_list[0].addr & PAGE_MASK);
-	mr->hwmr.va = *iova_start;
-	mr->hwmr.local_rd = 1;
-	mr->hwmr.remote_wr = (acc & IB_ACCESS_REMOTE_WRITE) ? 1 : 0;
-	mr->hwmr.remote_rd = (acc & IB_ACCESS_REMOTE_READ) ? 1 : 0;
-	mr->hwmr.local_wr = (acc & IB_ACCESS_LOCAL_WRITE) ? 1 : 0;
-	mr->hwmr.remote_atomic = (acc & IB_ACCESS_REMOTE_ATOMIC) ? 1 : 0;
-	mr->hwmr.mw_bind = (acc & IB_ACCESS_MW_BIND) ? 1 : 0;
-
-	status = ocrdma_build_pbl_tbl(dev, &mr->hwmr);
-	if (status)
-		goto pbl_err;
-	build_kernel_pbes(buf_list, buf_cnt, pbe_size, mr->hwmr.pbl_table,
-			  &mr->hwmr);
-	status = ocrdma_reg_mr(dev, &mr->hwmr, pd->id, acc);
-	if (status)
-		goto mbx_err;
-
-	mr->ibmr.lkey = mr->hwmr.lkey;
-	if (mr->hwmr.remote_wr || mr->hwmr.remote_rd)
-		mr->ibmr.rkey = mr->hwmr.lkey;
-	return &mr->ibmr;
-
-mbx_err:
-	ocrdma_free_mr_pbl_tbl(dev, &mr->hwmr);
-pbl_err:
-	kfree(mr);
-	return ERR_PTR(status);
-}
-
 static int ocrdma_set_page(struct ib_mr *ibmr, u64 addr)
 {
 	struct ocrdma_mr *mr = get_ocrdma_mr(ibmr);
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
index f2ce048..82f476f 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
@@ -115,9 +115,6 @@ int ocrdma_post_srq_recv(struct ib_srq *, struct ib_recv_wr *,
 
 int ocrdma_dereg_mr(struct ib_mr *);
 struct ib_mr *ocrdma_get_dma_mr(struct ib_pd *, int acc);
-struct ib_mr *ocrdma_reg_kernel_mr(struct ib_pd *,
-				   struct ib_phys_buf *buffer_list,
-				   int num_phys_buf, int acc, u64 *iova_start);
 struct ib_mr *ocrdma_reg_user_mr(struct ib_pd *, u64 start, u64 length,
 				 u64 virt, int acc, struct ib_udata *);
 struct ib_mr *ocrdma_alloc_mr(struct ib_pd *pd,
diff --git a/drivers/infiniband/hw/qib/qib_mr.c b/drivers/infiniband/hw/qib/qib_mr.c
index 294f5c7..5f53304 100644
--- a/drivers/infiniband/hw/qib/qib_mr.c
+++ b/drivers/infiniband/hw/qib/qib_mr.c
@@ -150,10 +150,7 @@ static struct qib_mr *alloc_mr(int count, struct ib_pd *pd)
 	rval = init_qib_mregion(&mr->mr, pd, count);
 	if (rval)
 		goto bail;
-	/*
-	 * ib_reg_phys_mr() will initialize mr->ibmr except for
-	 * lkey and rkey.
-	 */
+
 	rval = qib_alloc_lkey(&mr->mr, 0);
 	if (rval)
 		goto bail_mregion;
@@ -171,52 +168,6 @@ bail:
 }
 
 /**
- * qib_reg_phys_mr - register a physical memory region
- * @pd: protection domain for this memory region
- * @buffer_list: pointer to the list of physical buffers to register
- * @num_phys_buf: the number of physical buffers to register
- * @iova_start: the starting address passed over IB which maps to this MR
- *
- * Returns the memory region on success, otherwise returns an errno.
- */
-struct ib_mr *qib_reg_phys_mr(struct ib_pd *pd,
-			      struct ib_phys_buf *buffer_list,
-			      int num_phys_buf, int acc, u64 *iova_start)
-{
-	struct qib_mr *mr;
-	int n, m, i;
-	struct ib_mr *ret;
-
-	mr = alloc_mr(num_phys_buf, pd);
-	if (IS_ERR(mr)) {
-		ret = (struct ib_mr *)mr;
-		goto bail;
-	}
-
-	mr->mr.user_base = *iova_start;
-	mr->mr.iova = *iova_start;
-	mr->mr.access_flags = acc;
-
-	m = 0;
-	n = 0;
-	for (i = 0; i < num_phys_buf; i++) {
-		mr->mr.map[m]->segs[n].vaddr = (void *) buffer_list[i].addr;
-		mr->mr.map[m]->segs[n].length = buffer_list[i].size;
-		mr->mr.length += buffer_list[i].size;
-		n++;
-		if (n == QIB_SEGSZ) {
-			m++;
-			n = 0;
-		}
-	}
-
-	ret = &mr->ibmr;
-
-bail:
-	return ret;
-}
-
-/**
  * qib_reg_user_mr - register a userspace memory region
  * @pd: protection domain for this memory region
  * @start: starting userspace address
diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
index 9e1af0b..e56bd65 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.c
+++ b/drivers/infiniband/hw/qib/qib_verbs.c
@@ -2206,7 +2206,6 @@ int qib_register_ib_device(struct qib_devdata *dd)
 	ibdev->poll_cq = qib_poll_cq;
 	ibdev->req_notify_cq = qib_req_notify_cq;
 	ibdev->get_dma_mr = qib_get_dma_mr;
-	ibdev->reg_phys_mr = qib_reg_phys_mr;
 	ibdev->reg_user_mr = qib_reg_user_mr;
 	ibdev->dereg_mr = qib_dereg_mr;
 	ibdev->alloc_mr = qib_alloc_mr;
diff --git a/drivers/infiniband/hw/qib/qib_verbs.h b/drivers/infiniband/hw/qib/qib_verbs.h
index 2baf5ad..2d538c8c 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.h
+++ b/drivers/infiniband/hw/qib/qib_verbs.h
@@ -1032,10 +1032,6 @@ int qib_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata);
 
 struct ib_mr *qib_get_dma_mr(struct ib_pd *pd, int acc);
 
-struct ib_mr *qib_reg_phys_mr(struct ib_pd *pd,
-			      struct ib_phys_buf *buffer_list,
-			      int num_phys_buf, int acc, u64 *iova_start);
-
 struct ib_mr *qib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 			      u64 virt_addr, int mr_access_flags,
 			      struct ib_udata *udata);
diff --git a/drivers/staging/rdma/amso1100/c2_provider.c b/drivers/staging/rdma/amso1100/c2_provider.c
index ee2ff87..6c3e9cc 100644
--- a/drivers/staging/rdma/amso1100/c2_provider.c
+++ b/drivers/staging/rdma/amso1100/c2_provider.c
@@ -831,7 +831,6 @@ int c2_register_device(struct c2_dev *dev)
 	dev->ibdev.destroy_cq = c2_destroy_cq;
 	dev->ibdev.poll_cq = c2_poll_cq;
 	dev->ibdev.get_dma_mr = c2_get_dma_mr;
-	dev->ibdev.reg_phys_mr = c2_reg_phys_mr;
 	dev->ibdev.reg_user_mr = c2_reg_user_mr;
 	dev->ibdev.dereg_mr = c2_dereg_mr;
 	dev->ibdev.get_port_immutable = c2_port_immutable;
diff --git a/drivers/staging/rdma/ehca/ehca_iverbs.h b/drivers/staging/rdma/ehca/ehca_iverbs.h
index 4a45ca3..219f635 100644
--- a/drivers/staging/rdma/ehca/ehca_iverbs.h
+++ b/drivers/staging/rdma/ehca/ehca_iverbs.h
@@ -79,21 +79,10 @@ int ehca_destroy_ah(struct ib_ah *ah);
 
 struct ib_mr *ehca_get_dma_mr(struct ib_pd *pd, int mr_access_flags);
 
-struct ib_mr *ehca_reg_phys_mr(struct ib_pd *pd,
-			       struct ib_phys_buf *phys_buf_array,
-			       int num_phys_buf,
-			       int mr_access_flags, u64 *iova_start);
-
 struct ib_mr *ehca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 			       u64 virt, int mr_access_flags,
 			       struct ib_udata *udata);
 
-int ehca_rereg_phys_mr(struct ib_mr *mr,
-		       int mr_rereg_mask,
-		       struct ib_pd *pd,
-		       struct ib_phys_buf *phys_buf_array,
-		       int num_phys_buf, int mr_access_flags, u64 *iova_start);
-
 int ehca_dereg_mr(struct ib_mr *mr);
 
 struct ib_mw *ehca_alloc_mw(struct ib_pd *pd, enum ib_mw_type type);
diff --git a/drivers/staging/rdma/ehca/ehca_main.c b/drivers/staging/rdma/ehca/ehca_main.c
index 0be7959..ba023426 100644
--- a/drivers/staging/rdma/ehca/ehca_main.c
+++ b/drivers/staging/rdma/ehca/ehca_main.c
@@ -511,10 +511,8 @@ static int ehca_init_device(struct ehca_shca *shca)
 	shca->ib_device.req_notify_cq	    = ehca_req_notify_cq;
 	/* shca->ib_device.req_ncomp_notif  = ehca_req_ncomp_notif; */
 	shca->ib_device.get_dma_mr	    = ehca_get_dma_mr;
-	shca->ib_device.reg_phys_mr	    = ehca_reg_phys_mr;
 	shca->ib_device.reg_user_mr	    = ehca_reg_user_mr;
 	shca->ib_device.dereg_mr	    = ehca_dereg_mr;
-	shca->ib_device.rereg_phys_mr	    = ehca_rereg_phys_mr;
 	shca->ib_device.alloc_mw	    = ehca_alloc_mw;
 	shca->ib_device.bind_mw		    = ehca_bind_mw;
 	shca->ib_device.dealloc_mw	    = ehca_dealloc_mw;
diff --git a/drivers/staging/rdma/ehca/ehca_mrmw.c b/drivers/staging/rdma/ehca/ehca_mrmw.c
index eb274c1..1c1a8dd 100644
--- a/drivers/staging/rdma/ehca/ehca_mrmw.c
+++ b/drivers/staging/rdma/ehca/ehca_mrmw.c
@@ -196,120 +196,6 @@ get_dma_mr_exit0:
 
 /*----------------------------------------------------------------------*/
 
-struct ib_mr *ehca_reg_phys_mr(struct ib_pd *pd,
-			       struct ib_phys_buf *phys_buf_array,
-			       int num_phys_buf,
-			       int mr_access_flags,
-			       u64 *iova_start)
-{
-	struct ib_mr *ib_mr;
-	int ret;
-	struct ehca_mr *e_mr;
-	struct ehca_shca *shca =
-		container_of(pd->device, struct ehca_shca, ib_device);
-	struct ehca_pd *e_pd = container_of(pd, struct ehca_pd, ib_pd);
-
-	u64 size;
-
-	if ((num_phys_buf <= 0) || !phys_buf_array) {
-		ehca_err(pd->device, "bad input values: num_phys_buf=%x "
-			 "phys_buf_array=%p", num_phys_buf, phys_buf_array);
-		ib_mr = ERR_PTR(-EINVAL);
-		goto reg_phys_mr_exit0;
-	}
-	if (((mr_access_flags & IB_ACCESS_REMOTE_WRITE) &&
-	     !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)) ||
-	    ((mr_access_flags & IB_ACCESS_REMOTE_ATOMIC) &&
-	     !(mr_access_flags & IB_ACCESS_LOCAL_WRITE))) {
-		/*
-		 * Remote Write Access requires Local Write Access
-		 * Remote Atomic Access requires Local Write Access
-		 */
-		ehca_err(pd->device, "bad input values: mr_access_flags=%x",
-			 mr_access_flags);
-		ib_mr = ERR_PTR(-EINVAL);
-		goto reg_phys_mr_exit0;
-	}
-
-	/* check physical buffer list and calculate size */
-	ret = ehca_mr_chk_buf_and_calc_size(phys_buf_array, num_phys_buf,
-					    iova_start, &size);
-	if (ret) {
-		ib_mr = ERR_PTR(ret);
-		goto reg_phys_mr_exit0;
-	}
-	if ((size == 0) ||
-	    (((u64)iova_start + size) < (u64)iova_start)) {
-		ehca_err(pd->device, "bad input values: size=%llx iova_start=%p",
-			 size, iova_start);
-		ib_mr = ERR_PTR(-EINVAL);
-		goto reg_phys_mr_exit0;
-	}
-
-	e_mr = ehca_mr_new();
-	if (!e_mr) {
-		ehca_err(pd->device, "out of memory");
-		ib_mr = ERR_PTR(-ENOMEM);
-		goto reg_phys_mr_exit0;
-	}
-
-	/* register MR on HCA */
-	if (ehca_mr_is_maxmr(size, iova_start)) {
-		e_mr->flags |= EHCA_MR_FLAG_MAXMR;
-		ret = ehca_reg_maxmr(shca, e_mr, iova_start, mr_access_flags,
-				     e_pd, &e_mr->ib.ib_mr.lkey,
-				     &e_mr->ib.ib_mr.rkey);
-		if (ret) {
-			ib_mr = ERR_PTR(ret);
-			goto reg_phys_mr_exit1;
-		}
-	} else {
-		struct ehca_mr_pginfo pginfo;
-		u32 num_kpages;
-		u32 num_hwpages;
-		u64 hw_pgsize;
-
-		num_kpages = NUM_CHUNKS(((u64)iova_start % PAGE_SIZE) + size,
-					PAGE_SIZE);
-		/* for kernel space we try most possible pgsize */
-		hw_pgsize = ehca_get_max_hwpage_size(shca);
-		num_hwpages = NUM_CHUNKS(((u64)iova_start % hw_pgsize) + size,
-					 hw_pgsize);
-		memset(&pginfo, 0, sizeof(pginfo));
-		pginfo.type = EHCA_MR_PGI_PHYS;
-		pginfo.num_kpages = num_kpages;
-		pginfo.hwpage_size = hw_pgsize;
-		pginfo.num_hwpages = num_hwpages;
-		pginfo.u.phy.num_phys_buf = num_phys_buf;
-		pginfo.u.phy.phys_buf_array = phys_buf_array;
-		pginfo.next_hwpage =
-			((u64)iova_start & ~PAGE_MASK) / hw_pgsize;
-
-		ret = ehca_reg_mr(shca, e_mr, iova_start, size, mr_access_flags,
-				  e_pd, &pginfo, &e_mr->ib.ib_mr.lkey,
-				  &e_mr->ib.ib_mr.rkey, EHCA_REG_MR);
-		if (ret) {
-			ib_mr = ERR_PTR(ret);
-			goto reg_phys_mr_exit1;
-		}
-	}
-
-	/* successful registration of all pages */
-	return &e_mr->ib.ib_mr;
-
-reg_phys_mr_exit1:
-	ehca_mr_delete(e_mr);
-reg_phys_mr_exit0:
-	if (IS_ERR(ib_mr))
-		ehca_err(pd->device, "h_ret=%li pd=%p phys_buf_array=%p "
-			 "num_phys_buf=%x mr_access_flags=%x iova_start=%p",
-			 PTR_ERR(ib_mr), pd, phys_buf_array,
-			 num_phys_buf, mr_access_flags, iova_start);
-	return ib_mr;
-} /* end ehca_reg_phys_mr() */
-
-/*----------------------------------------------------------------------*/
-
 struct ib_mr *ehca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 			       u64 virt, int mr_access_flags,
 			       struct ib_udata *udata)
@@ -437,158 +323,6 @@ reg_user_mr_exit0:
 
 /*----------------------------------------------------------------------*/
 
-int ehca_rereg_phys_mr(struct ib_mr *mr,
-		       int mr_rereg_mask,
-		       struct ib_pd *pd,
-		       struct ib_phys_buf *phys_buf_array,
-		       int num_phys_buf,
-		       int mr_access_flags,
-		       u64 *iova_start)
-{
-	int ret;
-
-	struct ehca_shca *shca =
-		container_of(mr->device, struct ehca_shca, ib_device);
-	struct ehca_mr *e_mr = container_of(mr, struct ehca_mr, ib.ib_mr);
-	u64 new_size;
-	u64 *new_start;
-	u32 new_acl;
-	struct ehca_pd *new_pd;
-	u32 tmp_lkey, tmp_rkey;
-	unsigned long sl_flags;
-	u32 num_kpages = 0;
-	u32 num_hwpages = 0;
-	struct ehca_mr_pginfo pginfo;
-
-	if (!(mr_rereg_mask & IB_MR_REREG_TRANS)) {
-		/* TODO not supported, because PHYP rereg hCall needs pages */
-		ehca_err(mr->device, "rereg without IB_MR_REREG_TRANS not "
-			 "supported yet, mr_rereg_mask=%x", mr_rereg_mask);
-		ret = -EINVAL;
-		goto rereg_phys_mr_exit0;
-	}
-
-	if (mr_rereg_mask & IB_MR_REREG_PD) {
-		if (!pd) {
-			ehca_err(mr->device, "rereg with bad pd, pd=%p "
-				 "mr_rereg_mask=%x", pd, mr_rereg_mask);
-			ret = -EINVAL;
-			goto rereg_phys_mr_exit0;
-		}
-	}
-
-	if ((mr_rereg_mask &
-	     ~(IB_MR_REREG_TRANS | IB_MR_REREG_PD | IB_MR_REREG_ACCESS)) ||
-	    (mr_rereg_mask == 0)) {
-		ret = -EINVAL;
-		goto rereg_phys_mr_exit0;
-	}
-
-	/* check other parameters */
-	if (e_mr == shca->maxmr) {
-		/* should be impossible, however reject to be sure */
-		ehca_err(mr->device, "rereg internal max-MR impossible, mr=%p "
-			 "shca->maxmr=%p mr->lkey=%x",
-			 mr, shca->maxmr, mr->lkey);
-		ret = -EINVAL;
-		goto rereg_phys_mr_exit0;
-	}
-	if (mr_rereg_mask & IB_MR_REREG_TRANS) { /* transl., i.e. addr/size */
-		if (e_mr->flags & EHCA_MR_FLAG_FMR) {
-			ehca_err(mr->device, "not supported for FMR, mr=%p "
-				 "flags=%x", mr, e_mr->flags);
-			ret = -EINVAL;
-			goto rereg_phys_mr_exit0;
-		}
-		if (!phys_buf_array || num_phys_buf <= 0) {
-			ehca_err(mr->device, "bad input values mr_rereg_mask=%x"
-				 " phys_buf_array=%p num_phys_buf=%x",
-				 mr_rereg_mask, phys_buf_array, num_phys_buf);
-			ret = -EINVAL;
-			goto rereg_phys_mr_exit0;
-		}
-	}
-	if ((mr_rereg_mask & IB_MR_REREG_ACCESS) &&	/* change ACL */
-	    (((mr_access_flags & IB_ACCESS_REMOTE_WRITE) &&
-	      !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)) ||
-	     ((mr_access_flags & IB_ACCESS_REMOTE_ATOMIC) &&
-	      !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)))) {
-		/*
-		 * Remote Write Access requires Local Write Access
-		 * Remote Atomic Access requires Local Write Access
-		 */
-		ehca_err(mr->device, "bad input values: mr_rereg_mask=%x "
-			 "mr_access_flags=%x", mr_rereg_mask, mr_access_flags);
-		ret = -EINVAL;
-		goto rereg_phys_mr_exit0;
-	}
-
-	/* set requested values dependent on rereg request */
-	spin_lock_irqsave(&e_mr->mrlock, sl_flags);
-	new_start = e_mr->start;
-	new_size = e_mr->size;
-	new_acl = e_mr->acl;
-	new_pd = container_of(mr->pd, struct ehca_pd, ib_pd);
-
-	if (mr_rereg_mask & IB_MR_REREG_TRANS) {
-		u64 hw_pgsize = ehca_get_max_hwpage_size(shca);
-
-		new_start = iova_start;	/* change address */
-		/* check physical buffer list and calculate size */
-		ret = ehca_mr_chk_buf_and_calc_size(phys_buf_array,
-						    num_phys_buf, iova_start,
-						    &new_size);
-		if (ret)
-			goto rereg_phys_mr_exit1;
-		if ((new_size == 0) ||
-		    (((u64)iova_start + new_size) < (u64)iova_start)) {
-			ehca_err(mr->device, "bad input values: new_size=%llx "
-				 "iova_start=%p", new_size, iova_start);
-			ret = -EINVAL;
-			goto rereg_phys_mr_exit1;
-		}
-		num_kpages = NUM_CHUNKS(((u64)new_start % PAGE_SIZE) +
-					new_size, PAGE_SIZE);
-		num_hwpages = NUM_CHUNKS(((u64)new_start % hw_pgsize) +
-					 new_size, hw_pgsize);
-		memset(&pginfo, 0, sizeof(pginfo));
-		pginfo.type = EHCA_MR_PGI_PHYS;
-		pginfo.num_kpages = num_kpages;
-		pginfo.hwpage_size = hw_pgsize;
-		pginfo.num_hwpages = num_hwpages;
-		pginfo.u.phy.num_phys_buf = num_phys_buf;
-		pginfo.u.phy.phys_buf_array = phys_buf_array;
-		pginfo.next_hwpage =
-			((u64)iova_start & ~PAGE_MASK) / hw_pgsize;
-	}
-	if (mr_rereg_mask & IB_MR_REREG_ACCESS)
-		new_acl = mr_access_flags;
-	if (mr_rereg_mask & IB_MR_REREG_PD)
-		new_pd = container_of(pd, struct ehca_pd, ib_pd);
-
-	ret = ehca_rereg_mr(shca, e_mr, new_start, new_size, new_acl,
-			    new_pd, &pginfo, &tmp_lkey, &tmp_rkey);
-	if (ret)
-		goto rereg_phys_mr_exit1;
-
-	/* successful reregistration */
-	if (mr_rereg_mask & IB_MR_REREG_PD)
-		mr->pd = pd;
-	mr->lkey = tmp_lkey;
-	mr->rkey = tmp_rkey;
-
-rereg_phys_mr_exit1:
-	spin_unlock_irqrestore(&e_mr->mrlock, sl_flags);
-rereg_phys_mr_exit0:
-	if (ret)
-		ehca_err(mr->device, "ret=%i mr=%p mr_rereg_mask=%x pd=%p "
-			 "phys_buf_array=%p num_phys_buf=%x mr_access_flags=%x "
-			 "iova_start=%p",
-			 ret, mr, mr_rereg_mask, pd, phys_buf_array,
-			 num_phys_buf, mr_access_flags, iova_start);
-	return ret;
-} /* end ehca_rereg_phys_mr() */
-
 int ehca_dereg_mr(struct ib_mr *mr)
 {
 	int ret = 0;
@@ -1713,61 +1447,6 @@ ehca_dereg_internal_maxmr_exit0:
 
 /*----------------------------------------------------------------------*/
 
-/*
- * check physical buffer array of MR verbs for validness and
- * calculates MR size
- */
-int ehca_mr_chk_buf_and_calc_size(struct ib_phys_buf *phys_buf_array,
-				  int num_phys_buf,
-				  u64 *iova_start,
-				  u64 *size)
-{
-	struct ib_phys_buf *pbuf = phys_buf_array;
-	u64 size_count = 0;
-	u32 i;
-
-	if (num_phys_buf == 0) {
-		ehca_gen_err("bad phys buf array len, num_phys_buf=0");
-		return -EINVAL;
-	}
-	/* check first buffer */
-	if (((u64)iova_start & ~PAGE_MASK) != (pbuf->addr & ~PAGE_MASK)) {
-		ehca_gen_err("iova_start/addr mismatch, iova_start=%p "
-			     "pbuf->addr=%llx pbuf->size=%llx",
-			     iova_start, pbuf->addr, pbuf->size);
-		return -EINVAL;
-	}
-	if (((pbuf->addr + pbuf->size) % PAGE_SIZE) &&
-	    (num_phys_buf > 1)) {
-		ehca_gen_err("addr/size mismatch in 1st buf, pbuf->addr=%llx "
-			     "pbuf->size=%llx", pbuf->addr, pbuf->size);
-		return -EINVAL;
-	}
-
-	for (i = 0; i < num_phys_buf; i++) {
-		if ((i > 0) && (pbuf->addr % PAGE_SIZE)) {
-			ehca_gen_err("bad address, i=%x pbuf->addr=%llx "
-				     "pbuf->size=%llx",
-				     i, pbuf->addr, pbuf->size);
-			return -EINVAL;
-		}
-		if (((i > 0) &&	/* not 1st */
-		     (i < (num_phys_buf - 1)) &&	/* not last */
-		     (pbuf->size % PAGE_SIZE)) || (pbuf->size == 0)) {
-			ehca_gen_err("bad size, i=%x pbuf->size=%llx",
-				     i, pbuf->size);
-			return -EINVAL;
-		}
-		size_count += pbuf->size;
-		pbuf++;
-	}
-
-	*size = size_count;
-	return 0;
-} /* end ehca_mr_chk_buf_and_calc_size() */
-
-/*----------------------------------------------------------------------*/
-
 /* check page list of map FMR verb for validness */
 int ehca_fmr_check_page_list(struct ehca_mr *e_fmr,
 			     u64 *page_list,
diff --git a/drivers/staging/rdma/ehca/ehca_mrmw.h b/drivers/staging/rdma/ehca/ehca_mrmw.h
index 50d8b51..52bfa95 100644
--- a/drivers/staging/rdma/ehca/ehca_mrmw.h
+++ b/drivers/staging/rdma/ehca/ehca_mrmw.h
@@ -98,11 +98,6 @@ int ehca_reg_maxmr(struct ehca_shca *shca,
 
 int ehca_dereg_internal_maxmr(struct ehca_shca *shca);
 
-int ehca_mr_chk_buf_and_calc_size(struct ib_phys_buf *phys_buf_array,
-				  int num_phys_buf,
-				  u64 *iova_start,
-				  u64 *size);
-
 int ehca_fmr_check_page_list(struct ehca_mr *e_fmr,
 			     u64 *page_list,
 			     int list_len);
diff --git a/drivers/staging/rdma/hfi1/mr.c b/drivers/staging/rdma/hfi1/mr.c
index 568f185..a3f8b88 100644
--- a/drivers/staging/rdma/hfi1/mr.c
+++ b/drivers/staging/rdma/hfi1/mr.c
@@ -167,10 +167,7 @@ static struct hfi1_mr *alloc_mr(int count, struct ib_pd *pd)
 	rval = init_mregion(&mr->mr, pd, count);
 	if (rval)
 		goto bail;
-	/*
-	 * ib_reg_phys_mr() will initialize mr->ibmr except for
-	 * lkey and rkey.
-	 */
+
 	rval = hfi1_alloc_lkey(&mr->mr, 0);
 	if (rval)
 		goto bail_mregion;
@@ -188,52 +185,6 @@ bail:
 }
 
 /**
- * hfi1_reg_phys_mr - register a physical memory region
- * @pd: protection domain for this memory region
- * @buffer_list: pointer to the list of physical buffers to register
- * @num_phys_buf: the number of physical buffers to register
- * @iova_start: the starting address passed over IB which maps to this MR
- *
- * Returns the memory region on success, otherwise returns an errno.
- */
-struct ib_mr *hfi1_reg_phys_mr(struct ib_pd *pd,
-			       struct ib_phys_buf *buffer_list,
-			       int num_phys_buf, int acc, u64 *iova_start)
-{
-	struct hfi1_mr *mr;
-	int n, m, i;
-	struct ib_mr *ret;
-
-	mr = alloc_mr(num_phys_buf, pd);
-	if (IS_ERR(mr)) {
-		ret = (struct ib_mr *)mr;
-		goto bail;
-	}
-
-	mr->mr.user_base = *iova_start;
-	mr->mr.iova = *iova_start;
-	mr->mr.access_flags = acc;
-
-	m = 0;
-	n = 0;
-	for (i = 0; i < num_phys_buf; i++) {
-		mr->mr.map[m]->segs[n].vaddr = (void *) buffer_list[i].addr;
-		mr->mr.map[m]->segs[n].length = buffer_list[i].size;
-		mr->mr.length += buffer_list[i].size;
-		n++;
-		if (n == HFI1_SEGSZ) {
-			m++;
-			n = 0;
-		}
-	}
-
-	ret = &mr->ibmr;
-
-bail:
-	return ret;
-}
-
-/**
  * hfi1_reg_user_mr - register a userspace memory region
  * @pd: protection domain for this memory region
  * @start: starting userspace address
diff --git a/drivers/staging/rdma/hfi1/verbs.c b/drivers/staging/rdma/hfi1/verbs.c
index cb5b346..1b3d8d9 100644
--- a/drivers/staging/rdma/hfi1/verbs.c
+++ b/drivers/staging/rdma/hfi1/verbs.c
@@ -2011,7 +2011,6 @@ int hfi1_register_ib_device(struct hfi1_devdata *dd)
 	ibdev->poll_cq = hfi1_poll_cq;
 	ibdev->req_notify_cq = hfi1_req_notify_cq;
 	ibdev->get_dma_mr = hfi1_get_dma_mr;
-	ibdev->reg_phys_mr = hfi1_reg_phys_mr;
 	ibdev->reg_user_mr = hfi1_reg_user_mr;
 	ibdev->dereg_mr = hfi1_dereg_mr;
 	ibdev->alloc_mr = hfi1_alloc_mr;
diff --git a/drivers/staging/rdma/hfi1/verbs.h b/drivers/staging/rdma/hfi1/verbs.h
index 041ad07..255792a 100644
--- a/drivers/staging/rdma/hfi1/verbs.h
+++ b/drivers/staging/rdma/hfi1/verbs.h
@@ -1012,10 +1012,6 @@ int hfi1_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata);
 
 struct ib_mr *hfi1_get_dma_mr(struct ib_pd *pd, int acc);
 
-struct ib_mr *hfi1_reg_phys_mr(struct ib_pd *pd,
-			       struct ib_phys_buf *buffer_list,
-			       int num_phys_buf, int acc, u64 *iova_start);
-
 struct ib_mr *hfi1_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 			       u64 virt_addr, int mr_access_flags,
 			       struct ib_udata *udata);
diff --git a/drivers/staging/rdma/ipath/ipath_mr.c b/drivers/staging/rdma/ipath/ipath_mr.c
index c7278f6..b76b0ce 100644
--- a/drivers/staging/rdma/ipath/ipath_mr.c
+++ b/drivers/staging/rdma/ipath/ipath_mr.c
@@ -98,10 +98,6 @@ static struct ipath_mr *alloc_mr(int count,
 	}
 	mr->mr.mapsz = m;
 
-	/*
-	 * ib_reg_phys_mr() will initialize mr->ibmr except for
-	 * lkey and rkey.
-	 */
 	if (!ipath_alloc_lkey(lk_table, &mr->mr))
 		goto bail;
 	mr->ibmr.rkey = mr->ibmr.lkey = mr->mr.lkey;
@@ -121,57 +117,6 @@ done:
 }
 
 /**
- * ipath_reg_phys_mr - register a physical memory region
- * @pd: protection domain for this memory region
- * @buffer_list: pointer to the list of physical buffers to register
- * @num_phys_buf: the number of physical buffers to register
- * @iova_start: the starting address passed over IB which maps to this MR
- *
- * Returns the memory region on success, otherwise returns an errno.
- */
-struct ib_mr *ipath_reg_phys_mr(struct ib_pd *pd,
-				struct ib_phys_buf *buffer_list,
-				int num_phys_buf, int acc, u64 *iova_start)
-{
-	struct ipath_mr *mr;
-	int n, m, i;
-	struct ib_mr *ret;
-
-	mr = alloc_mr(num_phys_buf, &to_idev(pd->device)->lk_table);
-	if (mr == NULL) {
-		ret = ERR_PTR(-ENOMEM);
-		goto bail;
-	}
-
-	mr->mr.pd = pd;
-	mr->mr.user_base = *iova_start;
-	mr->mr.iova = *iova_start;
-	mr->mr.length = 0;
-	mr->mr.offset = 0;
-	mr->mr.access_flags = acc;
-	mr->mr.max_segs = num_phys_buf;
-	mr->umem = NULL;
-
-	m = 0;
-	n = 0;
-	for (i = 0; i < num_phys_buf; i++) {
-		mr->mr.map[m]->segs[n].vaddr = (void *) buffer_list[i].addr;
-		mr->mr.map[m]->segs[n].length = buffer_list[i].size;
-		mr->mr.length += buffer_list[i].size;
-		n++;
-		if (n == IPATH_SEGSZ) {
-			m++;
-			n = 0;
-		}
-	}
-
-	ret = &mr->ibmr;
-
-bail:
-	return ret;
-}
-
-/**
  * ipath_reg_user_mr - register a userspace memory region
  * @pd: protection domain for this memory region
  * @start: starting userspace address
diff --git a/drivers/staging/rdma/ipath/ipath_verbs.c b/drivers/staging/rdma/ipath/ipath_verbs.c
index 7ab1520..02d8834 100644
--- a/drivers/staging/rdma/ipath/ipath_verbs.c
+++ b/drivers/staging/rdma/ipath/ipath_verbs.c
@@ -2149,7 +2149,6 @@ int ipath_register_ib_device(struct ipath_devdata *dd)
 	dev->poll_cq = ipath_poll_cq;
 	dev->req_notify_cq = ipath_req_notify_cq;
 	dev->get_dma_mr = ipath_get_dma_mr;
-	dev->reg_phys_mr = ipath_reg_phys_mr;
 	dev->reg_user_mr = ipath_reg_user_mr;
 	dev->dereg_mr = ipath_dereg_mr;
 	dev->alloc_fmr = ipath_alloc_fmr;
diff --git a/drivers/staging/rdma/ipath/ipath_verbs.h b/drivers/staging/rdma/ipath/ipath_verbs.h
index 0a90a56..6c70a89 100644
--- a/drivers/staging/rdma/ipath/ipath_verbs.h
+++ b/drivers/staging/rdma/ipath/ipath_verbs.h
@@ -828,10 +828,6 @@ int ipath_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata);
 
 struct ib_mr *ipath_get_dma_mr(struct ib_pd *pd, int acc);
 
-struct ib_mr *ipath_reg_phys_mr(struct ib_pd *pd,
-				struct ib_phys_buf *buffer_list,
-				int num_phys_buf, int acc, u64 *iova_start);
-
 struct ib_mr *ipath_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 				u64 virt_addr, int mr_access_flags,
 				struct ib_udata *udata);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 83d6ee8..a3402af 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1165,6 +1165,10 @@ struct ib_phys_buf {
 	u64      size;
 };
 
+/*
+ * XXX: these are apparently used for ->rereg_user_mr, no idea why they
+ * are hidden here instead of a uapi header!
+ */
 enum ib_mr_rereg_flags {
 	IB_MR_REREG_TRANS	= 1,
 	IB_MR_REREG_PD		= (1<<1),
@@ -1683,11 +1687,6 @@ struct ib_device {
 						      int wc_cnt);
 	struct ib_mr *             (*get_dma_mr)(struct ib_pd *pd,
 						 int mr_access_flags);
-	struct ib_mr *             (*reg_phys_mr)(struct ib_pd *pd,
-						  struct ib_phys_buf *phys_buf_array,
-						  int num_phys_buf,
-						  int mr_access_flags,
-						  u64 *iova_start);
 	struct ib_mr *             (*reg_user_mr)(struct ib_pd *pd,
 						  u64 start, u64 length,
 						  u64 virt_addr,
@@ -1707,13 +1706,6 @@ struct ib_device {
 	int                        (*map_mr_sg)(struct ib_mr *mr,
 						struct scatterlist *sg,
 						int sg_nents);
-	int                        (*rereg_phys_mr)(struct ib_mr *mr,
-						    int mr_rereg_mask,
-						    struct ib_pd *pd,
-						    struct ib_phys_buf *phys_buf_array,
-						    int num_phys_buf,
-						    int mr_access_flags,
-						    u64 *iova_start);
 	struct ib_mw *             (*alloc_mw)(struct ib_pd *pd,
 					       enum ib_mw_type type);
 	int                        (*bind_mw)(struct ib_qp *qp,
-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 04/11] IB: remove in-kernel support for memory windows
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Remove the unused ib_alloc_mw and ib_bind_mw functions, remove the
unused IB_WR_BIND_MW and IB_WC_BIND_MW opcodes, and move ib_dealloc_mw
into the uverbs module.

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
 Documentation/infiniband/core_locking.txt   |  2 -
 drivers/infiniband/core/uverbs.h            |  2 +
 drivers/infiniband/core/uverbs_cmd.c        |  4 +-
 drivers/infiniband/core/uverbs_main.c       | 13 ++++-
 drivers/infiniband/core/verbs.c             | 36 -------------
 drivers/infiniband/hw/cxgb3/iwch_cq.c       |  4 --
 drivers/infiniband/hw/cxgb3/iwch_provider.c |  1 -
 drivers/infiniband/hw/cxgb3/iwch_provider.h |  3 --
 drivers/infiniband/hw/cxgb3/iwch_qp.c       | 82 ----------------------------
 drivers/infiniband/hw/cxgb4/cq.c            |  3 --
 drivers/infiniband/hw/cxgb4/iw_cxgb4.h      |  2 -
 drivers/infiniband/hw/cxgb4/provider.c      |  1 -
 drivers/infiniband/hw/cxgb4/qp.c            |  5 --
 drivers/infiniband/hw/mlx4/cq.c             |  3 --
 drivers/infiniband/hw/mlx4/main.c           |  1 -
 drivers/infiniband/hw/mlx4/mlx4_ib.h        |  2 -
 drivers/infiniband/hw/mlx4/mr.c             | 22 --------
 drivers/infiniband/hw/mlx4/qp.c             | 27 ----------
 drivers/infiniband/hw/mlx5/cq.c             |  3 --
 drivers/infiniband/hw/mthca/mthca_cq.c      |  3 --
 drivers/infiniband/hw/nes/nes_verbs.c       | 75 --------------------------
 drivers/staging/rdma/amso1100/c2_cq.c       |  3 --
 drivers/staging/rdma/ehca/ehca_iverbs.h     |  3 --
 drivers/staging/rdma/ehca/ehca_main.c       |  1 -
 drivers/staging/rdma/ehca/ehca_mrmw.c       | 12 -----
 drivers/staging/rdma/ehca/ehca_reqs.c       |  1 -
 include/rdma/ib_verbs.h                     | 83 -----------------------------
 27 files changed, 16 insertions(+), 381 deletions(-)

diff --git a/Documentation/infiniband/core_locking.txt b/Documentation/infiniband/core_locking.txt
index e167854..4b1f36b 100644
--- a/Documentation/infiniband/core_locking.txt
+++ b/Documentation/infiniband/core_locking.txt
@@ -15,7 +15,6 @@ Sleeping and interrupt context
     modify_ah
     query_ah
     destroy_ah
-    bind_mw
     post_send
     post_recv
     poll_cq
@@ -31,7 +30,6 @@ Sleeping and interrupt context
     ib_modify_ah
     ib_query_ah
     ib_destroy_ah
-    ib_bind_mw
     ib_post_send
     ib_post_recv
     ib_req_notify_cq
diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
index 94bbd8c..612ccfd 100644
--- a/drivers/infiniband/core/uverbs.h
+++ b/drivers/infiniband/core/uverbs.h
@@ -204,6 +204,8 @@ void ib_uverbs_event_handler(struct ib_event_handler *handler,
 			     struct ib_event *event);
 void ib_uverbs_dealloc_xrcd(struct ib_uverbs_device *dev, struct ib_xrcd *xrcd);
 
+int uverbs_dealloc_mw(struct ib_mw *mw);
+
 struct ib_uverbs_flow_spec {
 	union {
 		union {
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 29c1413..b0f7872 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -1240,7 +1240,7 @@ err_copy:
 	idr_remove_uobj(&ib_uverbs_mw_idr, uobj);
 
 err_unalloc:
-	ib_dealloc_mw(mw);
+	uverbs_dealloc_mw(mw);
 
 err_put:
 	put_pd_read(pd);
@@ -1269,7 +1269,7 @@ ssize_t ib_uverbs_dealloc_mw(struct ib_uverbs_file *file,
 
 	mw = uobj->object;
 
-	ret = ib_dealloc_mw(mw);
+	ret = uverbs_dealloc_mw(mw);
 	if (!ret)
 		uobj->live = 0;
 
diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
index e3ef288..39680ae 100644
--- a/drivers/infiniband/core/uverbs_main.c
+++ b/drivers/infiniband/core/uverbs_main.c
@@ -133,6 +133,17 @@ static int (*uverbs_ex_cmd_table[])(struct ib_uverbs_file *file,
 static void ib_uverbs_add_one(struct ib_device *device);
 static void ib_uverbs_remove_one(struct ib_device *device, void *client_data);
 
+int uverbs_dealloc_mw(struct ib_mw *mw)
+{
+	struct ib_pd *pd = mw->pd;
+	int ret;
+
+	ret = mw->device->dealloc_mw(mw);
+	if (!ret)
+		atomic_dec(&pd->usecnt);
+	return ret;
+}
+
 static void ib_uverbs_release_dev(struct kobject *kobj)
 {
 	struct ib_uverbs_device *dev =
@@ -224,7 +235,7 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
 		struct ib_mw *mw = uobj->object;
 
 		idr_remove_uobj(&ib_uverbs_mw_idr, uobj);
-		ib_dealloc_mw(mw);
+		uverbs_dealloc_mw(mw);
 		kfree(uobj);
 	}
 
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 0e21367..b31beb1 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -1267,42 +1267,6 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd,
 }
 EXPORT_SYMBOL(ib_alloc_mr);
 
-/* Memory windows */
-
-struct ib_mw *ib_alloc_mw(struct ib_pd *pd, enum ib_mw_type type)
-{
-	struct ib_mw *mw;
-
-	if (!pd->device->alloc_mw)
-		return ERR_PTR(-ENOSYS);
-
-	mw = pd->device->alloc_mw(pd, type);
-	if (!IS_ERR(mw)) {
-		mw->device  = pd->device;
-		mw->pd      = pd;
-		mw->uobject = NULL;
-		mw->type    = type;
-		atomic_inc(&pd->usecnt);
-	}
-
-	return mw;
-}
-EXPORT_SYMBOL(ib_alloc_mw);
-
-int ib_dealloc_mw(struct ib_mw *mw)
-{
-	struct ib_pd *pd;
-	int ret;
-
-	pd = mw->pd;
-	ret = mw->device->dealloc_mw(mw);
-	if (!ret)
-		atomic_dec(&pd->usecnt);
-
-	return ret;
-}
-EXPORT_SYMBOL(ib_dealloc_mw);
-
 /* "Fast" memory regions */
 
 struct ib_fmr *ib_alloc_fmr(struct ib_pd *pd,
diff --git a/drivers/infiniband/hw/cxgb3/iwch_cq.c b/drivers/infiniband/hw/cxgb3/iwch_cq.c
index cfe4049..97fbfd2 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_cq.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_cq.c
@@ -115,10 +115,6 @@ static int iwch_poll_cq_one(struct iwch_dev *rhp, struct iwch_cq *chp,
 		case T3_SEND_WITH_SE_INV:
 			wc->opcode = IB_WC_SEND;
 			break;
-		case T3_BIND_MW:
-			wc->opcode = IB_WC_BIND_MW;
-			break;
-
 		case T3_LOCAL_INV:
 			wc->opcode = IB_WC_LOCAL_INV;
 			break;
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index 9576e15..b184933 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -1350,7 +1350,6 @@ int iwch_register_device(struct iwch_dev *dev)
 	dev->ibdev.reg_user_mr = iwch_reg_user_mr;
 	dev->ibdev.dereg_mr = iwch_dereg_mr;
 	dev->ibdev.alloc_mw = iwch_alloc_mw;
-	dev->ibdev.bind_mw = iwch_bind_mw;
 	dev->ibdev.dealloc_mw = iwch_dealloc_mw;
 	dev->ibdev.alloc_mr = iwch_alloc_mr;
 	dev->ibdev.map_mr_sg = iwch_map_mr_sg;
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.h b/drivers/infiniband/hw/cxgb3/iwch_provider.h
index f4fa6d6..f24df44 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.h
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.h
@@ -330,9 +330,6 @@ int iwch_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		      struct ib_send_wr **bad_wr);
 int iwch_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr,
 		      struct ib_recv_wr **bad_wr);
-int iwch_bind_mw(struct ib_qp *qp,
-			     struct ib_mw *mw,
-			     struct ib_mw_bind *mw_bind);
 int iwch_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc);
 int iwch_post_terminate(struct iwch_qp *qhp, struct respQ_msg_t *rsp_msg);
 int iwch_post_zb_read(struct iwch_ep *ep);
diff --git a/drivers/infiniband/hw/cxgb3/iwch_qp.c b/drivers/infiniband/hw/cxgb3/iwch_qp.c
index d0548fc..d939980 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_qp.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_qp.c
@@ -526,88 +526,6 @@ out:
 	return err;
 }
 
-int iwch_bind_mw(struct ib_qp *qp,
-			     struct ib_mw *mw,
-			     struct ib_mw_bind *mw_bind)
-{
-	struct iwch_dev *rhp;
-	struct iwch_mw *mhp;
-	struct iwch_qp *qhp;
-	union t3_wr *wqe;
-	u32 pbl_addr;
-	u8 page_size;
-	u32 num_wrs;
-	unsigned long flag;
-	struct ib_sge sgl;
-	int err=0;
-	enum t3_wr_flags t3_wr_flags;
-	u32 idx;
-	struct t3_swsq *sqp;
-
-	qhp = to_iwch_qp(qp);
-	mhp = to_iwch_mw(mw);
-	rhp = qhp->rhp;
-
-	spin_lock_irqsave(&qhp->lock, flag);
-	if (qhp->attr.state > IWCH_QP_STATE_RTS) {
-		spin_unlock_irqrestore(&qhp->lock, flag);
-		return -EINVAL;
-	}
-	num_wrs = Q_FREECNT(qhp->wq.sq_rptr, qhp->wq.sq_wptr,
-			    qhp->wq.sq_size_log2);
-	if (num_wrs == 0) {
-		spin_unlock_irqrestore(&qhp->lock, flag);
-		return -ENOMEM;
-	}
-	idx = Q_PTR2IDX(qhp->wq.wptr, qhp->wq.size_log2);
-	PDBG("%s: idx 0x%0x, mw 0x%p, mw_bind 0x%p\n", __func__, idx,
-	     mw, mw_bind);
-	wqe = (union t3_wr *) (qhp->wq.queue + idx);
-
-	t3_wr_flags = 0;
-	if (mw_bind->send_flags & IB_SEND_SIGNALED)
-		t3_wr_flags = T3_COMPLETION_FLAG;
-
-	sgl.addr = mw_bind->bind_info.addr;
-	sgl.lkey = mw_bind->bind_info.mr->lkey;
-	sgl.length = mw_bind->bind_info.length;
-	wqe->bind.reserved = 0;
-	wqe->bind.type = TPT_VATO;
-
-	/* TBD: check perms */
-	wqe->bind.perms = iwch_ib_to_tpt_bind_access(
-		mw_bind->bind_info.mw_access_flags);
-	wqe->bind.mr_stag = cpu_to_be32(mw_bind->bind_info.mr->lkey);
-	wqe->bind.mw_stag = cpu_to_be32(mw->rkey);
-	wqe->bind.mw_len = cpu_to_be32(mw_bind->bind_info.length);
-	wqe->bind.mw_va = cpu_to_be64(mw_bind->bind_info.addr);
-	err = iwch_sgl2pbl_map(rhp, &sgl, 1, &pbl_addr, &page_size);
-	if (err) {
-		spin_unlock_irqrestore(&qhp->lock, flag);
-		return err;
-	}
-	wqe->send.wrid.id0.hi = qhp->wq.sq_wptr;
-	sqp = qhp->wq.sq + Q_PTR2IDX(qhp->wq.sq_wptr, qhp->wq.sq_size_log2);
-	sqp->wr_id = mw_bind->wr_id;
-	sqp->opcode = T3_BIND_MW;
-	sqp->sq_wptr = qhp->wq.sq_wptr;
-	sqp->complete = 0;
-	sqp->signaled = (mw_bind->send_flags & IB_SEND_SIGNALED);
-	wqe->bind.mr_pbl_addr = cpu_to_be32(pbl_addr);
-	wqe->bind.mr_pagesz = page_size;
-	build_fw_riwrh((void *)wqe, T3_WR_BIND, t3_wr_flags,
-		       Q_GENBIT(qhp->wq.wptr, qhp->wq.size_log2), 0,
-		       sizeof(struct t3_bind_mw_wr) >> 3, T3_SOPEOP);
-	++(qhp->wq.wptr);
-	++(qhp->wq.sq_wptr);
-	spin_unlock_irqrestore(&qhp->lock, flag);
-
-	if (cxio_wq_db_enabled(&qhp->wq))
-		ring_doorbell(qhp->wq.doorbell, qhp->wq.qpid);
-
-	return err;
-}
-
 static inline void build_term_codes(struct respQ_msg_t *rsp_msg,
 				    u8 *layer_type, u8 *ecode)
 {
diff --git a/drivers/infiniband/hw/cxgb4/cq.c b/drivers/infiniband/hw/cxgb4/cq.c
index de9cd69..cf21df4 100644
--- a/drivers/infiniband/hw/cxgb4/cq.c
+++ b/drivers/infiniband/hw/cxgb4/cq.c
@@ -744,9 +744,6 @@ static int c4iw_poll_cq_one(struct c4iw_cq *chp, struct ib_wc *wc)
 		case FW_RI_SEND_WITH_SE:
 			wc->opcode = IB_WC_SEND;
 			break;
-		case FW_RI_BIND_MW:
-			wc->opcode = IB_WC_BIND_MW;
-			break;
 
 		case FW_RI_LOCAL_INV:
 			wc->opcode = IB_WC_LOCAL_INV;
diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
index dd00cf2..fb2de75 100644
--- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
+++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
@@ -947,8 +947,6 @@ int c4iw_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		      struct ib_send_wr **bad_wr);
 int c4iw_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr,
 		      struct ib_recv_wr **bad_wr);
-int c4iw_bind_mw(struct ib_qp *qp, struct ib_mw *mw,
-		 struct ib_mw_bind *mw_bind);
 int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param);
 int c4iw_create_listen(struct iw_cm_id *cm_id, int backlog);
 int c4iw_destroy_listen(struct iw_cm_id *cm_id);
diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
index 186319e..5544adc 100644
--- a/drivers/infiniband/hw/cxgb4/provider.c
+++ b/drivers/infiniband/hw/cxgb4/provider.c
@@ -512,7 +512,6 @@ int c4iw_register_device(struct c4iw_dev *dev)
 	dev->ibdev.reg_user_mr = c4iw_reg_user_mr;
 	dev->ibdev.dereg_mr = c4iw_dereg_mr;
 	dev->ibdev.alloc_mw = c4iw_alloc_mw;
-	dev->ibdev.bind_mw = c4iw_bind_mw;
 	dev->ibdev.dealloc_mw = c4iw_dealloc_mw;
 	dev->ibdev.alloc_mr = c4iw_alloc_mr;
 	dev->ibdev.map_mr_sg = c4iw_map_mr_sg;
diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
index aa515af..e99345e 100644
--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -933,11 +933,6 @@ int c4iw_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr,
 	return err;
 }
 
-int c4iw_bind_mw(struct ib_qp *qp, struct ib_mw *mw, struct ib_mw_bind *mw_bind)
-{
-	return -ENOSYS;
-}
-
 static inline void build_term_codes(struct t4_cqe *err_cqe, u8 *layer_type,
 				    u8 *ecode)
 {
diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c
index b88fc8f..9f8b516 100644
--- a/drivers/infiniband/hw/mlx4/cq.c
+++ b/drivers/infiniband/hw/mlx4/cq.c
@@ -811,9 +811,6 @@ repoll:
 			wc->opcode    = IB_WC_MASKED_FETCH_ADD;
 			wc->byte_len  = 8;
 			break;
-		case MLX4_OPCODE_BIND_MW:
-			wc->opcode    = IB_WC_BIND_MW;
-			break;
 		case MLX4_OPCODE_LSO:
 			wc->opcode    = IB_WC_LSO;
 			break;
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index 7026580..4b1c40f 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -2313,7 +2313,6 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
 	if (dev->caps.flags & MLX4_DEV_CAP_FLAG_MEM_WINDOW ||
 	    dev->caps.bmme_flags & MLX4_BMME_FLAG_TYPE_2_WIN) {
 		ibdev->ib_dev.alloc_mw = mlx4_ib_alloc_mw;
-		ibdev->ib_dev.bind_mw = mlx4_ib_bind_mw;
 		ibdev->ib_dev.dealloc_mw = mlx4_ib_dealloc_mw;
 
 		ibdev->ib_dev.uverbs_cmd_mask |=
diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h
index 1caa11e..8916e9b 100644
--- a/drivers/infiniband/hw/mlx4/mlx4_ib.h
+++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h
@@ -704,8 +704,6 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 				  struct ib_udata *udata);
 int mlx4_ib_dereg_mr(struct ib_mr *mr);
 struct ib_mw *mlx4_ib_alloc_mw(struct ib_pd *pd, enum ib_mw_type type);
-int mlx4_ib_bind_mw(struct ib_qp *qp, struct ib_mw *mw,
-		    struct ib_mw_bind *mw_bind);
 int mlx4_ib_dealloc_mw(struct ib_mw *mw);
 struct ib_mr *mlx4_ib_alloc_mr(struct ib_pd *pd,
 			       enum ib_mr_type mr_type,
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index 4d1e1c6..242b94e 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -366,28 +366,6 @@ err_free:
 	return ERR_PTR(err);
 }
 
-int mlx4_ib_bind_mw(struct ib_qp *qp, struct ib_mw *mw,
-		    struct ib_mw_bind *mw_bind)
-{
-	struct ib_bind_mw_wr  wr;
-	struct ib_send_wr *bad_wr;
-	int ret;
-
-	memset(&wr, 0, sizeof(wr));
-	wr.wr.opcode		= IB_WR_BIND_MW;
-	wr.wr.wr_id		= mw_bind->wr_id;
-	wr.wr.send_flags	= mw_bind->send_flags;
-	wr.mw			= mw;
-	wr.bind_info		= mw_bind->bind_info;
-	wr.rkey			= ib_inc_rkey(mw->rkey);
-
-	ret = mlx4_ib_post_send(qp, &wr.wr, &bad_wr);
-	if (!ret)
-		mw->rkey = wr.rkey;
-
-	return ret;
-}
-
 int mlx4_ib_dealloc_mw(struct ib_mw *ibmw)
 {
 	struct mlx4_ib_mw *mw = to_mmw(ibmw);
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index a2e4ca5..9781ac3 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -114,7 +114,6 @@ static const __be32 mlx4_ib_opcode[] = {
 	[IB_WR_REG_MR]				= cpu_to_be32(MLX4_OPCODE_FMR),
 	[IB_WR_MASKED_ATOMIC_CMP_AND_SWP]	= cpu_to_be32(MLX4_OPCODE_MASKED_ATOMIC_CS),
 	[IB_WR_MASKED_ATOMIC_FETCH_AND_ADD]	= cpu_to_be32(MLX4_OPCODE_MASKED_ATOMIC_FA),
-	[IB_WR_BIND_MW]				= cpu_to_be32(MLX4_OPCODE_BIND_MW),
 };
 
 static struct mlx4_ib_sqp *to_msqp(struct mlx4_ib_qp *mqp)
@@ -2521,25 +2520,6 @@ static void set_reg_seg(struct mlx4_wqe_fmr_seg *fseg,
 	fseg->reserved[1]	= 0;
 }
 
-static void set_bind_seg(struct mlx4_wqe_bind_seg *bseg,
-		struct ib_bind_mw_wr *wr)
-{
-	bseg->flags1 =
-		convert_access(wr->bind_info.mw_access_flags) &
-		cpu_to_be32(MLX4_WQE_FMR_AND_BIND_PERM_REMOTE_READ  |
-			    MLX4_WQE_FMR_AND_BIND_PERM_REMOTE_WRITE |
-			    MLX4_WQE_FMR_AND_BIND_PERM_ATOMIC);
-	bseg->flags2 = 0;
-	if (wr->mw->type == IB_MW_TYPE_2)
-		bseg->flags2 |= cpu_to_be32(MLX4_WQE_BIND_TYPE_2);
-	if (wr->bind_info.mw_access_flags & IB_ZERO_BASED)
-		bseg->flags2 |= cpu_to_be32(MLX4_WQE_BIND_ZERO_BASED);
-	bseg->new_rkey = cpu_to_be32(wr->rkey);
-	bseg->lkey = cpu_to_be32(wr->bind_info.mr->lkey);
-	bseg->addr = cpu_to_be64(wr->bind_info.addr);
-	bseg->length = cpu_to_be64(wr->bind_info.length);
-}
-
 static void set_local_inv_seg(struct mlx4_wqe_local_inval_seg *iseg, u32 rkey)
 {
 	memset(iseg, 0, sizeof(*iseg));
@@ -2860,13 +2840,6 @@ int mlx4_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 				size += sizeof(struct mlx4_wqe_fmr_seg) / 16;
 				break;
 
-			case IB_WR_BIND_MW:
-				ctrl->srcrb_flags |=
-					cpu_to_be32(MLX4_WQE_CTRL_STRONG_ORDER);
-				set_bind_seg(wqe, bind_mw_wr(wr));
-				wqe  += sizeof(struct mlx4_wqe_bind_seg);
-				size += sizeof(struct mlx4_wqe_bind_seg) / 16;
-				break;
 			default:
 				/* No extra segments required for sends */
 				break;
diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index 3dfd287..82f86ed 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -154,9 +154,6 @@ static void handle_good_req(struct ib_wc *wc, struct mlx5_cqe64 *cqe,
 		wc->opcode    = IB_WC_MASKED_FETCH_ADD;
 		wc->byte_len  = 8;
 		break;
-	case MLX5_OPCODE_BIND_MW:
-		wc->opcode    = IB_WC_BIND_MW;
-		break;
 	case MLX5_OPCODE_UMR:
 		wc->opcode = get_umr_comp(wq, idx);
 		break;
diff --git a/drivers/infiniband/hw/mthca/mthca_cq.c b/drivers/infiniband/hw/mthca/mthca_cq.c
index 40ba833..a6531ff 100644
--- a/drivers/infiniband/hw/mthca/mthca_cq.c
+++ b/drivers/infiniband/hw/mthca/mthca_cq.c
@@ -608,9 +608,6 @@ static inline int mthca_poll_one(struct mthca_dev *dev,
 			entry->opcode    = IB_WC_FETCH_ADD;
 			entry->byte_len  = MTHCA_ATOMIC_BYTE_LEN;
 			break;
-		case MTHCA_OPCODE_BIND_MW:
-			entry->opcode    = IB_WC_BIND_MW;
-			break;
 		default:
 			entry->opcode    = MTHCA_OPCODE_INVALID;
 			break;
diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c
index 453ebc2..640f68f 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.c
+++ b/drivers/infiniband/hw/nes/nes_verbs.c
@@ -206,80 +206,6 @@ static int nes_dealloc_mw(struct ib_mw *ibmw)
 }
 
 
-/**
- * nes_bind_mw
- */
-static int nes_bind_mw(struct ib_qp *ibqp, struct ib_mw *ibmw,
-		struct ib_mw_bind *ibmw_bind)
-{
-	u64 u64temp;
-	struct nes_vnic *nesvnic = to_nesvnic(ibqp->device);
-	struct nes_device *nesdev = nesvnic->nesdev;
-	/* struct nes_mr *nesmr = to_nesmw(ibmw); */
-	struct nes_qp *nesqp = to_nesqp(ibqp);
-	struct nes_hw_qp_wqe *wqe;
-	unsigned long flags = 0;
-	u32 head;
-	u32 wqe_misc = 0;
-	u32 qsize;
-
-	if (nesqp->ibqp_state > IB_QPS_RTS)
-		return -EINVAL;
-
-	spin_lock_irqsave(&nesqp->lock, flags);
-
-	head = nesqp->hwqp.sq_head;
-	qsize = nesqp->hwqp.sq_tail;
-
-	/* Check for SQ overflow */
-	if (((head + (2 * qsize) - nesqp->hwqp.sq_tail) % qsize) == (qsize - 1)) {
-		spin_unlock_irqrestore(&nesqp->lock, flags);
-		return -ENOMEM;
-	}
-
-	wqe = &nesqp->hwqp.sq_vbase[head];
-	/* nes_debug(NES_DBG_MR, "processing sq wqe at %p, head = %u.\n", wqe, head); */
-	nes_fill_init_qp_wqe(wqe, nesqp, head);
-	u64temp = ibmw_bind->wr_id;
-	set_wqe_64bit_value(wqe->wqe_words, NES_IWARP_SQ_WQE_COMP_SCRATCH_LOW_IDX, u64temp);
-	wqe_misc = NES_IWARP_SQ_OP_BIND;
-
-	wqe_misc |= NES_IWARP_SQ_WQE_LOCAL_FENCE;
-
-	if (ibmw_bind->send_flags & IB_SEND_SIGNALED)
-		wqe_misc |= NES_IWARP_SQ_WQE_SIGNALED_COMPL;
-
-	if (ibmw_bind->bind_info.mw_access_flags & IB_ACCESS_REMOTE_WRITE)
-		wqe_misc |= NES_CQP_STAG_RIGHTS_REMOTE_WRITE;
-	if (ibmw_bind->bind_info.mw_access_flags & IB_ACCESS_REMOTE_READ)
-		wqe_misc |= NES_CQP_STAG_RIGHTS_REMOTE_READ;
-
-	set_wqe_32bit_value(wqe->wqe_words, NES_IWARP_SQ_WQE_MISC_IDX, wqe_misc);
-	set_wqe_32bit_value(wqe->wqe_words, NES_IWARP_SQ_BIND_WQE_MR_IDX,
-			    ibmw_bind->bind_info.mr->lkey);
-	set_wqe_32bit_value(wqe->wqe_words, NES_IWARP_SQ_BIND_WQE_MW_IDX, ibmw->rkey);
-	set_wqe_32bit_value(wqe->wqe_words, NES_IWARP_SQ_BIND_WQE_LENGTH_LOW_IDX,
-			ibmw_bind->bind_info.length);
-	wqe->wqe_words[NES_IWARP_SQ_BIND_WQE_LENGTH_HIGH_IDX] = 0;
-	u64temp = (u64)ibmw_bind->bind_info.addr;
-	set_wqe_64bit_value(wqe->wqe_words, NES_IWARP_SQ_BIND_WQE_VA_FBO_LOW_IDX, u64temp);
-
-	head++;
-	if (head >= qsize)
-		head = 0;
-
-	nesqp->hwqp.sq_head = head;
-	barrier();
-
-	nes_write32(nesdev->regs+NES_WQE_ALLOC,
-			(1 << 24) | 0x00800000 | nesqp->hwqp.qp_id);
-
-	spin_unlock_irqrestore(&nesqp->lock, flags);
-
-	return 0;
-}
-
-
 /*
  * nes_alloc_fast_mr
  */
@@ -3836,7 +3762,6 @@ struct nes_ib_device *nes_init_ofa_device(struct net_device *netdev)
 	nesibdev->ibdev.dereg_mr = nes_dereg_mr;
 	nesibdev->ibdev.alloc_mw = nes_alloc_mw;
 	nesibdev->ibdev.dealloc_mw = nes_dealloc_mw;
-	nesibdev->ibdev.bind_mw = nes_bind_mw;
 
 	nesibdev->ibdev.alloc_mr = nes_alloc_mr;
 	nesibdev->ibdev.map_mr_sg = nes_map_mr_sg;
diff --git a/drivers/staging/rdma/amso1100/c2_cq.c b/drivers/staging/rdma/amso1100/c2_cq.c
index 3ef881f..7ad0c08 100644
--- a/drivers/staging/rdma/amso1100/c2_cq.c
+++ b/drivers/staging/rdma/amso1100/c2_cq.c
@@ -173,9 +173,6 @@ static inline int c2_poll_one(struct c2_dev *c2dev,
 	case C2_WR_TYPE_RDMA_READ:
 		entry->opcode = IB_WC_RDMA_READ;
 		break;
-	case C2_WR_TYPE_BIND_MW:
-		entry->opcode = IB_WC_BIND_MW;
-		break;
 	case C2_WR_TYPE_RECV:
 		entry->byte_len = be32_to_cpu(ce->bytes_rcvd);
 		entry->opcode = IB_WC_RECV;
diff --git a/drivers/staging/rdma/ehca/ehca_iverbs.h b/drivers/staging/rdma/ehca/ehca_iverbs.h
index 219f635..630416d 100644
--- a/drivers/staging/rdma/ehca/ehca_iverbs.h
+++ b/drivers/staging/rdma/ehca/ehca_iverbs.h
@@ -87,9 +87,6 @@ int ehca_dereg_mr(struct ib_mr *mr);
 
 struct ib_mw *ehca_alloc_mw(struct ib_pd *pd, enum ib_mw_type type);
 
-int ehca_bind_mw(struct ib_qp *qp, struct ib_mw *mw,
-		 struct ib_mw_bind *mw_bind);
-
 int ehca_dealloc_mw(struct ib_mw *mw);
 
 struct ib_fmr *ehca_alloc_fmr(struct ib_pd *pd,
diff --git a/drivers/staging/rdma/ehca/ehca_main.c b/drivers/staging/rdma/ehca/ehca_main.c
index ba023426..915e924 100644
--- a/drivers/staging/rdma/ehca/ehca_main.c
+++ b/drivers/staging/rdma/ehca/ehca_main.c
@@ -514,7 +514,6 @@ static int ehca_init_device(struct ehca_shca *shca)
 	shca->ib_device.reg_user_mr	    = ehca_reg_user_mr;
 	shca->ib_device.dereg_mr	    = ehca_dereg_mr;
 	shca->ib_device.alloc_mw	    = ehca_alloc_mw;
-	shca->ib_device.bind_mw		    = ehca_bind_mw;
 	shca->ib_device.dealloc_mw	    = ehca_dealloc_mw;
 	shca->ib_device.alloc_fmr	    = ehca_alloc_fmr;
 	shca->ib_device.map_phys_fmr	    = ehca_map_phys_fmr;
diff --git a/drivers/staging/rdma/ehca/ehca_mrmw.c b/drivers/staging/rdma/ehca/ehca_mrmw.c
index 1c1a8dd..c6e3245 100644
--- a/drivers/staging/rdma/ehca/ehca_mrmw.c
+++ b/drivers/staging/rdma/ehca/ehca_mrmw.c
@@ -413,18 +413,6 @@ alloc_mw_exit0:
 
 /*----------------------------------------------------------------------*/
 
-int ehca_bind_mw(struct ib_qp *qp,
-		 struct ib_mw *mw,
-		 struct ib_mw_bind *mw_bind)
-{
-	/* TODO: not supported up to now */
-	ehca_gen_err("bind MW currently not supported by HCAD");
-
-	return -EPERM;
-} /* end ehca_bind_mw() */
-
-/*----------------------------------------------------------------------*/
-
 int ehca_dealloc_mw(struct ib_mw *mw)
 {
 	u64 h_ret;
diff --git a/drivers/staging/rdma/ehca/ehca_reqs.c b/drivers/staging/rdma/ehca/ehca_reqs.c
index 10e2074..11813b8 100644
--- a/drivers/staging/rdma/ehca/ehca_reqs.c
+++ b/drivers/staging/rdma/ehca/ehca_reqs.c
@@ -614,7 +614,6 @@ int ehca_post_srq_recv(struct ib_srq *srq,
 static const u8 ib_wc_opcode[255] = {
 	[0x01] = IB_WC_RECV+1,
 	[0x02] = IB_WC_RECV_RDMA_WITH_IMM+1,
-	[0x04] = IB_WC_BIND_MW+1,
 	[0x08] = IB_WC_FETCH_ADD+1,
 	[0x10] = IB_WC_COMP_SWAP+1,
 	[0x20] = IB_WC_RDMA_WRITE+1,
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index a3402af..1b2412b 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -704,7 +704,6 @@ enum ib_wc_opcode {
 	IB_WC_RDMA_READ,
 	IB_WC_COMP_SWAP,
 	IB_WC_FETCH_ADD,
-	IB_WC_BIND_MW,
 	IB_WC_LSO,
 	IB_WC_LOCAL_INV,
 	IB_WC_REG_MR,
@@ -997,7 +996,6 @@ enum ib_wr_opcode {
 	IB_WR_REG_MR,
 	IB_WR_MASKED_ATOMIC_CMP_AND_SWP,
 	IB_WR_MASKED_ATOMIC_FETCH_AND_ADD,
-	IB_WR_BIND_MW,
 	IB_WR_REG_SIG_MR,
 	/* reserve values for low level drivers' internal use.
 	 * These values will not be used at all in the ib core layer.
@@ -1032,23 +1030,6 @@ struct ib_sge {
 	u32	lkey;
 };
 
-/**
- * struct ib_mw_bind_info - Parameters for a memory window bind operation.
- * @mr: A memory region to bind the memory window to.
- * @addr: The address where the memory window should begin.
- * @length: The length of the memory window, in bytes.
- * @mw_access_flags: Access flags from enum ib_access_flags for the window.
- *
- * This struct contains the shared parameters for type 1 and type 2
- * memory window bind operations.
- */
-struct ib_mw_bind_info {
-	struct ib_mr   *mr;
-	u64		addr;
-	u64		length;
-	int		mw_access_flags;
-};
-
 struct ib_send_wr {
 	struct ib_send_wr      *next;
 	u64			wr_id;
@@ -1117,19 +1098,6 @@ static inline struct ib_reg_wr *reg_wr(struct ib_send_wr *wr)
 	return container_of(wr, struct ib_reg_wr, wr);
 }
 
-struct ib_bind_mw_wr {
-	struct ib_send_wr	wr;
-	struct ib_mw		*mw;
-	/* The new rkey for the memory window. */
-	u32			rkey;
-	struct ib_mw_bind_info	bind_info;
-};
-
-static inline struct ib_bind_mw_wr *bind_mw_wr(struct ib_send_wr *wr)
-{
-	return container_of(wr, struct ib_bind_mw_wr, wr);
-}
-
 struct ib_sig_handover_wr {
 	struct ib_send_wr	wr;
 	struct ib_sig_attrs    *sig_attrs;
@@ -1176,18 +1144,6 @@ enum ib_mr_rereg_flags {
 	IB_MR_REREG_SUPPORTED	= ((IB_MR_REREG_ACCESS << 1) - 1)
 };
 
-/**
- * struct ib_mw_bind - Parameters for a type 1 memory window bind operation.
- * @wr_id:      Work request id.
- * @send_flags: Flags from ib_send_flags enum.
- * @bind_info:  More parameters of the bind operation.
- */
-struct ib_mw_bind {
-	u64                    wr_id;
-	int                    send_flags;
-	struct ib_mw_bind_info bind_info;
-};
-
 struct ib_fmr_attr {
 	int	max_pages;
 	int	max_maps;
@@ -1708,9 +1664,6 @@ struct ib_device {
 						int sg_nents);
 	struct ib_mw *             (*alloc_mw)(struct ib_pd *pd,
 					       enum ib_mw_type type);
-	int                        (*bind_mw)(struct ib_qp *qp,
-					      struct ib_mw *mw,
-					      struct ib_mw_bind *mw_bind);
 	int                        (*dealloc_mw)(struct ib_mw *mw);
 	struct ib_fmr *	           (*alloc_fmr)(struct ib_pd *pd,
 						int mr_access_flags,
@@ -2867,42 +2820,6 @@ static inline u32 ib_inc_rkey(u32 rkey)
 }
 
 /**
- * ib_alloc_mw - Allocates a memory window.
- * @pd: The protection domain associated with the memory window.
- * @type: The type of the memory window (1 or 2).
- */
-struct ib_mw *ib_alloc_mw(struct ib_pd *pd, enum ib_mw_type type);
-
-/**
- * ib_bind_mw - Posts a work request to the send queue of the specified
- *   QP, which binds the memory window to the given address range and
- *   remote access attributes.
- * @qp: QP to post the bind work request on.
- * @mw: The memory window to bind.
- * @mw_bind: Specifies information about the memory window, including
- *   its address range, remote access rights, and associated memory region.
- *
- * If there is no immediate error, the function will update the rkey member
- * of the mw parameter to its new value. The bind operation can still fail
- * asynchronously.
- */
-static inline int ib_bind_mw(struct ib_qp *qp,
-			     struct ib_mw *mw,
-			     struct ib_mw_bind *mw_bind)
-{
-	/* XXX reference counting in corresponding MR? */
-	return mw->device->bind_mw ?
-		mw->device->bind_mw(qp, mw, mw_bind) :
-		-ENOSYS;
-}
-
-/**
- * ib_dealloc_mw - Deallocates a memory window.
- * @mw: The memory window to deallocate.
- */
-int ib_dealloc_mw(struct ib_mw *mw);
-
-/**
  * ib_alloc_fmr - Allocates a unmapped fast memory region.
  * @pd: The protection domain associated with the unmapped region.
  * @mr_access_flags: Specifies the memory access rights.
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 05/11] cxgb3: simplify iwch_get_dma_mr
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
                     ` (3 preceding siblings ...)
  2015-11-22 17:46   ` [PATCH 04/11] IB: remove in-kernel support for memory windows Christoph Hellwig
@ 2015-11-22 17:46   ` Christoph Hellwig
  2015-11-22 17:46   ` [PATCH 06/11] nes: simplify nes_reg_phys_mr calling conventions Christoph Hellwig
                     ` (7 subsequent siblings)
  12 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Fold simplified versions of build_phys_page_list and
iwch_register_phys_mem into iwch_get_dma_mr now that no other callers
are left.

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
 drivers/infiniband/hw/cxgb3/iwch_mem.c      | 71 ----------------------------
 drivers/infiniband/hw/cxgb3/iwch_provider.c | 73 ++++++++++-------------------
 drivers/infiniband/hw/cxgb3/iwch_provider.h |  8 ----
 3 files changed, 26 insertions(+), 126 deletions(-)

diff --git a/drivers/infiniband/hw/cxgb3/iwch_mem.c b/drivers/infiniband/hw/cxgb3/iwch_mem.c
index 3a5e27d..1d04c87 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_mem.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_mem.c
@@ -99,74 +99,3 @@ int iwch_write_pbl(struct iwch_mr *mhp, __be64 *pages, int npages, int offset)
 	return cxio_write_pbl(&mhp->rhp->rdev, pages,
 			      mhp->attr.pbl_addr + (offset << 3), npages);
 }
-
-int build_phys_page_list(struct ib_phys_buf *buffer_list,
-					int num_phys_buf,
-					u64 *iova_start,
-					u64 *total_size,
-					int *npages,
-					int *shift,
-					__be64 **page_list)
-{
-	u64 mask;
-	int i, j, n;
-
-	mask = 0;
-	*total_size = 0;
-	for (i = 0; i < num_phys_buf; ++i) {
-		if (i != 0 && buffer_list[i].addr & ~PAGE_MASK)
-			return -EINVAL;
-		if (i != 0 && i != num_phys_buf - 1 &&
-		    (buffer_list[i].size & ~PAGE_MASK))
-			return -EINVAL;
-		*total_size += buffer_list[i].size;
-		if (i > 0)
-			mask |= buffer_list[i].addr;
-		else
-			mask |= buffer_list[i].addr & PAGE_MASK;
-		if (i != num_phys_buf - 1)
-			mask |= buffer_list[i].addr + buffer_list[i].size;
-		else
-			mask |= (buffer_list[i].addr + buffer_list[i].size +
-				PAGE_SIZE - 1) & PAGE_MASK;
-	}
-
-	if (*total_size > 0xFFFFFFFFULL)
-		return -ENOMEM;
-
-	/* Find largest page shift we can use to cover buffers */
-	for (*shift = PAGE_SHIFT; *shift < 27; ++(*shift))
-		if ((1ULL << *shift) & mask)
-			break;
-
-	buffer_list[0].size += buffer_list[0].addr & ((1ULL << *shift) - 1);
-	buffer_list[0].addr &= ~0ull << *shift;
-
-	*npages = 0;
-	for (i = 0; i < num_phys_buf; ++i)
-		*npages += (buffer_list[i].size +
-			(1ULL << *shift) - 1) >> *shift;
-
-	if (!*npages)
-		return -EINVAL;
-
-	*page_list = kmalloc(sizeof(u64) * *npages, GFP_KERNEL);
-	if (!*page_list)
-		return -ENOMEM;
-
-	n = 0;
-	for (i = 0; i < num_phys_buf; ++i)
-		for (j = 0;
-		     j < (buffer_list[i].size + (1ULL << *shift) - 1) >> *shift;
-		     ++j)
-			(*page_list)[n++] = cpu_to_be64(buffer_list[i].addr +
-			    ((u64) j << *shift));
-
-	PDBG("%s va 0x%llx mask 0x%llx shift %d len %lld pbl_size %d\n",
-	     __func__, (unsigned long long) *iova_start,
-	     (unsigned long long) mask, *shift, (unsigned long long) *total_size,
-	     *npages);
-
-	return 0;
-
-}
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index b184933..097eb93 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -479,24 +479,25 @@ static int iwch_dereg_mr(struct ib_mr *ib_mr)
 	return 0;
 }
 
-static struct ib_mr *iwch_register_phys_mem(struct ib_pd *pd,
-					struct ib_phys_buf *buffer_list,
-					int num_phys_buf,
-					int acc,
-					u64 *iova_start)
+static struct ib_mr *iwch_get_dma_mr(struct ib_pd *pd, int acc)
 {
-	__be64 *page_list;
-	int shift;
-	u64 total_size;
-	int npages;
-	struct iwch_dev *rhp;
-	struct iwch_pd *php;
+	const u64 total_size = 0xffffffff;
+	const u64 mask = (total_size + PAGE_SIZE - 1) & PAGE_MASK;
+	struct iwch_pd *php = to_iwch_pd(pd);
+	struct iwch_dev *rhp = php->rhp;
 	struct iwch_mr *mhp;
-	int ret;
+	__be64 *page_list;
+	int shift = 26, npages, ret, i;
 
 	PDBG("%s ib_pd %p\n", __func__, pd);
-	php = to_iwch_pd(pd);
-	rhp = php->rhp;
+
+	/*
+	 * T3 only supports 32 bits of size.
+	 */
+	if (sizeof(phys_addr_t) > 4) {
+		pr_warn_once(MOD "Cannot support dma_mrs on this platform.\n");
+		return ERR_PTR(-ENOTSUPP);
+	}
 
 	mhp = kzalloc(sizeof(*mhp), GFP_KERNEL);
 	if (!mhp)
@@ -504,22 +505,23 @@ static struct ib_mr *iwch_register_phys_mem(struct ib_pd *pd,
 
 	mhp->rhp = rhp;
 
-	/* First check that we have enough alignment */
-	if ((*iova_start & ~PAGE_MASK) != (buffer_list[0].addr & ~PAGE_MASK)) {
+	npages = (total_size + (1ULL << shift) - 1) >> shift;
+	if (!npages) {
 		ret = -EINVAL;
 		goto err;
 	}
 
-	if (num_phys_buf > 1 &&
-	    ((buffer_list[0].addr + buffer_list[0].size) & ~PAGE_MASK)) {
-		ret = -EINVAL;
+	page_list = kmalloc_array(npages, sizeof(u64), GFP_KERNEL);
+	if (!page_list) {
+		ret = -ENOMEM;
 		goto err;
 	}
 
-	ret = build_phys_page_list(buffer_list, num_phys_buf, iova_start,
-				   &total_size, &npages, &shift, &page_list);
-	if (ret)
-		goto err;
+	for (i = 0; i < npages; i++)
+		page_list[i] = cpu_to_be64((u64)i << shift);
+
+	PDBG("%s mask 0x%llx shift %d len %lld pbl_size %d\n",
+		__func__, mask, shift, total_size, npages);
 
 	ret = iwch_alloc_pbl(mhp, npages);
 	if (ret) {
@@ -536,7 +538,7 @@ static struct ib_mr *iwch_register_phys_mem(struct ib_pd *pd,
 	mhp->attr.zbva = 0;
 
 	mhp->attr.perms = iwch_ib_to_tpt_access(acc);
-	mhp->attr.va_fbo = *iova_start;
+	mhp->attr.va_fbo = 0;
 	mhp->attr.page_size = shift - 12;
 
 	mhp->attr.len = (u32) total_size;
@@ -553,7 +555,6 @@ err_pbl:
 err:
 	kfree(mhp);
 	return ERR_PTR(ret);
-
 }
 
 static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
@@ -659,28 +660,6 @@ err:
 	return ERR_PTR(err);
 }
 
-static struct ib_mr *iwch_get_dma_mr(struct ib_pd *pd, int acc)
-{
-	struct ib_phys_buf bl;
-	u64 kva;
-	struct ib_mr *ibmr;
-
-	PDBG("%s ib_pd %p\n", __func__, pd);
-
-	/*
-	 * T3 only supports 32 bits of size.
-	 */
-	if (sizeof(phys_addr_t) > 4) {
-		pr_warn_once(MOD "Cannot support dma_mrs on this platform.\n");
-		return ERR_PTR(-ENOTSUPP);
-	}
-	bl.size = 0xffffffff;
-	bl.addr = 0;
-	kva = 0;
-	ibmr = iwch_register_phys_mem(pd, &bl, 1, acc, &kva);
-	return ibmr;
-}
-
 static struct ib_mw *iwch_alloc_mw(struct ib_pd *pd, enum ib_mw_type type)
 {
 	struct iwch_dev *rhp;
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.h b/drivers/infiniband/hw/cxgb3/iwch_provider.h
index f24df44..252c464 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.h
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.h
@@ -341,14 +341,6 @@ int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php,
 int iwch_alloc_pbl(struct iwch_mr *mhp, int npages);
 void iwch_free_pbl(struct iwch_mr *mhp);
 int iwch_write_pbl(struct iwch_mr *mhp, __be64 *pages, int npages, int offset);
-int build_phys_page_list(struct ib_phys_buf *buffer_list,
-					int num_phys_buf,
-					u64 *iova_start,
-					u64 *total_size,
-					int *npages,
-					int *shift,
-					__be64 **page_list);
-
 
 #define IWCH_NODE_DESC "cxgb3 Chelsio Communications"
 
-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 06/11] nes: simplify nes_reg_phys_mr calling conventions
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
                     ` (4 preceding siblings ...)
  2015-11-22 17:46   ` [PATCH 05/11] cxgb3: simplify iwch_get_dma_mr Christoph Hellwig

@ 2015-11-22 17:46   ` Christoph Hellwig
  2015-11-22 17:46   ` [PATCH 07/11] amso1100: fold c2_reg_phys_mr into c2_get_dma_mr Christoph Hellwig
                     ` (6 subsequent siblings)
  12 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Just pass an address/size pair instead of an ib_phys_buf array.

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
 drivers/infiniband/hw/nes/nes_cm.c    |  10 +--
 drivers/infiniband/hw/nes/nes_verbs.c | 140 ++++++++--------------------------
 drivers/infiniband/hw/nes/nes_verbs.h |   3 +-
 3 files changed, 37 insertions(+), 116 deletions(-)

diff --git a/drivers/infiniband/hw/nes/nes_cm.c b/drivers/infiniband/hw/nes/nes_cm.c
index 242c87d..bc37adb 100644
--- a/drivers/infiniband/hw/nes/nes_cm.c
+++ b/drivers/infiniband/hw/nes/nes_cm.c
@@ -3232,7 +3232,6 @@ int nes_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 	int passive_state;
 	struct nes_ib_device *nesibdev;
 	struct ib_mr *ibmr = NULL;
-	struct ib_phys_buf ibphysbuf;
 	struct nes_pd *nespd;
 	u64 tagged_offset;
 	u8 mpa_frame_offset = 0;
@@ -3316,12 +3315,11 @@ int nes_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 		u64temp = (unsigned long)nesqp;
 		nesibdev = nesvnic->nesibdev;
 		nespd = nesqp->nespd;
-		ibphysbuf.addr = nesqp->ietf_frame_pbase + mpa_frame_offset;
-		ibphysbuf.size = buff_len;
 		tagged_offset = (u64)(unsigned long)*start_buff;
-		ibmr = nes_reg_phys_mr(&nespd->ibpd, &ibphysbuf, 1,
-					IB_ACCESS_LOCAL_WRITE,
-					&tagged_offset);
+		ibmr = nes_reg_phys_mr(&nespd->ibpd,
+				nesqp->ietf_frame_pbase + mpa_frame_offset,
+				buff_len, IB_ACCESS_LOCAL_WRITE,
+				&tagged_offset);
 		if (!ibmr) {
 			nes_debug(NES_DBG_CM, "Unable to register memory region"
 				  "for lSMM for cm_node = %p \n",
diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c
index 640f68f..4e57bf0 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.c
+++ b/drivers/infiniband/hw/nes/nes_verbs.c
@@ -1945,9 +1945,8 @@ static int nes_reg_mr(struct nes_device *nesdev, struct nes_pd *nespd,
 /**
  * nes_reg_phys_mr
  */
-struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
-		struct ib_phys_buf *buffer_list, int num_phys_buf, int acc,
-		u64 * iova_start)
+struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd, u64 addr, u64 size,
+		int acc, u64 *iova_start)
 {
 	u64 region_length;
 	struct nes_pd *nespd = to_nespd(ib_pd);
@@ -1959,13 +1958,10 @@ struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
 	struct nes_vpbl vpbl;
 	struct nes_root_vpbl root_vpbl;
 	u32 stag;
-	u32 i;
 	unsigned long mask;
 	u32 stag_index = 0;
 	u32 next_stag_index = 0;
 	u32 driver_key = 0;
-	u32 root_pbl_index = 0;
-	u32 cur_pbl_index = 0;
 	int err = 0;
 	int ret = 0;
 	u16 pbl_count = 0;
@@ -1984,11 +1980,8 @@ struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
 
 	next_stag_index >>= 8;
 	next_stag_index %= nesadapter->max_mr;
-	if (num_phys_buf > (1024*512)) {
-		return ERR_PTR(-E2BIG);
-	}
 
-	if ((buffer_list[0].addr ^ *iova_start) & ~PAGE_MASK)
+	if ((addr ^ *iova_start) & ~PAGE_MASK)
 		return ERR_PTR(-EINVAL);
 
 	err = nes_alloc_resource(nesadapter, nesadapter->allocated_mrs, nesadapter->max_mr,
@@ -2003,84 +1996,33 @@ struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
 		return ERR_PTR(-ENOMEM);
 	}
 
-	for (i = 0; i < num_phys_buf; i++) {
+	/* Allocate a 4K buffer for the PBL */
+	vpbl.pbl_vbase = pci_alloc_consistent(nesdev->pcidev, 4096,
+			&vpbl.pbl_pbase);
+	nes_debug(NES_DBG_MR, "Allocating leaf PBL, va = %p, pa = 0x%016lX\n",
+			vpbl.pbl_vbase, (unsigned long)vpbl.pbl_pbase);
+	if (!vpbl.pbl_vbase) {
+		nes_free_resource(nesadapter, nesadapter->allocated_mrs, stag_index);
+		ibmr = ERR_PTR(-ENOMEM);
+		kfree(nesmr);
+		goto reg_phys_err;
+	}
 
-		if ((i & 0x01FF) == 0) {
-			if (root_pbl_index == 1) {
-				/* Allocate the root PBL */
-				root_vpbl.pbl_vbase = pci_alloc_consistent(nesdev->pcidev, 8192,
-						&root_vpbl.pbl_pbase);
-				nes_debug(NES_DBG_MR, "Allocating root PBL, va = %p, pa = 0x%08X\n",
-						root_vpbl.pbl_vbase, (unsigned int)root_vpbl.pbl_pbase);
-				if (!root_vpbl.pbl_vbase) {
-					pci_free_consistent(nesdev->pcidev, 4096, vpbl.pbl_vbase,
-							vpbl.pbl_pbase);
-					nes_free_resource(nesadapter, nesadapter->allocated_mrs, stag_index);
-					kfree(nesmr);
-					return ERR_PTR(-ENOMEM);
-				}
-				root_vpbl.leaf_vpbl = kzalloc(sizeof(*root_vpbl.leaf_vpbl)*1024, GFP_KERNEL);
-				if (!root_vpbl.leaf_vpbl) {
-					pci_free_consistent(nesdev->pcidev, 8192, root_vpbl.pbl_vbase,
-							root_vpbl.pbl_pbase);
-					pci_free_consistent(nesdev->pcidev, 4096, vpbl.pbl_vbase,
-							vpbl.pbl_pbase);
-					nes_free_resource(nesadapter, nesadapter->allocated_mrs, stag_index);
-					kfree(nesmr);
-					return ERR_PTR(-ENOMEM);
-				}
-				root_vpbl.pbl_vbase[0].pa_low = cpu_to_le32((u32)vpbl.pbl_pbase);
-				root_vpbl.pbl_vbase[0].pa_high =
-						cpu_to_le32((u32)((((u64)vpbl.pbl_pbase) >> 32)));
-				root_vpbl.leaf_vpbl[0] = vpbl;
-			}
-			/* Allocate a 4K buffer for the PBL */
-			vpbl.pbl_vbase = pci_alloc_consistent(nesdev->pcidev, 4096,
-					&vpbl.pbl_pbase);
-			nes_debug(NES_DBG_MR, "Allocating leaf PBL, va = %p, pa = 0x%016lX\n",
-					vpbl.pbl_vbase, (unsigned long)vpbl.pbl_pbase);
-			if (!vpbl.pbl_vbase) {
-				nes_free_resource(nesadapter, nesadapter->allocated_mrs, stag_index);
-				ibmr = ERR_PTR(-ENOMEM);
-				kfree(nesmr);
-				goto reg_phys_err;
-			}
-			/* Fill in the root table */
-			if (1 <= root_pbl_index) {
-				root_vpbl.pbl_vbase[root_pbl_index].pa_low =
-						cpu_to_le32((u32)vpbl.pbl_pbase);
-				root_vpbl.pbl_vbase[root_pbl_index].pa_high =
-						cpu_to_le32((u32)((((u64)vpbl.pbl_pbase) >> 32)));
-				root_vpbl.leaf_vpbl[root_pbl_index] = vpbl;
-			}
-			root_pbl_index++;
-			cur_pbl_index = 0;
-		}
 
-		mask = !buffer_list[i].size;
-		if (i != 0)
-			mask |= buffer_list[i].addr;
-		if (i != num_phys_buf - 1)
-			mask |= buffer_list[i].addr + buffer_list[i].size;
-
-		if (mask & ~PAGE_MASK) {
-			nes_free_resource(nesadapter, nesadapter->allocated_mrs, stag_index);
-			nes_debug(NES_DBG_MR, "Invalid buffer addr or size\n");
-			ibmr = ERR_PTR(-EINVAL);
-			kfree(nesmr);
-			goto reg_phys_err;
-		}
+	mask = !size;
 
-		region_length += buffer_list[i].size;
-		if ((i != 0) && (single_page)) {
-			if ((buffer_list[i-1].addr+PAGE_SIZE) != buffer_list[i].addr)
-				single_page = 0;
-		}
-		vpbl.pbl_vbase[cur_pbl_index].pa_low = cpu_to_le32((u32)buffer_list[i].addr & PAGE_MASK);
-		vpbl.pbl_vbase[cur_pbl_index++].pa_high =
-				cpu_to_le32((u32)((((u64)buffer_list[i].addr) >> 32)));
+	if (mask & ~PAGE_MASK) {
+		nes_free_resource(nesadapter, nesadapter->allocated_mrs, stag_index);
+		nes_debug(NES_DBG_MR, "Invalid buffer addr or size\n");
+		ibmr = ERR_PTR(-EINVAL);
+		kfree(nesmr);
+		goto reg_phys_err;
 	}
 
+	region_length += size;
+	vpbl.pbl_vbase[0].pa_low = cpu_to_le32((u32)addr & PAGE_MASK);
+	vpbl.pbl_vbase[0].pa_high = cpu_to_le32((u32)((((u64)addr) >> 32)));
+
 	stag = stag_index << 8;
 	stag |= driver_key;
 	stag += (u32)stag_key;
@@ -2090,17 +2032,15 @@ struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
 			stag, (unsigned long)*iova_start, (unsigned long)region_length, stag_index);
 
 	/* Make the leaf PBL the root if only one PBL */
-	if (root_pbl_index == 1) {
-		root_vpbl.pbl_pbase = vpbl.pbl_pbase;
-	}
+	root_vpbl.pbl_pbase = vpbl.pbl_pbase;
 
 	if (single_page) {
 		pbl_count = 0;
 	} else {
-		pbl_count = root_pbl_index;
+		pbl_count = 1;
 	}
 	ret = nes_reg_mr(nesdev, nespd, stag, region_length, &root_vpbl,
-			buffer_list[0].addr, pbl_count, (u16)cur_pbl_index, acc, iova_start,
+			addr, pbl_count, 1, acc, iova_start,
 			&nesmr->pbls_used, &nesmr->pbl_4k);
 
 	if (ret == 0) {
@@ -2113,21 +2053,9 @@ struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
 		ibmr = ERR_PTR(-ENOMEM);
 	}
 
-	reg_phys_err:
-	/* free the resources */
-	if (root_pbl_index == 1) {
-		/* single PBL case */
-		pci_free_consistent(nesdev->pcidev, 4096, vpbl.pbl_vbase, vpbl.pbl_pbase);
-	} else {
-		for (i=0; i<root_pbl_index; i++) {
-			pci_free_consistent(nesdev->pcidev, 4096, root_vpbl.leaf_vpbl[i].pbl_vbase,
-					root_vpbl.leaf_vpbl[i].pbl_pbase);
-		}
-		kfree(root_vpbl.leaf_vpbl);
-		pci_free_consistent(nesdev->pcidev, 8192, root_vpbl.pbl_vbase,
-				root_vpbl.pbl_pbase);
-	}
-
+reg_phys_err:
+	/* single PBL case */
+	pci_free_consistent(nesdev->pcidev, 4096, vpbl.pbl_vbase, vpbl.pbl_pbase);
 	return ibmr;
 }
 
@@ -2137,17 +2065,13 @@ struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
  */
 static struct ib_mr *nes_get_dma_mr(struct ib_pd *pd, int acc)
 {
-	struct ib_phys_buf bl;
 	u64 kva = 0;
 
 	nes_debug(NES_DBG_MR, "\n");
 
-	bl.size = (u64)0xffffffffffULL;
-	bl.addr = 0;
-	return nes_reg_phys_mr(pd, &bl, 1, acc, &kva);
+	return nes_reg_phys_mr(pd, 0, 0xffffffffffULL, acc, &kva);
 }
 
-
 /**
  * nes_reg_user_mr
  */
diff --git a/drivers/infiniband/hw/nes/nes_verbs.h b/drivers/infiniband/hw/nes/nes_verbs.h
index 38e38cf..7029088 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.h
+++ b/drivers/infiniband/hw/nes/nes_verbs.h
@@ -192,7 +192,6 @@ struct nes_qp {
 };
 
 struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
-		struct ib_phys_buf *buffer_list, int num_phys_buf, int acc,
-		u64 * iova_start);
+		u64 addr, u64 size, int acc, u64 *iova_start);
 
 #endif			/* NES_VERBS_H */
-- 
1.9.1


* [PATCH 07/11] amso1100: fold c2_reg_phys_mr into c2_get_dma_mr
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
                     ` (5 preceding siblings ...)
  2015-11-22 17:46   ` [PATCH 06/11] nes: simplify nes_reg_phys_mr calling conventions Christoph Hellwig
@ 2015-11-22 17:46   ` Christoph Hellwig
  2015-11-22 17:46   ` [PATCH 08/11] ehca: stop using struct ib_phys_buf Christoph Hellwig
                     ` (5 subsequent siblings)
  12 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
 drivers/staging/rdma/amso1100/c2_provider.c | 67 ++++++-----------------------
 1 file changed, 12 insertions(+), 55 deletions(-)

diff --git a/drivers/staging/rdma/amso1100/c2_provider.c b/drivers/staging/rdma/amso1100/c2_provider.c
index 6c3e9cc..2de5a8f 100644
--- a/drivers/staging/rdma/amso1100/c2_provider.c
+++ b/drivers/staging/rdma/amso1100/c2_provider.c
@@ -323,43 +323,21 @@ static inline u32 c2_convert_access(int acc)
 	    C2_ACF_LOCAL_READ | C2_ACF_WINDOW_BIND;
 }
 
-static struct ib_mr *c2_reg_phys_mr(struct ib_pd *ib_pd,
-				    struct ib_phys_buf *buffer_list,
-				    int num_phys_buf, int acc, u64 * iova_start)
+static struct ib_mr *c2_get_dma_mr(struct ib_pd *pd, int acc)
 {
 	struct c2_mr *mr;
 	u64 *page_list;
-	u32 total_len;
-	int err, i, j, k, page_shift, pbl_depth;
+	const u32 total_len = 0xffffffff;	/* AMSO1100 limit */
+	int err, page_shift, pbl_depth, i;
+	u64 kva = 0;
 
-	pbl_depth = 0;
-	total_len = 0;
+	pr_debug("%s:%u\n", __func__, __LINE__);
 
-	page_shift = PAGE_SHIFT;
 	/*
-	 * If there is only 1 buffer we assume this could
-	 * be a map of all phy mem...use a 32k page_shift.
+	 * This is a map of all phy mem...use a 32k page_shift.
 	 */
-	if (num_phys_buf == 1)
-		page_shift += 3;
-
-	for (i = 0; i < num_phys_buf; i++) {
-
-		if (offset_in_page(buffer_list[i].addr)) {
-			pr_debug("Unaligned Memory Buffer: 0x%x\n",
-				(unsigned int) buffer_list[i].addr);
-			return ERR_PTR(-EINVAL);
-		}
-
-		if (!buffer_list[i].size) {
-			pr_debug("Invalid Buffer Size\n");
-			return ERR_PTR(-EINVAL);
-		}
-
-		total_len += buffer_list[i].size;
-		pbl_depth += ALIGN(buffer_list[i].size,
-				   BIT(page_shift)) >> page_shift;
-	}
+	page_shift = PAGE_SHIFT + 3;
+	pbl_depth = ALIGN(total_len, BIT(page_shift)) >> page_shift;
 
 	page_list = vmalloc(sizeof(u64) * pbl_depth);
 	if (!page_list) {
@@ -368,16 +346,8 @@ static struct ib_mr *c2_reg_phys_mr(struct ib_pd *ib_pd,
 		return ERR_PTR(-ENOMEM);
 	}
 
-	for (i = 0, j = 0; i < num_phys_buf; i++) {
-
-		int naddrs;
-
- 		naddrs = ALIGN(buffer_list[i].size,
-			       BIT(page_shift)) >> page_shift;
-		for (k = 0; k < naddrs; k++)
-			page_list[j++] = (buffer_list[i].addr +
-						     (k << page_shift));
-	}
+	for (i = 0; i < pbl_depth; i++)
+		page_list[i] = ((u64)i << page_shift);
 
 	mr = kmalloc(sizeof(*mr), GFP_KERNEL);
 	if (!mr) {
@@ -390,12 +360,12 @@ static struct ib_mr *c2_reg_phys_mr(struct ib_pd *ib_pd,
 	pr_debug("%s - page shift %d, pbl_depth %d, total_len %u, "
 		"*iova_start %llx, first pa %llx, last pa %llx\n",
 		__func__, page_shift, pbl_depth, total_len,
-		(unsigned long long) *iova_start,
+		(unsigned long long) kva,
 	       	(unsigned long long) page_list[0],
 	       	(unsigned long long) page_list[pbl_depth-1]);
   	err = c2_nsmr_register_phys_kern(to_c2dev(ib_pd->device), page_list,
 					 BIT(page_shift), pbl_depth,
-					 total_len, 0, iova_start,
+					 total_len, 0, &kva,
 					 c2_convert_access(acc), mr);
 	vfree(page_list);
 	if (err) {
@@ -406,19 +376,6 @@ static struct ib_mr *c2_reg_phys_mr(struct ib_pd *ib_pd,
 	return &mr->ibmr;
 }
 
-static struct ib_mr *c2_get_dma_mr(struct ib_pd *pd, int acc)
-{
-	struct ib_phys_buf bl;
-	u64 kva = 0;
-
-	pr_debug("%s:%u\n", __func__, __LINE__);
-
-	/* AMSO1100 limit */
-	bl.size = 0xffffffff;
-	bl.addr = 0;
-	return c2_reg_phys_mr(pd, &bl, 1, acc, &kva);
-}
-
 static struct ib_mr *c2_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 				    u64 virt, int acc, struct ib_udata *udata)
 {
-- 
1.9.1


* [PATCH 08/11] ehca: stop using struct ib_phys_buf
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
                     ` (6 preceding siblings ...)
  2015-11-22 17:46   ` [PATCH 07/11] amso1100: fold c2_reg_phys_mr into c2_get_dma_mr Christoph Hellwig
@ 2015-11-22 17:46   ` Christoph Hellwig
  2015-11-22 17:46   ` [PATCH 09/11] IB: remove the struct ib_phys_buf definition Christoph Hellwig
                     ` (4 subsequent siblings)
  12 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

And simplify the calling convention for full-memory registrations.

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
 drivers/staging/rdma/ehca/ehca_classes.h |  5 +-
 drivers/staging/rdma/ehca/ehca_mrmw.c    | 94 +++++++++++++++-----------------
 2 files changed, 46 insertions(+), 53 deletions(-)

diff --git a/drivers/staging/rdma/ehca/ehca_classes.h b/drivers/staging/rdma/ehca/ehca_classes.h
index bd45e0f..e8c3387 100644
--- a/drivers/staging/rdma/ehca/ehca_classes.h
+++ b/drivers/staging/rdma/ehca/ehca_classes.h
@@ -316,9 +316,8 @@ struct ehca_mr_pginfo {
 
 	union {
 		struct { /* type EHCA_MR_PGI_PHYS section */
-			int num_phys_buf;
-			struct ib_phys_buf *phys_buf_array;
-			u64 next_buf;
+			u64 addr;
+			u16 size;
 		} phy;
 		struct { /* type EHCA_MR_PGI_USER section */
 			struct ib_umem *region;
diff --git a/drivers/staging/rdma/ehca/ehca_mrmw.c b/drivers/staging/rdma/ehca/ehca_mrmw.c
index c6e3245..1814af7 100644
--- a/drivers/staging/rdma/ehca/ehca_mrmw.c
+++ b/drivers/staging/rdma/ehca/ehca_mrmw.c
@@ -1289,7 +1289,6 @@ int ehca_reg_internal_maxmr(
 	u64 *iova_start;
 	u64 size_maxmr;
 	struct ehca_mr_pginfo pginfo;
-	struct ib_phys_buf ib_pbuf;
 	u32 num_kpages;
 	u32 num_hwpages;
 	u64 hw_pgsize;
@@ -1310,8 +1309,6 @@ int ehca_reg_internal_maxmr(
 	/* register internal max-MR on HCA */
 	size_maxmr = ehca_mr_len;
 	iova_start = (u64 *)ehca_map_vaddr((void *)(KERNELBASE + PHYSICAL_START));
-	ib_pbuf.addr = 0;
-	ib_pbuf.size = size_maxmr;
 	num_kpages = NUM_CHUNKS(((u64)iova_start % PAGE_SIZE) + size_maxmr,
 				PAGE_SIZE);
 	hw_pgsize = ehca_get_max_hwpage_size(shca);
@@ -1323,8 +1320,8 @@ int ehca_reg_internal_maxmr(
 	pginfo.num_kpages = num_kpages;
 	pginfo.num_hwpages = num_hwpages;
 	pginfo.hwpage_size = hw_pgsize;
-	pginfo.u.phy.num_phys_buf = 1;
-	pginfo.u.phy.phys_buf_array = &ib_pbuf;
+	pginfo.u.phy.addr = 0;
+	pginfo.u.phy.size = size_maxmr;
 
 	ret = ehca_reg_mr(shca, e_mr, iova_start, size_maxmr, 0, e_pd,
 			  &pginfo, &e_mr->ib.ib_mr.lkey,
@@ -1620,57 +1617,54 @@ static int ehca_set_pagebuf_phys(struct ehca_mr_pginfo *pginfo,
 				 u32 number, u64 *kpage)
 {
 	int ret = 0;
-	struct ib_phys_buf *pbuf;
+	u64 addr = pginfo->u.phy.addr;
+	u64 size = pginfo->u.phy.size;
 	u64 num_hw, offs_hw;
 	u32 i = 0;
 
-	/* loop over desired phys_buf_array entries */
-	while (i < number) {
-		pbuf   = pginfo->u.phy.phys_buf_array + pginfo->u.phy.next_buf;
-		num_hw  = NUM_CHUNKS((pbuf->addr % pginfo->hwpage_size) +
-				     pbuf->size, pginfo->hwpage_size);
-		offs_hw = (pbuf->addr & ~(pginfo->hwpage_size - 1)) /
-			pginfo->hwpage_size;
-		while (pginfo->next_hwpage < offs_hw + num_hw) {
-			/* sanity check */
-			if ((pginfo->kpage_cnt >= pginfo->num_kpages) ||
-			    (pginfo->hwpage_cnt >= pginfo->num_hwpages)) {
-				ehca_gen_err("kpage_cnt >= num_kpages, "
-					     "kpage_cnt=%llx num_kpages=%llx "
-					     "hwpage_cnt=%llx "
-					     "num_hwpages=%llx i=%x",
-					     pginfo->kpage_cnt,
-					     pginfo->num_kpages,
-					     pginfo->hwpage_cnt,
-					     pginfo->num_hwpages, i);
-				return -EFAULT;
-			}
-			*kpage = (pbuf->addr & ~(pginfo->hwpage_size - 1)) +
-				 (pginfo->next_hwpage * pginfo->hwpage_size);
-			if ( !(*kpage) && pbuf->addr ) {
-				ehca_gen_err("pbuf->addr=%llx pbuf->size=%llx "
-					     "next_hwpage=%llx", pbuf->addr,
-					     pbuf->size, pginfo->next_hwpage);
-				return -EFAULT;
-			}
-			(pginfo->hwpage_cnt)++;
-			(pginfo->next_hwpage)++;
-			if (PAGE_SIZE >= pginfo->hwpage_size) {
-				if (pginfo->next_hwpage %
-				    (PAGE_SIZE / pginfo->hwpage_size) == 0)
-					(pginfo->kpage_cnt)++;
-			} else
-				pginfo->kpage_cnt += pginfo->hwpage_size /
-					PAGE_SIZE;
-			kpage++;
-			i++;
-			if (i >= number) break;
+	num_hw  = NUM_CHUNKS((addr % pginfo->hwpage_size) + size,
+				pginfo->hwpage_size);
+	offs_hw = (addr & ~(pginfo->hwpage_size - 1)) / pginfo->hwpage_size;
+
+	while (pginfo->next_hwpage < offs_hw + num_hw) {
+		/* sanity check */
+		if ((pginfo->kpage_cnt >= pginfo->num_kpages) ||
+		    (pginfo->hwpage_cnt >= pginfo->num_hwpages)) {
+			ehca_gen_err("kpage_cnt >= num_kpages, "
+				     "kpage_cnt=%llx num_kpages=%llx "
+				     "hwpage_cnt=%llx "
+				     "num_hwpages=%llx i=%x",
+				     pginfo->kpage_cnt,
+				     pginfo->num_kpages,
+				     pginfo->hwpage_cnt,
+				     pginfo->num_hwpages, i);
+			return -EFAULT;
 		}
-		if (pginfo->next_hwpage >= offs_hw + num_hw) {
-			(pginfo->u.phy.next_buf)++;
-			pginfo->next_hwpage = 0;
+		*kpage = (addr & ~(pginfo->hwpage_size - 1)) +
+			 (pginfo->next_hwpage * pginfo->hwpage_size);
+		if ( !(*kpage) && addr ) {
+			ehca_gen_err("addr=%llx size=%llx "
+				     "next_hwpage=%llx", addr,
+				     size, pginfo->next_hwpage);
+			return -EFAULT;
 		}
+		(pginfo->hwpage_cnt)++;
+		(pginfo->next_hwpage)++;
+		if (PAGE_SIZE >= pginfo->hwpage_size) {
+			if (pginfo->next_hwpage %
+			    (PAGE_SIZE / pginfo->hwpage_size) == 0)
+				(pginfo->kpage_cnt)++;
+		} else
+			pginfo->kpage_cnt += pginfo->hwpage_size /
+				PAGE_SIZE;
+		kpage++;
+		i++;
+		if (i >= number) break;
+	}
+	if (pginfo->next_hwpage >= offs_hw + num_hw) {
+		pginfo->next_hwpage = 0;
 	}
+
 	return ret;
 }
 
-- 
1.9.1


* [PATCH 09/11] IB: remove the struct ib_phys_buf definition
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
                     ` (7 preceding siblings ...)
  2015-11-22 17:46   ` [PATCH 08/11] ehca: stop using struct ib_phys_buf Christoph Hellwig
@ 2015-11-22 17:46   ` Christoph Hellwig
  2015-11-22 17:46   ` [PATCH 10/11] IB: only keep a single key in struct ib_mr Christoph Hellwig
                     ` (3 subsequent siblings)
  12 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
 include/rdma/ib_verbs.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 1b2412b..81e047e 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1128,11 +1128,6 @@ enum ib_access_flags {
 	IB_ACCESS_ON_DEMAND     = (1<<6),
 };
 
-struct ib_phys_buf {
-	u64      addr;
-	u64      size;
-};
-
 /*
  * XXX: these are apparently used for ->rereg_user_mr, no idea why they
  * are hidden here instead of a uapi header!
-- 
1.9.1


* [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
                     ` (8 preceding siblings ...)
  2015-11-22 17:46   ` [PATCH 09/11] IB: remove the struct ib_phys_buf definition Christoph Hellwig
@ 2015-11-22 17:46   ` Christoph Hellwig
       [not found]     ` <1448214409-7729-11-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
  2015-11-22 17:46   ` [PATCH 11/11] IB: provide better access flags for fast registrations Christoph Hellwig
                     ` (2 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

While IB supports the notion of returning separate local and remote keys
from a memory registration, the iWarp spec doesn't, and neither do any
of our in-tree HCA drivers [1] or consumers.  Consolidate the in-kernel
API to provide only a single key and make everyone's life easier.

[1] the EHCA driver, which is in the staging tree on its way out, can
    actually return two values from its thick firmware interface.
    I doubt they were ever different, though.

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
 drivers/infiniband/core/uverbs_cmd.c               |  8 +++---
 drivers/infiniband/core/verbs.c                    |  2 +-
 drivers/infiniband/hw/cxgb3/iwch_mem.c             |  2 +-
 drivers/infiniband/hw/cxgb3/iwch_provider.c        |  2 +-
 drivers/infiniband/hw/cxgb3/iwch_qp.c              |  2 +-
 drivers/infiniband/hw/cxgb4/mem.c                  |  4 +--
 drivers/infiniband/hw/cxgb4/qp.c                   |  2 +-
 drivers/infiniband/hw/mlx4/mr.c                    |  6 ++--
 drivers/infiniband/hw/mlx4/qp.c                    |  2 +-
 drivers/infiniband/hw/mlx5/mr.c                    | 11 +++-----
 drivers/infiniband/hw/mlx5/qp.c                    |  8 +++---
 drivers/infiniband/hw/mthca/mthca_av.c             |  2 +-
 drivers/infiniband/hw/mthca/mthca_cq.c             |  2 +-
 drivers/infiniband/hw/mthca/mthca_eq.c             |  2 +-
 drivers/infiniband/hw/mthca/mthca_mr.c             |  8 +++---
 drivers/infiniband/hw/mthca/mthca_provider.c       |  8 +++---
 drivers/infiniband/hw/mthca/mthca_qp.c             |  4 +--
 drivers/infiniband/hw/mthca/mthca_srq.c            |  4 +--
 drivers/infiniband/hw/nes/nes_cm.c                 |  2 +-
 drivers/infiniband/hw/nes/nes_verbs.c              | 33 ++++++++++------------
 drivers/infiniband/hw/ocrdma/ocrdma_verbs.c        | 13 +++------
 drivers/infiniband/hw/qib/qib_keys.c               |  2 +-
 drivers/infiniband/hw/qib/qib_mr.c                 |  3 +-
 drivers/infiniband/hw/usnic/usnic_ib_verbs.c       |  2 +-
 drivers/infiniband/ulp/iser/iser_memory.c          | 17 ++++-------
 drivers/infiniband/ulp/isert/ib_isert.c            | 13 +++------
 drivers/infiniband/ulp/srp/ib_srp.c                | 21 ++++++--------
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c |  4 +--
 include/rdma/ib_verbs.h                            |  7 ++---
 net/rds/iw.h                                       |  4 +--
 net/rds/iw_rdma.c                                  |  6 ++--
 net/rds/iw_send.c                                  |  3 +-
 net/sunrpc/xprtrdma/frwr_ops.c                     |  7 ++---
 net/sunrpc/xprtrdma/physical_ops.c                 |  2 +-
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c            |  9 +++---
 net/sunrpc/xprtrdma/svc_rdma_transport.c           |  2 +-
 36 files changed, 98 insertions(+), 131 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index b0f7872..f10b492 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -998,8 +998,8 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file,
 		goto err_unreg;
 
 	memset(&resp, 0, sizeof resp);
-	resp.lkey      = mr->lkey;
-	resp.rkey      = mr->rkey;
+	resp.lkey      = mr->key;
+	resp.rkey      = mr->key;
 	resp.mr_handle = uobj->id;
 
 	if (copy_to_user((void __user *) (unsigned long) cmd.response,
@@ -1108,8 +1108,8 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file,
 	}
 
 	memset(&resp, 0, sizeof(resp));
-	resp.lkey      = mr->lkey;
-	resp.rkey      = mr->rkey;
+	resp.lkey      = mr->key;
+	resp.rkey      = mr->key;
 
 	if (copy_to_user((void __user *)(unsigned long)cmd.response,
 			 &resp, sizeof(resp)))
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index b31beb1..751a017 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -251,7 +251,7 @@ struct ib_pd *ib_alloc_pd(struct ib_device *device)
 		}
 
 		pd->local_mr = mr;
-		pd->local_dma_lkey = pd->local_mr->lkey;
+		pd->local_dma_lkey = pd->local_mr->key;
 	}
 	return pd;
 }
diff --git a/drivers/infiniband/hw/cxgb3/iwch_mem.c b/drivers/infiniband/hw/cxgb3/iwch_mem.c
index 1d04c87..4a27978 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_mem.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_mem.c
@@ -47,7 +47,7 @@ static int iwch_finish_mem_reg(struct iwch_mr *mhp, u32 stag)
 	mhp->attr.state = 1;
 	mhp->attr.stag = stag;
 	mmid = stag >> 8;
-	mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
+	mhp->ibmr.key = stag;
 	PDBG("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
 	return insert_handle(mhp->rhp, &mhp->rhp->mmidr, mhp, mmid);
 }
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index 097eb93..02b80b1 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -754,7 +754,7 @@ static struct ib_mr *iwch_alloc_mr(struct ib_pd *pd,
 	mhp->attr.stag = stag;
 	mhp->attr.state = 1;
 	mmid = (stag) >> 8;
-	mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
+	mhp->ibmr.key = stag;
 	if (insert_handle(rhp, &rhp->mmidr, mhp, mmid))
 		goto err3;
 
diff --git a/drivers/infiniband/hw/cxgb3/iwch_qp.c b/drivers/infiniband/hw/cxgb3/iwch_qp.c
index d939980..87be4be 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_qp.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_qp.c
@@ -156,7 +156,7 @@ static int build_memreg(union t3_wr *wqe, struct ib_reg_wr *wr,
 	if (mhp->npages > T3_MAX_FASTREG_DEPTH)
 		return -EINVAL;
 	*wr_cnt = 1;
-	wqe->fastreg.stag = cpu_to_be32(wr->key);
+	wqe->fastreg.stag = cpu_to_be32(wr->mr->key);
 	wqe->fastreg.len = cpu_to_be32(mhp->ibmr.length);
 	wqe->fastreg.va_base_hi = cpu_to_be32(mhp->ibmr.iova >> 32);
 	wqe->fastreg.va_base_lo_fbo =
diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
index 1eb833a..f1e3973 100644
--- a/drivers/infiniband/hw/cxgb4/mem.c
+++ b/drivers/infiniband/hw/cxgb4/mem.c
@@ -364,7 +364,7 @@ static int finish_mem_reg(struct c4iw_mr *mhp, u32 stag)
 	mhp->attr.state = 1;
 	mhp->attr.stag = stag;
 	mmid = stag >> 8;
-	mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
+	mhp->ibmr.key = stag;
 	PDBG("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
 	return insert_handle(mhp->rhp, &mhp->rhp->mmidr, mhp, mmid);
 }
@@ -651,7 +651,7 @@ struct ib_mr *c4iw_alloc_mr(struct ib_pd *pd,
 	mhp->attr.stag = stag;
 	mhp->attr.state = 1;
 	mmid = (stag) >> 8;
-	mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
+	mhp->ibmr.key = stag;
 	if (insert_handle(rhp, &rhp->mmidr, mhp, mmid)) {
 		ret = -ENOMEM;
 		goto err3;
diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
index e99345e..cb8031b 100644
--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -624,7 +624,7 @@ static int build_memreg(struct t4_sq *sq, union t4_wr *wqe,
 	wqe->fr.mem_perms = c4iw_ib_to_tpt_access(wr->access);
 	wqe->fr.len_hi = 0;
 	wqe->fr.len_lo = cpu_to_be32(mhp->ibmr.length);
-	wqe->fr.stag = cpu_to_be32(wr->key);
+	wqe->fr.stag = cpu_to_be32(wr->mr->key);
 	wqe->fr.va_hi = cpu_to_be32(mhp->ibmr.iova >> 32);
 	wqe->fr.va_lo_fbo = cpu_to_be32(mhp->ibmr.iova &
 					0xffffffff);
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index 242b94e..a544a94 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -72,7 +72,7 @@ struct ib_mr *mlx4_ib_get_dma_mr(struct ib_pd *pd, int acc)
 	if (err)
 		goto err_mr;
 
-	mr->ibmr.rkey = mr->ibmr.lkey = mr->mmr.key;
+	mr->ibmr.key = mr->mmr.key;
 	mr->umem = NULL;
 
 	return &mr->ibmr;
@@ -169,7 +169,7 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	if (err)
 		goto err_mr;
 
-	mr->ibmr.rkey = mr->ibmr.lkey = mr->mmr.key;
+	mr->ibmr.key = mr->mmr.key;
 
 	return &mr->ibmr;
 
@@ -407,7 +407,7 @@ struct ib_mr *mlx4_ib_alloc_mr(struct ib_pd *pd,
 	if (err)
 		goto err_free_pl;
 
-	mr->ibmr.rkey = mr->ibmr.lkey = mr->mmr.key;
+	mr->ibmr.key = mr->mmr.key;
 	mr->umem = NULL;
 
 	return &mr->ibmr;
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index 9781ac3..afba1a9 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -2510,7 +2510,7 @@ static void set_reg_seg(struct mlx4_wqe_fmr_seg *fseg,
 	struct mlx4_ib_mr *mr = to_mmr(wr->mr);
 
 	fseg->flags		= convert_access(wr->access);
-	fseg->mem_key		= cpu_to_be32(wr->key);
+	fseg->mem_key		= cpu_to_be32(wr->mr->key);
 	fseg->buf_list		= cpu_to_be64(mr->page_map);
 	fseg->start_addr	= cpu_to_be64(mr->ibmr.iova);
 	fseg->reg_len		= cpu_to_be64(mr->ibmr.length);
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index ec8993a..c86cab1 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -651,8 +651,7 @@ struct ib_mr *mlx5_ib_get_dma_mr(struct ib_pd *pd, int acc)
 		goto err_in;
 
 	kfree(in);
-	mr->ibmr.lkey = mr->mmr.key;
-	mr->ibmr.rkey = mr->mmr.key;
+	mr->ibmr.key = mr->mmr.key;
 	mr->umem = NULL;
 
 	return &mr->ibmr;
@@ -1084,8 +1083,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	mr->umem = umem;
 	mr->npages = npages;
 	atomic_add(npages, &dev->mdev->priv.reg_pages);
-	mr->ibmr.lkey = mr->mmr.key;
-	mr->ibmr.rkey = mr->mmr.key;
+	mr->ibmr.key = mr->mmr.key;
 
 #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
 	if (umem->odp_data) {
@@ -1355,8 +1353,7 @@ struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd,
 	if (err)
 		goto err_destroy_psv;
 
-	mr->ibmr.lkey = mr->mmr.key;
-	mr->ibmr.rkey = mr->mmr.key;
+	mr->ibmr.key = mr->mmr.key;
 	mr->umem = NULL;
 	kfree(in);
 
@@ -1407,7 +1404,7 @@ int mlx5_ib_check_mr_status(struct ib_mr *ibmr, u32 check_mask,
 		if (!mmr->sig->sig_err_exists)
 			goto done;
 
-		if (ibmr->lkey == mmr->sig->err_item.key)
+		if (ibmr->key == mmr->sig->err_item.key)
 			memcpy(&mr_status->sig_err, &mmr->sig->err_item,
 			       sizeof(mr_status->sig_err));
 		else {
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 307bdbc..ba39045 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2327,7 +2327,7 @@ static void set_sig_mkey_segment(struct mlx5_mkey_seg *seg,
 				 u32 length, u32 pdn)
 {
 	struct ib_mr *sig_mr = wr->sig_mr;
-	u32 sig_key = sig_mr->rkey;
+	u32 sig_key = sig_mr->key;
 	u8 sigerr = to_mmr(sig_mr)->sig->sigerr_count & 1;
 
 	memset(seg, 0, sizeof(*seg));
@@ -2449,7 +2449,7 @@ static int set_reg_wr(struct mlx5_ib_qp *qp,
 	if (unlikely((*seg == qp->sq.qend)))
 		*seg = mlx5_get_send_wqe(qp, 0);
 
-	set_reg_mkey_seg(*seg, mr, wr->key, wr->access);
+	set_reg_mkey_seg(*seg, mr, wr->mr->key, wr->access);
 	*seg += sizeof(struct mlx5_mkey_seg);
 	*size += sizeof(struct mlx5_mkey_seg) / 16;
 	if (unlikely((*seg == qp->sq.qend)))
@@ -2670,7 +2670,7 @@ int mlx5_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 			case IB_WR_REG_MR:
 				next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL;
 				qp->sq.wr_data[idx] = IB_WR_REG_MR;
-				ctrl->imm = cpu_to_be32(reg_wr(wr)->key);
+				ctrl->imm = cpu_to_be32(reg_wr(wr)->mr->key);
 				err = set_reg_wr(qp, reg_wr(wr), &seg, &size);
 				if (err) {
 					*bad_wr = wr;
@@ -2683,7 +2683,7 @@ int mlx5_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 				qp->sq.wr_data[idx] = IB_WR_REG_SIG_MR;
 				mr = to_mmr(sig_handover_wr(wr)->sig_mr);
 
-				ctrl->imm = cpu_to_be32(mr->ibmr.rkey);
+				ctrl->imm = cpu_to_be32(mr->ibmr.key);
 				err = set_sig_umr_wr(wr, qp, &seg, &size);
 				if (err) {
 					mlx5_ib_warn(dev, "\n");
diff --git a/drivers/infiniband/hw/mthca/mthca_av.c b/drivers/infiniband/hw/mthca/mthca_av.c
index bcac294..046a442 100644
--- a/drivers/infiniband/hw/mthca/mthca_av.c
+++ b/drivers/infiniband/hw/mthca/mthca_av.c
@@ -194,7 +194,7 @@ on_hca_fail:
 		av = ah->av;
 	}
 
-	ah->key = pd->ntmr.ibmr.lkey;
+	ah->key = pd->ntmr.ibmr.key;
 
 	memset(av, 0, MTHCA_AV_SIZE);
 
diff --git a/drivers/infiniband/hw/mthca/mthca_cq.c b/drivers/infiniband/hw/mthca/mthca_cq.c
index a6531ff..01dc054 100644
--- a/drivers/infiniband/hw/mthca/mthca_cq.c
+++ b/drivers/infiniband/hw/mthca/mthca_cq.c
@@ -836,7 +836,7 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
 	cq_context->error_eqn       = cpu_to_be32(dev->eq_table.eq[MTHCA_EQ_ASYNC].eqn);
 	cq_context->comp_eqn        = cpu_to_be32(dev->eq_table.eq[MTHCA_EQ_COMP].eqn);
 	cq_context->pd              = cpu_to_be32(pdn);
-	cq_context->lkey            = cpu_to_be32(cq->buf.mr.ibmr.lkey);
+	cq_context->lkey            = cpu_to_be32(cq->buf.mr.ibmr.key);
 	cq_context->cqn             = cpu_to_be32(cq->cqn);
 
 	if (mthca_is_memfree(dev)) {
diff --git a/drivers/infiniband/hw/mthca/mthca_eq.c b/drivers/infiniband/hw/mthca/mthca_eq.c
index 6902017..83a25e8 100644
--- a/drivers/infiniband/hw/mthca/mthca_eq.c
+++ b/drivers/infiniband/hw/mthca/mthca_eq.c
@@ -540,7 +540,7 @@ static int mthca_create_eq(struct mthca_dev *dev,
 		eq_context->tavor_pd         = cpu_to_be32(dev->driver_pd.pd_num);
 	}
 	eq_context->intr            = intr;
-	eq_context->lkey            = cpu_to_be32(eq->mr.ibmr.lkey);
+	eq_context->lkey            = cpu_to_be32(eq->mr.ibmr.key);
 
 	err = mthca_SW2HW_EQ(dev, mailbox, eq->eqn);
 	if (err) {
diff --git a/drivers/infiniband/hw/mthca/mthca_mr.c b/drivers/infiniband/hw/mthca/mthca_mr.c
index ed9a989..9bd8274 100644
--- a/drivers/infiniband/hw/mthca/mthca_mr.c
+++ b/drivers/infiniband/hw/mthca/mthca_mr.c
@@ -441,7 +441,7 @@ int mthca_mr_alloc(struct mthca_dev *dev, u32 pd, int buffer_size_shift,
 	if (key == -1)
 		return -ENOMEM;
 	key = adjust_key(dev, key);
-	mr->ibmr.rkey = mr->ibmr.lkey = hw_index_to_key(dev, key);
+	mr->ibmr.key = hw_index_to_key(dev, key);
 
 	if (mthca_is_memfree(dev)) {
 		err = mthca_table_get(dev, dev->mr_table.mpt_table, key);
@@ -478,7 +478,7 @@ int mthca_mr_alloc(struct mthca_dev *dev, u32 pd, int buffer_size_shift,
 				    mr->mtt->first_seg * dev->limits.mtt_seg_size);
 
 	if (0) {
-		mthca_dbg(dev, "Dumping MPT entry %08x:\n", mr->ibmr.lkey);
+		mthca_dbg(dev, "Dumping MPT entry %08x:\n", mr->ibmr.key);
 		for (i = 0; i < sizeof (struct mthca_mpt_entry) / 4; ++i) {
 			if (i % 4 == 0)
 				printk("[%02x] ", i * 4);
@@ -555,12 +555,12 @@ void mthca_free_mr(struct mthca_dev *dev, struct mthca_mr *mr)
 	int err;
 
 	err = mthca_HW2SW_MPT(dev, NULL,
-			      key_to_hw_index(dev, mr->ibmr.lkey) &
+			      key_to_hw_index(dev, mr->ibmr.key) &
 			      (dev->limits.num_mpts - 1));
 	if (err)
 		mthca_warn(dev, "HW2SW_MPT failed (%d)\n", err);
 
-	mthca_free_region(dev, mr->ibmr.lkey);
+	mthca_free_region(dev, mr->ibmr.key);
 	mthca_free_mtt(dev, mr->mtt);
 }
 
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
index 6d0a1db..13f3fd1 100644
--- a/drivers/infiniband/hw/mthca/mthca_provider.c
+++ b/drivers/infiniband/hw/mthca/mthca_provider.c
@@ -460,7 +460,7 @@ static struct ib_srq *mthca_create_srq(struct ib_pd *pd,
 		if (err)
 			goto err_free;
 
-		srq->mr.ibmr.lkey = ucmd.lkey;
+		srq->mr.ibmr.key  = ucmd.lkey;
 		srq->db_index     = ucmd.db_index;
 	}
 
@@ -555,7 +555,7 @@ static struct ib_qp *mthca_create_qp(struct ib_pd *pd,
 				return ERR_PTR(err);
 			}
 
-			qp->mr.ibmr.lkey = ucmd.lkey;
+			qp->mr.ibmr.key  = ucmd.lkey;
 			qp->sq.db_index  = ucmd.sq_db_index;
 			qp->rq.db_index  = ucmd.rq_db_index;
 		}
@@ -680,7 +680,7 @@ static struct ib_cq *mthca_create_cq(struct ib_device *ibdev,
 	}
 
 	if (context) {
-		cq->buf.mr.ibmr.lkey = ucmd.lkey;
+		cq->buf.mr.ibmr.key  = ucmd.lkey;
 		cq->set_ci_db_index  = ucmd.set_db_index;
 		cq->arm_db_index     = ucmd.arm_db_index;
 	}
@@ -789,7 +789,7 @@ static int mthca_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *uda
 		ret = mthca_alloc_resize_buf(dev, cq, entries);
 		if (ret)
 			goto out;
-		lkey = cq->resize_buf->buf.mr.ibmr.lkey;
+		lkey = cq->resize_buf->buf.mr.ibmr.key;
 	} else {
 		if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) {
 			ret = -EFAULT;
diff --git a/drivers/infiniband/hw/mthca/mthca_qp.c b/drivers/infiniband/hw/mthca/mthca_qp.c
index 35fe506..23a0a49 100644
--- a/drivers/infiniband/hw/mthca/mthca_qp.c
+++ b/drivers/infiniband/hw/mthca/mthca_qp.c
@@ -692,7 +692,7 @@ static int __mthca_modify_qp(struct ib_qp *ibqp,
 	/* leave rdd as 0 */
 	qp_context->pd         = cpu_to_be32(to_mpd(ibqp->pd)->pd_num);
 	/* leave wqe_base as 0 (we always create an MR based at 0 for WQs) */
-	qp_context->wqe_lkey   = cpu_to_be32(qp->mr.ibmr.lkey);
+	qp_context->wqe_lkey   = cpu_to_be32(qp->mr.ibmr.key);
 	qp_context->params1    = cpu_to_be32((MTHCA_ACK_REQ_FREQ << 28) |
 					     (MTHCA_FLIGHT_LIMIT << 24) |
 					     MTHCA_QP_BIT_SWE);
@@ -1535,7 +1535,7 @@ static int build_mlx_header(struct mthca_dev *dev, struct mthca_sqp *sqp,
 					ind * MTHCA_UD_HEADER_SIZE);
 
 	data->byte_count = cpu_to_be32(header_size);
-	data->lkey       = cpu_to_be32(to_mpd(sqp->qp.ibqp.pd)->ntmr.ibmr.lkey);
+	data->lkey       = cpu_to_be32(to_mpd(sqp->qp.ibqp.pd)->ntmr.ibmr.key);
 	data->addr       = cpu_to_be64(sqp->header_dma +
 				       ind * MTHCA_UD_HEADER_SIZE);
 
diff --git a/drivers/infiniband/hw/mthca/mthca_srq.c b/drivers/infiniband/hw/mthca/mthca_srq.c
index d22f970..d25063c 100644
--- a/drivers/infiniband/hw/mthca/mthca_srq.c
+++ b/drivers/infiniband/hw/mthca/mthca_srq.c
@@ -101,7 +101,7 @@ static void mthca_tavor_init_srq_context(struct mthca_dev *dev,
 
 	context->wqe_base_ds = cpu_to_be64(1 << (srq->wqe_shift - 4));
 	context->state_pd    = cpu_to_be32(pd->pd_num);
-	context->lkey        = cpu_to_be32(srq->mr.ibmr.lkey);
+	context->lkey        = cpu_to_be32(srq->mr.ibmr.key);
 
 	if (pd->ibpd.uobject)
 		context->uar =
@@ -126,7 +126,7 @@ static void mthca_arbel_init_srq_context(struct mthca_dev *dev,
 	max = srq->max;
 	logsize = ilog2(max);
 	context->state_logsize_srqn = cpu_to_be32(logsize << 24 | srq->srqn);
-	context->lkey = cpu_to_be32(srq->mr.ibmr.lkey);
+	context->lkey = cpu_to_be32(srq->mr.ibmr.key);
 	context->db_index = cpu_to_be32(srq->db_index);
 	context->logstride_usrpage = cpu_to_be32((srq->wqe_shift - 4) << 29);
 	if (pd->ibpd.uobject)
diff --git a/drivers/infiniband/hw/nes/nes_cm.c b/drivers/infiniband/hw/nes/nes_cm.c
index bc37adb..8a1c95b 100644
--- a/drivers/infiniband/hw/nes/nes_cm.c
+++ b/drivers/infiniband/hw/nes/nes_cm.c
@@ -3348,7 +3348,7 @@ int nes_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 				    (u64)(unsigned long)(*start_buff));
 		wqe->wqe_words[NES_IWARP_SQ_WQE_LENGTH0_IDX] =
 			cpu_to_le32(buff_len);
-		wqe->wqe_words[NES_IWARP_SQ_WQE_STAG0_IDX] = ibmr->lkey;
+		wqe->wqe_words[NES_IWARP_SQ_WQE_STAG0_IDX] = ibmr->key;
 		if (nesqp->sq_kmapped) {
 			nesqp->sq_kmapped = 0;
 			kunmap(nesqp->page);
diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c
index 4e57bf0..f7f1f78 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.c
+++ b/drivers/infiniband/hw/nes/nes_verbs.c
@@ -363,8 +363,7 @@ static struct ib_mr *nes_alloc_mr(struct ib_pd *ibpd,
 	ret = alloc_fast_reg_mr(nesdev, nespd, stag, max_num_sg);
 
 	if (ret == 0) {
-		nesmr->ibmr.rkey = stag;
-		nesmr->ibmr.lkey = stag;
+		nesmr->ibmr.key = stag;
 		nesmr->mode = IWNES_MEMREG_TYPE_FMEM;
 		ibmr = &nesmr->ibmr;
 	} else {
@@ -2044,8 +2043,7 @@ struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd, u64 addr, u64 size,
 			&nesmr->pbls_used, &nesmr->pbl_4k);
 
 	if (ret == 0) {
-		nesmr->ibmr.rkey = stag;
-		nesmr->ibmr.lkey = stag;
+		nesmr->ibmr.key = stag;
 		nesmr->mode = IWNES_MEMREG_TYPE_MEM;
 		ibmr = &nesmr->ibmr;
 	} else {
@@ -2313,8 +2311,7 @@ static struct ib_mr *nes_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 			nes_debug(NES_DBG_MR, "ret=%d\n", ret);
 
 			if (ret == 0) {
-				nesmr->ibmr.rkey = stag;
-				nesmr->ibmr.lkey = stag;
+				nesmr->ibmr.key = stag;
 				nesmr->mode = IWNES_MEMREG_TYPE_MEM;
 				ibmr = &nesmr->ibmr;
 			} else {
@@ -2419,8 +2416,7 @@ static struct ib_mr *nes_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 			} else {
 				list_add_tail(&nespbl->list, &nes_ucontext->cq_reg_mem_list);
 			}
-			nesmr->ibmr.rkey = -1;
-			nesmr->ibmr.lkey = -1;
+			nesmr->ibmr.key = -1;
 			nesmr->mode = req.reg_type;
 			return &nesmr->ibmr;
 	}
@@ -2475,18 +2471,19 @@ static int nes_dereg_mr(struct ib_mr *ib_mr)
 	set_wqe_32bit_value(cqp_wqe->wqe_words, NES_CQP_WQE_OPCODE_IDX,
 			NES_CQP_DEALLOCATE_STAG | NES_CQP_STAG_VA_TO |
 			NES_CQP_STAG_DEALLOC_PBLS | NES_CQP_STAG_MR);
-	set_wqe_32bit_value(cqp_wqe->wqe_words, NES_CQP_STAG_WQE_STAG_IDX, ib_mr->rkey);
+	set_wqe_32bit_value(cqp_wqe->wqe_words, NES_CQP_STAG_WQE_STAG_IDX,
+			ib_mr->key);
 
 	atomic_set(&cqp_request->refcount, 2);
 	nes_post_cqp_request(nesdev, cqp_request);
 
 	/* Wait for CQP */
-	nes_debug(NES_DBG_MR, "Waiting for deallocate STag 0x%08X completed\n", ib_mr->rkey);
+	nes_debug(NES_DBG_MR, "Waiting for deallocate STag 0x%08X completed\n", ib_mr->key);
 	ret = wait_event_timeout(cqp_request->waitq, (cqp_request->request_done != 0),
 			NES_EVENT_TIMEOUT);
 	nes_debug(NES_DBG_MR, "Deallocate STag 0x%08X completed, wait_event_timeout ret = %u,"
 			" CQP Major:Minor codes = 0x%04X:0x%04X\n",
-			ib_mr->rkey, ret, cqp_request->major_code, cqp_request->minor_code);
+			ib_mr->key, ret, cqp_request->major_code, cqp_request->minor_code);
 
 	major_code = cqp_request->major_code;
 	minor_code = cqp_request->minor_code;
@@ -2495,13 +2492,13 @@ static int nes_dereg_mr(struct ib_mr *ib_mr)
 
 	if (!ret) {
 		nes_debug(NES_DBG_MR, "Timeout waiting to destroy STag,"
-				" ib_mr=%p, rkey = 0x%08X\n",
-				ib_mr, ib_mr->rkey);
+				" ib_mr=%p, key = 0x%08X\n",
+				ib_mr, ib_mr->key);
 		return -ETIME;
 	} else if (major_code) {
 		nes_debug(NES_DBG_MR, "Error (0x%04X:0x%04X) while attempting"
-				" to destroy STag, ib_mr=%p, rkey = 0x%08X\n",
-				major_code, minor_code, ib_mr, ib_mr->rkey);
+				" to destroy STag, ib_mr=%p, key = 0x%08X\n",
+				major_code, minor_code, ib_mr, ib_mr->key);
 		return -EIO;
 	}
 
@@ -2525,7 +2522,7 @@ static int nes_dereg_mr(struct ib_mr *ib_mr)
 		spin_unlock_irqrestore(&nesadapter->pbl_lock, flags);
 	}
 	nes_free_resource(nesadapter, nesadapter->allocated_mrs,
-			(ib_mr->rkey & 0x0fffff00) >> 8);
+			(ib_mr->key & 0x0fffff00) >> 8);
 
 	kfree(nesmr);
 
@@ -3217,7 +3214,7 @@ static int nes_post_send(struct ib_qp *ibqp, struct ib_send_wr *ib_wr,
 					    NES_IWARP_SQ_FMR_WQE_LENGTH_HIGH_IDX, 0);
 			set_wqe_32bit_value(wqe->wqe_words,
 					    NES_IWARP_SQ_FMR_WQE_MR_STAG_IDX,
-					    reg_wr(ib_wr)->key);
+					    reg_wr(ib_wr)->mr->key);
 
 			if (page_shift == 12) {
 				wqe_misc |= NES_IWARP_SQ_FMR_WQE_PAGE_SIZE_4K;
@@ -3258,7 +3255,7 @@ static int nes_post_send(struct ib_qp *ibqp, struct ib_send_wr *ib_wr,
 				  "page_list_len: %u, wqe_misc: %x\n",
 				  (unsigned long long) mr->ibmr.iova,
 				  mr->ibmr.length,
-				  reg_wr(ib_wr)->key,
+				  reg_wr(ib_wr)->mr->key,
 				  (unsigned long long) mr->paddr,
 				  mr->npages,
 				  wqe_misc);
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index 3cffab7..6d09634 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -744,9 +744,7 @@ static int ocrdma_alloc_lkey(struct ocrdma_dev *dev, struct ocrdma_mr *mr,
 	if (status)
 		return status;
 
-	mr->ibmr.lkey = mr->hwmr.lkey;
-	if (mr->hwmr.remote_wr || mr->hwmr.remote_rd)
-		mr->ibmr.rkey = mr->hwmr.lkey;
+	mr->ibmr.key = mr->hwmr.lkey;
 	return 0;
 }
 
@@ -944,9 +942,7 @@ struct ib_mr *ocrdma_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 len,
 	status = ocrdma_reg_mr(dev, &mr->hwmr, pd->id, acc);
 	if (status)
 		goto mbx_err;
-	mr->ibmr.lkey = mr->hwmr.lkey;
-	if (mr->hwmr.remote_wr || mr->hwmr.remote_rd)
-		mr->ibmr.rkey = mr->hwmr.lkey;
+	mr->ibmr.key = mr->hwmr.lkey;
 
 	return &mr->ibmr;
 
@@ -2117,7 +2113,7 @@ static int ocrdma_build_reg(struct ocrdma_qp *qp,
 		hdr->rsvd_lkey_flags |= OCRDMA_LKEY_FLAG_REMOTE_WR;
 	if (wr->access & IB_ACCESS_REMOTE_READ)
 		hdr->rsvd_lkey_flags |= OCRDMA_LKEY_FLAG_REMOTE_RD;
-	hdr->lkey = wr->key;
+	hdr->lkey = wr->mr->key;
 	hdr->total_len = mr->ibmr.length;
 
 	fbo = mr->ibmr.iova - mr->pages[0];
@@ -3003,8 +2999,7 @@ struct ib_mr *ocrdma_alloc_mr(struct ib_pd *ibpd,
 	status = ocrdma_reg_mr(dev, &mr->hwmr, pd->id, 0);
 	if (status)
 		goto mbx_err;
-	mr->ibmr.rkey = mr->hwmr.lkey;
-	mr->ibmr.lkey = mr->hwmr.lkey;
+	mr->ibmr.key = mr->hwmr.lkey;
 	dev->stag_arr[(mr->hwmr.lkey >> 8) & (OCRDMA_MAX_STAG - 1)] =
 		(unsigned long) mr;
 	return &mr->ibmr;
diff --git a/drivers/infiniband/hw/qib/qib_keys.c b/drivers/infiniband/hw/qib/qib_keys.c
index d725c56..1be4807 100644
--- a/drivers/infiniband/hw/qib/qib_keys.c
+++ b/drivers/infiniband/hw/qib/qib_keys.c
@@ -344,7 +344,7 @@ int qib_reg_mr(struct qib_qp *qp, struct ib_reg_wr *wr)
 	struct qib_pd *pd = to_ipd(qp->ibqp.pd);
 	struct qib_mr *mr = to_imr(wr->mr);
 	struct qib_mregion *mrg;
-	u32 key = wr->key;
+	u32 key = wr->mr->key;
 	unsigned i, n, m;
 	int ret = -EINVAL;
 	unsigned long flags;
diff --git a/drivers/infiniband/hw/qib/qib_mr.c b/drivers/infiniband/hw/qib/qib_mr.c
index 5f53304..45beb66 100644
--- a/drivers/infiniband/hw/qib/qib_mr.c
+++ b/drivers/infiniband/hw/qib/qib_mr.c
@@ -154,8 +154,7 @@ static struct qib_mr *alloc_mr(int count, struct ib_pd *pd)
 	rval = qib_alloc_lkey(&mr->mr, 0);
 	if (rval)
 		goto bail_mregion;
-	mr->ibmr.lkey = mr->mr.lkey;
-	mr->ibmr.rkey = mr->mr.lkey;
+	mr->ibmr.key = mr->mr.lkey;
 done:
 	return mr;
 
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
index 740170c..75251b3 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
+++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
@@ -615,7 +615,7 @@ struct ib_mr *usnic_ib_reg_mr(struct ib_pd *pd, u64 start, u64 length,
 		goto err_free;
 	}
 
-	mr->ibmr.lkey = mr->ibmr.rkey = 0;
+	mr->ibmr.key = 0;
 	return &mr->ibmr;
 
 err_free:
diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c
index 81ad5e9..fb4ca76 100644
--- a/drivers/infiniband/ulp/iser/iser_memory.c
+++ b/drivers/infiniband/ulp/iser/iser_memory.c
@@ -250,7 +250,7 @@ iser_reg_dma(struct iser_device *device, struct iser_data_buf *mem,
 	struct scatterlist *sg = mem->sg;
 
 	reg->sge.lkey = device->pd->local_dma_lkey;
-	reg->rkey = device->mr->rkey;
+	reg->rkey = device->mr->key;
 	reg->sge.addr = ib_sg_dma_address(device->ib_device, &sg[0]);
 	reg->sge.length = ib_sg_dma_len(device->ib_device, &sg[0]);
 
@@ -415,16 +415,13 @@ iser_set_prot_checks(struct scsi_cmnd *sc, u8 *mask)
 static void
 iser_inv_rkey(struct ib_send_wr *inv_wr, struct ib_mr *mr)
 {
-	u32 rkey;
-
 	inv_wr->opcode = IB_WR_LOCAL_INV;
 	inv_wr->wr_id = ISER_FASTREG_LI_WRID;
-	inv_wr->ex.invalidate_rkey = mr->rkey;
+	inv_wr->ex.invalidate_rkey = mr->key;
 	inv_wr->send_flags = 0;
 	inv_wr->num_sge = 0;
 
-	rkey = ib_inc_rkey(mr->rkey);
-	ib_update_fast_reg_key(mr, rkey);
+	ib_update_fast_reg_key(mr, ib_inc_rkey(mr->key));
 }
 
 static int
@@ -466,8 +463,8 @@ iser_reg_sig_mr(struct iscsi_iser_task *iser_task,
 			   IB_ACCESS_REMOTE_WRITE;
 	pi_ctx->sig_mr_valid = 0;
 
-	sig_reg->sge.lkey = pi_ctx->sig_mr->lkey;
-	sig_reg->rkey = pi_ctx->sig_mr->rkey;
+	sig_reg->sge.lkey = pi_ctx->sig_mr->key;
+	sig_reg->rkey = pi_ctx->sig_mr->key;
 	sig_reg->sge.addr = 0;
 	sig_reg->sge.length = scsi_transfer_length(iser_task->sc);
 
@@ -504,15 +501,13 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task,
 	wr->wr.send_flags = 0;
 	wr->wr.num_sge = 0;
 	wr->mr = mr;
-	wr->key = mr->rkey;
 	wr->access = IB_ACCESS_LOCAL_WRITE  |
 		     IB_ACCESS_REMOTE_WRITE |
 		     IB_ACCESS_REMOTE_READ;
 
 	rsc->mr_valid = 0;
 
-	reg->sge.lkey = mr->lkey;
-	reg->rkey = mr->rkey;
+	reg->sge.lkey = mr->key;
 	reg->sge.addr = mr->iova;
 	reg->sge.length = mr->length;
 
diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index 8d9a167..d38a1e7 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -2489,16 +2489,12 @@ unmap_cmd:
 static inline void
 isert_inv_rkey(struct ib_send_wr *inv_wr, struct ib_mr *mr)
 {
-	u32 rkey;
-
 	memset(inv_wr, 0, sizeof(*inv_wr));
 	inv_wr->wr_id = ISER_FASTREG_LI_WRID;
 	inv_wr->opcode = IB_WR_LOCAL_INV;
-	inv_wr->ex.invalidate_rkey = mr->rkey;
+	inv_wr->ex.invalidate_rkey = mr->key;
 
-	/* Bump the key */
-	rkey = ib_inc_rkey(mr->rkey);
-	ib_update_fast_reg_key(mr, rkey);
+	ib_update_fast_reg_key(mr, ib_inc_rkey(mr->key));
 }
 
 static int
@@ -2552,7 +2548,6 @@ isert_fast_reg_mr(struct isert_conn *isert_conn,
 	reg_wr.wr.send_flags = 0;
 	reg_wr.wr.num_sge = 0;
 	reg_wr.mr = mr;
-	reg_wr.key = mr->lkey;
 	reg_wr.access = IB_ACCESS_LOCAL_WRITE;
 
 	if (!wr)
@@ -2567,7 +2562,7 @@ isert_fast_reg_mr(struct isert_conn *isert_conn,
 	}
 	fr_desc->ind &= ~ind;
 
-	sge->lkey = mr->lkey;
+	sge->lkey = mr->key;
 	sge->addr = mr->iova;
 	sge->length = mr->length;
 
@@ -2680,7 +2675,7 @@ isert_reg_sig_mr(struct isert_conn *isert_conn,
 	}
 	fr_desc->ind &= ~ISERT_SIG_KEY_VALID;
 
-	rdma_wr->ib_sg[SIG].lkey = pi_ctx->sig_mr->lkey;
+	rdma_wr->ib_sg[SIG].lkey = pi_ctx->sig_mr->key;
 	rdma_wr->ib_sg[SIG].addr = 0;
 	rdma_wr->ib_sg[SIG].length = se_cmd->data_length;
 	if (se_cmd->prot_op != TARGET_PROT_DIN_STRIP &&
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index 8521c4a..6a8ef10 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -1070,11 +1070,11 @@ static void srp_unmap_data(struct scsi_cmnd *scmnd,
 		struct srp_fr_desc **pfr;
 
 		for (i = req->nmdesc, pfr = req->fr_list; i > 0; i--, pfr++) {
-			res = srp_inv_rkey(ch, (*pfr)->mr->rkey);
+			res = srp_inv_rkey(ch, (*pfr)->mr->key);
 			if (res < 0) {
 				shost_printk(KERN_ERR, target->scsi_host, PFX
 				  "Queueing INV WR for rkey %#x failed (%d)\n",
-				  (*pfr)->mr->rkey, res);
+				  (*pfr)->mr->key, res);
 				queue_work(system_long_wq,
 					   &target->tl_err_work);
 			}
@@ -1286,7 +1286,7 @@ static int srp_map_finish_fmr(struct srp_map_state *state,
 
 	if (state->npages == 1 && target->global_mr) {
 		srp_map_desc(state, state->base_dma_addr, state->dma_len,
-			     target->global_mr->rkey);
+			     target->global_mr->key);
 		goto reset_state;
 	}
 
@@ -1316,7 +1316,6 @@ static int srp_map_finish_fr(struct srp_map_state *state,
 	struct ib_send_wr *bad_wr;
 	struct ib_reg_wr wr;
 	struct srp_fr_desc *desc;
-	u32 rkey;
 	int n, err;
 
 	if (state->fr.next >= state->fr.end)
@@ -1330,7 +1329,7 @@ static int srp_map_finish_fr(struct srp_map_state *state,
 	if (state->sg_nents == 1 && target->global_mr) {
 		srp_map_desc(state, sg_dma_address(state->sg),
 			     sg_dma_len(state->sg),
-			     target->global_mr->rkey);
+			     target->global_mr->key);
 		return 1;
 	}
 
@@ -1338,8 +1337,7 @@ static int srp_map_finish_fr(struct srp_map_state *state,
 	if (!desc)
 		return -ENOMEM;
 
-	rkey = ib_inc_rkey(desc->mr->rkey);
-	ib_update_fast_reg_key(desc->mr, rkey);
+	ib_update_fast_reg_key(desc->mr, ib_inc_rkey(desc->mr->key));
 
 	n = ib_map_mr_sg(desc->mr, state->sg, state->sg_nents,
 			 dev->mr_page_size);
@@ -1352,7 +1350,6 @@ static int srp_map_finish_fr(struct srp_map_state *state,
 	wr.wr.num_sge = 0;
 	wr.wr.send_flags = 0;
 	wr.mr = desc->mr;
-	wr.key = desc->mr->rkey;
 	wr.access = (IB_ACCESS_LOCAL_WRITE |
 		     IB_ACCESS_REMOTE_READ |
 		     IB_ACCESS_REMOTE_WRITE);
@@ -1361,7 +1358,7 @@ static int srp_map_finish_fr(struct srp_map_state *state,
 	state->nmdesc++;
 
 	srp_map_desc(state, desc->mr->iova,
-		     desc->mr->length, desc->mr->rkey);
+		     desc->mr->length, desc->mr->key);
 
 	err = ib_post_send(ch->qp, &wr.wr, &bad_wr);
 	if (unlikely(err))
@@ -1480,7 +1477,7 @@ static int srp_map_sg_dma(struct srp_map_state *state, struct srp_rdma_ch *ch,
 	for_each_sg(scat, sg, count, i) {
 		srp_map_desc(state, ib_sg_dma_address(dev->dev, sg),
 			     ib_sg_dma_len(dev->dev, sg),
-			     target->global_mr->rkey);
+			     target->global_mr->key);
 	}
 
 	req->nmdesc = state->nmdesc;
@@ -1589,7 +1586,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 		struct srp_direct_buf *buf = (void *) cmd->add_data;
 
 		buf->va  = cpu_to_be64(ib_sg_dma_address(ibdev, scat));
-		buf->key = cpu_to_be32(target->global_mr->rkey);
+		buf->key = cpu_to_be32(target->global_mr->key);
 		buf->len = cpu_to_be32(ib_sg_dma_len(ibdev, scat));
 
 		req->nmdesc = 0;
@@ -1655,7 +1652,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
 			return ret;
 		req->nmdesc++;
 	} else {
-		idb_rkey = target->global_mr->rkey;
+		idb_rkey = target->global_mr->key;
 	}
 
 	indirect_hdr->table_desc.va = cpu_to_be64(req->indirect_dma_addr);
diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
index 2607503..cc89770 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
@@ -161,7 +161,7 @@ kiblnd_post_rx(kib_rx_t *rx, int credit)
 	mr = kiblnd_find_dma_mr(conn->ibc_hdev, rx->rx_msgaddr, IBLND_MSG_SIZE);
 	LASSERT(mr != NULL);
 
-	rx->rx_sge.lkey   = mr->lkey;
+	rx->rx_sge.lkey   = mr->key;
 	rx->rx_sge.addr   = rx->rx_msgaddr;
 	rx->rx_sge.length = IBLND_MSG_SIZE;
 
@@ -645,7 +645,7 @@ static int kiblnd_map_tx(lnet_ni_t *ni, kib_tx_t *tx, kib_rdma_desc_t *rd,
 	mr = kiblnd_find_rd_dma_mr(hdev, rd);
 	if (mr != NULL) {
 		/* found pre-mapping MR */
-		rd->rd_key = (rd != tx->tx_rd) ? mr->rkey : mr->lkey;
+		rd->rd_key = mr->key;
 		return 0;
 	}
 
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 81e047e..51dd3a7 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1089,7 +1089,6 @@ static inline struct ib_ud_wr *ud_wr(struct ib_send_wr *wr)
 struct ib_reg_wr {
 	struct ib_send_wr	wr;
 	struct ib_mr		*mr;
-	u32			key;
 	int			access;
 };
 
@@ -1273,8 +1272,7 @@ struct ib_mr {
 	struct ib_device  *device;
 	struct ib_pd	  *pd;
 	struct ib_uobject *uobject;
-	u32		   lkey;
-	u32		   rkey;
+	u32		   key;
 	u64		   iova;
 	u32		   length;
 	unsigned int	   page_size;
@@ -2799,8 +2797,7 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd,
  */
 static inline void ib_update_fast_reg_key(struct ib_mr *mr, u8 newkey)
 {
-	mr->lkey = (mr->lkey & 0xffffff00) | newkey;
-	mr->rkey = (mr->rkey & 0xffffff00) | newkey;
+	mr->key = (mr->key & 0xffffff00) | newkey;
 }
 
 /**
diff --git a/net/rds/iw.h b/net/rds/iw.h
index 5af01d1..e9d4d8c 100644
--- a/net/rds/iw.h
+++ b/net/rds/iw.h
@@ -63,7 +63,6 @@ struct rds_iw_mapping {
 	spinlock_t		m_lock;	/* protect the mapping struct */
 	struct list_head	m_list;
 	struct rds_iw_mr	*m_mr;
-	uint32_t		m_rkey;
 	struct rds_iw_scatterlist m_sg;
 };
 
@@ -267,7 +266,8 @@ static inline void rds_iw_dma_sync_sg_for_device(struct ib_device *dev,
 
 static inline u32 rds_iw_local_dma_lkey(struct rds_iw_connection *ic)
 {
-	return ic->i_dma_local_lkey ? ic->i_cm_id->device->local_dma_lkey : ic->i_mr->lkey;
+	return ic->i_dma_local_lkey ?
+		ic->i_cm_id->device->local_dma_lkey : ic->i_mr->key;
 }
 
 /* ib.c */
diff --git a/net/rds/iw_rdma.c b/net/rds/iw_rdma.c
index b09a40c..3e683a9 100644
--- a/net/rds/iw_rdma.c
+++ b/net/rds/iw_rdma.c
@@ -603,7 +603,7 @@ void *rds_iw_get_mr(struct scatterlist *sg, unsigned long nents,
 
 	ret = rds_iw_map_reg(rds_iwdev->mr_pool, ibmr, sg, nents);
 	if (ret == 0)
-		*key_ret = ibmr->mr->rkey;
+		*key_ret = ibmr->mr->key;
 	else
 		printk(KERN_WARNING "RDS/IW: failed to map mr (errno=%d)\n", ret);
 
@@ -675,7 +675,6 @@ static int rds_iw_rdma_reg_mr(struct rds_iw_mapping *mapping)
 	reg_wr.wr.wr_id = RDS_IW_REG_WR_ID;
 	reg_wr.wr.num_sge = 0;
 	reg_wr.mr = ibmr->mr;
-	reg_wr.key = mapping->m_rkey;
 	reg_wr.access = IB_ACCESS_LOCAL_WRITE |
 			IB_ACCESS_REMOTE_READ |
 			IB_ACCESS_REMOTE_WRITE;
@@ -687,7 +686,6 @@ static int rds_iw_rdma_reg_mr(struct rds_iw_mapping *mapping)
 	 * counter, which should guarantee uniqueness.
 	 */
 	ib_update_fast_reg_key(ibmr->mr, ibmr->remap_count++);
-	mapping->m_rkey = ibmr->mr->rkey;
 
 	failed_wr = &reg_wr.wr;
 	ret = ib_post_send(ibmr->cm_id->qp, &reg_wr.wr, &failed_wr);
@@ -709,7 +707,7 @@ static int rds_iw_rdma_fastreg_inv(struct rds_iw_mr *ibmr)
 	memset(&s_wr, 0, sizeof(s_wr));
 	s_wr.wr_id = RDS_IW_LOCAL_INV_WR_ID;
 	s_wr.opcode = IB_WR_LOCAL_INV;
-	s_wr.ex.invalidate_rkey = ibmr->mr->rkey;
+	s_wr.ex.invalidate_rkey = ibmr->mr->key;
 	s_wr.send_flags = IB_SEND_SIGNALED;
 
 	failed_wr = &s_wr;
diff --git a/net/rds/iw_send.c b/net/rds/iw_send.c
index e20bd50..acfe38a 100644
--- a/net/rds/iw_send.c
+++ b/net/rds/iw_send.c
@@ -775,7 +775,6 @@ static int rds_iw_build_send_reg(struct rds_iw_send_work *send,
 	send->s_reg_wr.wr.wr_id = 0;
 	send->s_reg_wr.wr.num_sge = 0;
 	send->s_reg_wr.mr = send->s_mr;
-	send->s_reg_wr.key = send->s_mr->rkey;
 	send->s_reg_wr.access = IB_ACCESS_REMOTE_WRITE;
 
 	ib_update_fast_reg_key(send->s_mr, send->s_remap_count++);
@@ -917,7 +916,7 @@ int rds_iw_xmit_rdma(struct rds_connection *conn, struct rm_rdma_op *op)
 			send->s_rdma_wr.wr.num_sge = 1;
 			send->s_sge[0].addr = conn->c_xmit_rm->m_rs->rs_user_addr;
 			send->s_sge[0].length = conn->c_xmit_rm->m_rs->rs_user_bytes;
-			send->s_sge[0].lkey = ic->i_sends[fr_pos].s_mr->lkey;
+			send->s_sge[0].lkey = ic->i_sends[fr_pos].s_mr->key;
 		}
 
 		rdsdebug("send %p wr %p num_sge %u next %p\n", send,
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index ae2a241..0b1b89b 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -377,7 +377,7 @@ frwr_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
 	dprintk("RPC:       %s: Using frmr %p to map %u segments (%u bytes)\n",
 		__func__, mw, frmr->sg_nents, mr->length);
 
-	key = (u8)(mr->rkey & 0x000000FF);
+	key = (u8)(mr->key & 0x000000FF);
 	ib_update_fast_reg_key(mr, ++key);
 
 	reg_wr.wr.next = NULL;
@@ -386,7 +386,6 @@ frwr_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
 	reg_wr.wr.num_sge = 0;
 	reg_wr.wr.send_flags = 0;
 	reg_wr.mr = mr;
-	reg_wr.key = mr->rkey;
 	reg_wr.access = writing ?
 			IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
 			IB_ACCESS_REMOTE_READ;
@@ -398,7 +397,7 @@ frwr_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
 
 	seg1->mr_dir = direction;
 	seg1->rl_mw = mw;
-	seg1->mr_rkey = mr->rkey;
+	seg1->mr_rkey = mr->key;
 	seg1->mr_base = mr->iova;
 	seg1->mr_nsegs = frmr->sg_nents;
 	seg1->mr_len = mr->length;
@@ -433,7 +432,7 @@ frwr_op_unmap(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg)
 	memset(&invalidate_wr, 0, sizeof(invalidate_wr));
 	invalidate_wr.wr_id = (unsigned long)(void *)mw;
 	invalidate_wr.opcode = IB_WR_LOCAL_INV;
-	invalidate_wr.ex.invalidate_rkey = frmr->fr_mr->rkey;
+	invalidate_wr.ex.invalidate_rkey = frmr->fr_mr->key;
 	DECR_CQCOUNT(&r_xprt->rx_ep);
 
 	ib_dma_unmap_sg(ia->ri_device, frmr->sg, frmr->sg_nents, seg1->mr_dir);
diff --git a/net/sunrpc/xprtrdma/physical_ops.c b/net/sunrpc/xprtrdma/physical_ops.c
index 617b76f..856730b 100644
--- a/net/sunrpc/xprtrdma/physical_ops.c
+++ b/net/sunrpc/xprtrdma/physical_ops.c
@@ -66,7 +66,7 @@ physical_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
 	struct rpcrdma_ia *ia = &r_xprt->rx_ia;
 
 	rpcrdma_map_one(ia->ri_device, seg, rpcrdma_data_dir(writing));
-	seg->mr_rkey = ia->ri_dma_mr->rkey;
+	seg->mr_rkey = ia->ri_dma_mr->key;
 	seg->mr_base = seg->mr_dma;
 	seg->mr_nsegs = 1;
 	return 1;
diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index ff4f01e..b1d7528 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -289,11 +289,11 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 	}
 
 	/* Bump the key */
-	key = (u8)(frmr->mr->lkey & 0x000000FF);
+	key = (u8)(frmr->mr->key & 0x000000FF);
 	ib_update_fast_reg_key(frmr->mr, ++key);
 
 	ctxt->sge[0].addr = frmr->mr->iova;
-	ctxt->sge[0].lkey = frmr->mr->lkey;
+	ctxt->sge[0].lkey = frmr->mr->key;
 	ctxt->sge[0].length = frmr->mr->length;
 	ctxt->count = 1;
 	ctxt->read_hdr = head;
@@ -304,7 +304,6 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 	reg_wr.wr.send_flags = IB_SEND_SIGNALED;
 	reg_wr.wr.num_sge = 0;
 	reg_wr.mr = frmr->mr;
-	reg_wr.key = frmr->mr->lkey;
 	reg_wr.access = frmr->access_flags;
 	reg_wr.wr.next = &read_wr.wr;
 
@@ -318,7 +317,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 	if (xprt->sc_dev_caps & SVCRDMA_DEVCAP_READ_W_INV) {
 		read_wr.wr.opcode = IB_WR_RDMA_READ_WITH_INV;
 		read_wr.wr.wr_id = (unsigned long)ctxt;
-		read_wr.wr.ex.invalidate_rkey = ctxt->frmr->mr->lkey;
+		read_wr.wr.ex.invalidate_rkey = ctxt->frmr->mr->key;
 	} else {
 		read_wr.wr.opcode = IB_WR_RDMA_READ;
 		read_wr.wr.next = &inv_wr;
@@ -327,7 +326,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 		inv_wr.wr_id = (unsigned long)ctxt;
 		inv_wr.opcode = IB_WR_LOCAL_INV;
 		inv_wr.send_flags = IB_SEND_SIGNALED | IB_SEND_FENCE;
-		inv_wr.ex.invalidate_rkey = frmr->mr->lkey;
+		inv_wr.ex.invalidate_rkey = frmr->mr->key;
 	}
 	ctxt->wr_op = read_wr.wr.opcode;
 
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 9f3eb89..779c22d 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -1045,7 +1045,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 				ret);
 			goto errout;
 		}
-		newxprt->sc_dma_lkey = newxprt->sc_phys_mr->lkey;
+		newxprt->sc_dma_lkey = newxprt->sc_phys_mr->key;
 	} else
 		newxprt->sc_dma_lkey = dev->local_dma_lkey;
 
-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 11/11] IB: provide better access flags for fast registrations
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
                     ` (9 preceding siblings ...)
  2015-11-22 17:46   ` [PATCH 10/11] IB: only keep a single key in struct ib_mr Christoph Hellwig
@ 2015-11-22 17:46   ` Christoph Hellwig
       [not found]     ` <1448214409-7729-12-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
  2015-11-23  9:03   ` memory registration updates Sagi Grimberg
  2015-11-23 15:06   ` Steve Wise
  12 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-22 17:46 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Instead of the confusing IB spec values, provide a flags argument that
describes:

  a) the operation we perform the memory registration for, and
  b) whether we want to access it for read or write purposes.

This helps to abstract out the IB vs iWarp differences as well.

Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
 drivers/infiniband/hw/cxgb3/iwch_qp.c       |  3 +-
 drivers/infiniband/hw/cxgb4/qp.c            |  3 +-
 drivers/infiniband/hw/mlx4/qp.c             |  2 +-
 drivers/infiniband/hw/mlx5/qp.c             |  2 +-
 drivers/infiniband/hw/nes/nes_verbs.c       |  2 +-
 drivers/infiniband/hw/ocrdma/ocrdma_verbs.c |  7 ++--
 drivers/infiniband/hw/qib/qib_keys.c        |  2 +-
 drivers/infiniband/ulp/iser/iser_memory.c   |  5 +--
 drivers/infiniband/ulp/isert/ib_isert.c     |  4 +-
 drivers/infiniband/ulp/srp/ib_srp.c         |  5 +--
 include/rdma/ib_verbs.h                     | 60 ++++++++++++++++++++++++++++-
 net/rds/iw_rdma.c                           |  5 +--
 net/rds/iw_send.c                           |  2 +-
 net/sunrpc/xprtrdma/frwr_ops.c              |  7 ++--
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c     |  3 +-
 15 files changed, 86 insertions(+), 26 deletions(-)

diff --git a/drivers/infiniband/hw/cxgb3/iwch_qp.c b/drivers/infiniband/hw/cxgb3/iwch_qp.c
index 87be4be..47b3d0c 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_qp.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_qp.c
@@ -165,7 +165,8 @@ static int build_memreg(union t3_wr *wqe, struct ib_reg_wr *wr,
 		V_FR_PAGE_COUNT(mhp->npages) |
 		V_FR_PAGE_SIZE(ilog2(wr->mr->page_size) - 12) |
 		V_FR_TYPE(TPT_VATO) |
-		V_FR_PERMS(iwch_ib_to_tpt_access(wr->access)));
+		V_FR_PERMS(iwch_ib_to_tpt_access(
+				iwarp_scope_to_access(wr->scope))));
 	p = &wqe->fastreg.pbl_addrs[0];
 	for (i = 0; i < mhp->npages; i++, p++) {
 
diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
index cb8031b..d28a5a3 100644
--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -621,7 +621,8 @@ static int build_memreg(struct t4_sq *sq, union t4_wr *wqe,
 	wqe->fr.qpbinde_to_dcacpu = 0;
 	wqe->fr.pgsz_shift = ilog2(wr->mr->page_size) - 12;
 	wqe->fr.addr_type = FW_RI_VA_BASED_TO;
-	wqe->fr.mem_perms = c4iw_ib_to_tpt_access(wr->access);
+	wqe->fr.mem_perms =
+		c4iw_ib_to_tpt_access(iwarp_scope_to_access(wr->scope));
 	wqe->fr.len_hi = 0;
 	wqe->fr.len_lo = cpu_to_be32(mhp->ibmr.length);
 	wqe->fr.stag = cpu_to_be32(wr->mr->key);
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index afba1a9..0e0d4e4 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -2509,7 +2509,7 @@ static void set_reg_seg(struct mlx4_wqe_fmr_seg *fseg,
 {
 	struct mlx4_ib_mr *mr = to_mmr(wr->mr);
 
-	fseg->flags		= convert_access(wr->access);
+	fseg->flags		= convert_access(ib_scope_to_access(wr->scope));
 	fseg->mem_key		= cpu_to_be32(wr->mr->key);
 	fseg->buf_list		= cpu_to_be64(mr->page_map);
 	fseg->start_addr	= cpu_to_be64(mr->ibmr.iova);
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index ba39045..6eef2cb 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -2449,7 +2449,7 @@ static int set_reg_wr(struct mlx5_ib_qp *qp,
 	if (unlikely((*seg == qp->sq.qend)))
 		*seg = mlx5_get_send_wqe(qp, 0);
 
-	set_reg_mkey_seg(*seg, mr, wr->mr->key, wr->access);
+	set_reg_mkey_seg(*seg, mr, wr->mr->key, ib_scope_to_access(wr->scope));
 	*seg += sizeof(struct mlx5_mkey_seg);
 	*size += sizeof(struct mlx5_mkey_seg) / 16;
 	if (unlikely((*seg == qp->sq.qend)))
diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c
index f7f1f78..93c1231 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.c
+++ b/drivers/infiniband/hw/nes/nes_verbs.c
@@ -3196,7 +3196,7 @@ static int nes_post_send(struct ib_qp *ibqp, struct ib_send_wr *ib_wr,
 		{
 			struct nes_mr *mr = to_nesmr(reg_wr(ib_wr)->mr);
 			int page_shift = ilog2(reg_wr(ib_wr)->mr->page_size);
-			int flags = reg_wr(ib_wr)->access;
+			int flags = iwarp_scope_to_access(reg_wr(ib_wr)->scope);
 
 			if (mr->npages > (NES_4K_PBL_CHUNK_SIZE / sizeof(u64))) {
 				nes_debug(NES_DBG_IW_TX, "SQ_FMR: bad page_list_len\n");
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index 6d09634..43a0d24 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -2100,6 +2100,7 @@ static int ocrdma_build_reg(struct ocrdma_qp *qp,
 	struct ocrdma_pbl *pbl_tbl = mr->hwmr.pbl_table;
 	struct ocrdma_pbe *pbe;
 	u32 wqe_size = sizeof(*fast_reg) + sizeof(*hdr);
+	int access = ib_scope_to_access(wr->scope);
 	int num_pbes = 0, i;
 
 	wqe_size = roundup(wqe_size, OCRDMA_WQE_ALIGN_BYTES);
@@ -2107,11 +2108,11 @@ static int ocrdma_build_reg(struct ocrdma_qp *qp,
 	hdr->cw |= (OCRDMA_FR_MR << OCRDMA_WQE_OPCODE_SHIFT);
 	hdr->cw |= ((wqe_size / OCRDMA_WQE_STRIDE) << OCRDMA_WQE_SIZE_SHIFT);
 
-	if (wr->access & IB_ACCESS_LOCAL_WRITE)
+	if (access & IB_ACCESS_LOCAL_WRITE)
 		hdr->rsvd_lkey_flags |= OCRDMA_LKEY_FLAG_LOCAL_WR;
-	if (wr->access & IB_ACCESS_REMOTE_WRITE)
+	if (access & IB_ACCESS_REMOTE_WRITE)
 		hdr->rsvd_lkey_flags |= OCRDMA_LKEY_FLAG_REMOTE_WR;
-	if (wr->access & IB_ACCESS_REMOTE_READ)
+	if (access & IB_ACCESS_REMOTE_READ)
 		hdr->rsvd_lkey_flags |= OCRDMA_LKEY_FLAG_REMOTE_RD;
 	hdr->lkey = wr->mr->key;
 	hdr->total_len = mr->ibmr.length;
diff --git a/drivers/infiniband/hw/qib/qib_keys.c b/drivers/infiniband/hw/qib/qib_keys.c
index 1be4807..8620d22 100644
--- a/drivers/infiniband/hw/qib/qib_keys.c
+++ b/drivers/infiniband/hw/qib/qib_keys.c
@@ -372,7 +372,7 @@ int qib_reg_mr(struct qib_qp *qp, struct ib_reg_wr *wr)
 	mrg->iova = mr->ibmr.iova;
 	mrg->lkey = key;
 	mrg->length = mr->ibmr.length;
-	mrg->access_flags = wr->access;
+	mrg->access_flags = ib_scope_to_access(wr->scope);
 	page_list = mr->pages;
 	m = 0;
 	n = 0;
diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c
index fb4ca76..6767fc6 100644
--- a/drivers/infiniband/ulp/iser/iser_memory.c
+++ b/drivers/infiniband/ulp/iser/iser_memory.c
@@ -501,9 +501,8 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task,
 	wr->wr.send_flags = 0;
 	wr->wr.num_sge = 0;
 	wr->mr = mr;
-	wr->access = IB_ACCESS_LOCAL_WRITE  |
-		     IB_ACCESS_REMOTE_WRITE |
-		     IB_ACCESS_REMOTE_READ;
+	/* XXX: pass read vs write flag */
+	wr->scope = IB_REG_RKEY | IB_REG_OP_RDMA_READ | IB_REG_OP_RDMA_WRITE;
 
 	rsc->mr_valid = 0;
 
diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index d38a1e7..de94ce7 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -2548,7 +2548,9 @@ isert_fast_reg_mr(struct isert_conn *isert_conn,
 	reg_wr.wr.send_flags = 0;
 	reg_wr.wr.num_sge = 0;
 	reg_wr.mr = mr;
-	reg_wr.access = IB_ACCESS_LOCAL_WRITE;
+	/* XXX: pass read vs write flag */
+	reg_wr.scope = IB_REG_LKEY | IB_REG_OP_RDMA_READ | IB_REG_OP_RDMA_WRITE;
+
 
 	if (!wr)
 		wr = &reg_wr.wr;
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index 6a8ef10..f67aa2a 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -1350,9 +1350,8 @@ static int srp_map_finish_fr(struct srp_map_state *state,
 	wr.wr.num_sge = 0;
 	wr.wr.send_flags = 0;
 	wr.mr = desc->mr;
-	wr.access = (IB_ACCESS_LOCAL_WRITE |
-		     IB_ACCESS_REMOTE_READ |
-		     IB_ACCESS_REMOTE_WRITE);
+	/* XXX: pass read vs write flag */
+	wr.scope = IB_REG_RKEY | IB_REG_OP_RDMA_READ | IB_REG_OP_RDMA_WRITE;
 
 	*state->fr.next++ = desc;
 	state->nmdesc++;
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 51dd3a7..ca2aedc 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -1086,10 +1086,12 @@ static inline struct ib_ud_wr *ud_wr(struct ib_send_wr *wr)
 	return container_of(wr, struct ib_ud_wr, wr);
 }
 
+typedef unsigned __bitwise__ ib_reg_scope_t;
+
 struct ib_reg_wr {
 	struct ib_send_wr	wr;
 	struct ib_mr		*mr;
-	int			access;
+	ib_reg_scope_t		scope;
 };
 
 static inline struct ib_reg_wr *reg_wr(struct ib_send_wr *wr)
@@ -1128,6 +1130,62 @@ enum ib_access_flags {
 };
 
 /*
+ * Decide if this is a target of remote operations (rkey),
+ * or a source of local data (lkey).  Only one of these
+ * may be used at a given time.
+ */
+#define IB_REG_LKEY		(ib_reg_scope_t)0x0000
+#define IB_REG_RKEY		(ib_reg_scope_t)0x0001
+
+/*
+ * Operation we're registering for.  Multiple operations
+ * can be used if absolutely needed.
+ */
+#define IB_REG_OP_SEND		(ib_reg_scope_t)0x0010
+#define IB_REG_OP_RDMA_READ	(ib_reg_scope_t)0x0020
+#define IB_REG_OP_RDMA_WRITE	(ib_reg_scope_t)0x0040
+/* add IB_REG_OP_ATOMIC when needed */
+
+static inline int ib_scope_to_access(ib_reg_scope_t scope)
+{
+	unsigned int acc = 0;
+
+	if (scope & IB_REG_RKEY) {
+		WARN_ON(scope & IB_REG_OP_SEND);
+
+		if (scope & IB_REG_OP_RDMA_READ)
+			acc |= IB_ACCESS_REMOTE_READ;
+		if (scope & IB_REG_OP_RDMA_WRITE)
+			acc |= IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE;
+	} else {
+		if (scope & IB_REG_OP_RDMA_READ)
+			acc |= IB_ACCESS_LOCAL_WRITE;
+	}
+
+	return acc;
+}
+
+static inline int iwarp_scope_to_access(ib_reg_scope_t scope)
+{
+	unsigned int acc = 0;
+
+	if (scope & IB_REG_RKEY) {
+		WARN_ON(scope & IB_REG_OP_SEND);
+
+		if (scope & IB_REG_OP_RDMA_READ)
+			acc |= IB_ACCESS_REMOTE_READ;
+		if (scope & IB_REG_OP_RDMA_WRITE)
+			acc |= IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE;
+	} else {
+		if (scope & IB_REG_OP_RDMA_READ)
+			acc |= IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE;
+	}
+
+	return acc;
+}
+
+
+/*
  * XXX: these are apparently used for ->rereg_user_mr, no idea why they
  * are hidden here instead of a uapi header!
  */
diff --git a/net/rds/iw_rdma.c b/net/rds/iw_rdma.c
index 3e683a9..205ed6b 100644
--- a/net/rds/iw_rdma.c
+++ b/net/rds/iw_rdma.c
@@ -675,9 +675,8 @@ static int rds_iw_rdma_reg_mr(struct rds_iw_mapping *mapping)
 	reg_wr.wr.wr_id = RDS_IW_REG_WR_ID;
 	reg_wr.wr.num_sge = 0;
 	reg_wr.mr = ibmr->mr;
-	reg_wr.access = IB_ACCESS_LOCAL_WRITE |
-			IB_ACCESS_REMOTE_READ |
-			IB_ACCESS_REMOTE_WRITE;
+	/* XXX: pass read vs write flag */
+	reg_wr.scope = IB_REG_RKEY | IB_REG_OP_RDMA_READ | IB_REG_OP_RDMA_WRITE;
 
 	/*
 	 * Perform a WR for the reg_mr. Each individual page
diff --git a/net/rds/iw_send.c b/net/rds/iw_send.c
index acfe38a..b83233f 100644
--- a/net/rds/iw_send.c
+++ b/net/rds/iw_send.c
@@ -775,7 +775,7 @@ static int rds_iw_build_send_reg(struct rds_iw_send_work *send,
 	send->s_reg_wr.wr.wr_id = 0;
 	send->s_reg_wr.wr.num_sge = 0;
 	send->s_reg_wr.mr = send->s_mr;
-	send->s_reg_wr.access = IB_ACCESS_REMOTE_WRITE;
+	send->s_reg_wr.scope = IB_REG_LKEY | IB_REG_OP_RDMA_READ;
 
 	ib_update_fast_reg_key(send->s_mr, send->s_remap_count++);
 
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 0b1b89b..52f99eb 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -386,9 +386,10 @@ frwr_op_map(struct rpcrdma_xprt *r_xprt, struct rpcrdma_mr_seg *seg,
 	reg_wr.wr.num_sge = 0;
 	reg_wr.wr.send_flags = 0;
 	reg_wr.mr = mr;
-	reg_wr.access = writing ?
-			IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
-			IB_ACCESS_REMOTE_READ;
+	if (writing)
+		reg_wr.scope = IB_REG_RKEY | IB_REG_OP_RDMA_WRITE;
+	else
+		reg_wr.scope = IB_REG_RKEY | IB_REG_OP_RDMA_READ;
 
 	DECR_CQCOUNT(&r_xprt->rx_ep);
 	rc = ib_post_send(ia->ri_id->qp, &reg_wr.wr, &bad_wr);
diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index b1d7528..9bd9709 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -239,7 +239,6 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 	read = min_t(int, (nents << PAGE_SHIFT) - *page_offset, rs_length);
 
 	frmr->direction = DMA_FROM_DEVICE;
-	frmr->access_flags = (IB_ACCESS_LOCAL_WRITE|IB_ACCESS_REMOTE_WRITE);
 	frmr->sg_nents = nents;
 
 	for (pno = 0; pno < nents; pno++) {
@@ -304,7 +303,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
 	reg_wr.wr.send_flags = IB_SEND_SIGNALED;
 	reg_wr.wr.num_sge = 0;
 	reg_wr.mr = frmr->mr;
-	reg_wr.access = frmr->access_flags;
+	reg_wr.scope = IB_REG_LKEY | IB_REG_OP_RDMA_READ;
 	reg_wr.wr.next = &read_wr.wr;
 
 	/* Prepare RDMA_READ */
-- 
1.9.1


* Re: memory registration updates
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
                     ` (10 preceding siblings ...)
  2015-11-22 17:46   ` [PATCH 11/11] IB: provide better access flags for fast registrations Christoph Hellwig
@ 2015-11-23  9:03   ` Sagi Grimberg
       [not found]     ` <5652D66E.5070600-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
  2015-11-23 15:06   ` Steve Wise
  12 siblings, 1 reply; 35+ messages in thread
From: Sagi Grimberg @ 2015-11-23  9:03 UTC (permalink / raw)
  To: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Christoph,

> This series removes huge chunks of code related to old memory
> registration methods that we don't use anymore, and then simplifies the
> current memory registration API

Let's split out patches 10,11 from this set because these patches are
logically completely different from the rest of the series, which is a
cleanup by nature.

I suggest you send out patches 10,11 as a separate RFC series.

For patches 1-9,

Reviewed-by: Sagi Grimberg <sagig-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

* Re: memory registration updates
       [not found]     ` <5652D66E.5070600-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
@ 2015-11-23 14:16       ` Christoph Hellwig
       [not found]         ` <20151123141650.GA11116-jcswGhMUV9g@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-23 14:16 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Mon, Nov 23, 2015 at 11:03:42AM +0200, Sagi Grimberg wrote:
> Christoph,
>
>> This series removes huge chunks of code related to old memory
>> registration methods that we don't use anymore, and then simplifies the
>> current memory registration API
>
> Let's split out patches 10,11 from this set because these patches are
> logically completely different from the rest of the series which is a
> cleanup by nature.
>
> I suggest you send out patches 10,11 as a separate RFC series.

I sent 1-9 out separately earlier :)  The other two sit on top of them,
and in a sense 1-9 are prep patches, as they remove a lot of users
of struct ib_mr that I don't have to modify in patches 10 and 11.

* Re: memory registration updates
       [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
                     ` (11 preceding siblings ...)
  2015-11-23  9:03   ` memory registration updates Sagi Grimberg
@ 2015-11-23 15:06   ` Steve Wise
  12 siblings, 0 replies; 35+ messages in thread
From: Steve Wise @ 2015-11-23 15:06 UTC (permalink / raw)
  To: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 11/22/2015 11:46 AM, Christoph Hellwig wrote:
> This series removes huge chunks of code related to old memory
> registration methods that we don't use anymore, and then simplifies the
> current memory registration API
>
> This expects my "IB: merge struct ib_device_attr into struct ib_device"
> patch to be already applied.
>
> Also available as a git tree:
>
> 	http://git.infradead.org/users/hch/rdma.git/shortlog/refs/heads/rdma-mr
> 	git://git.infradead.org/users/hch/rdma.git rdma-mr
>
>

Series looks good.

Reviewed-by: Steve Wise <swise-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW@public.gmane.org>

* Re: [PATCH 11/11] IB: provide better access flags for fast registrations
       [not found]     ` <1448214409-7729-12-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
@ 2015-11-23 15:09       ` Chuck Lever
       [not found]         ` <EE465F12-A5F6-4C95-A77C-199DFF8F77BD-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
  2015-11-23 18:58       ` Jason Gunthorpe
  1 sibling, 1 reply; 35+ messages in thread
From: Chuck Lever @ 2015-11-23 15:09 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA


> On Nov 22, 2015, at 12:46 PM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
> 
> Instead of the confusing IB spec values provide a flags argument that
> describes:
> 
>  a) the operation we perform the memory registration for, and
>  b) if we want to access it for read or write purposes.
> 
> This helps to abstract out the IB vs iWarp differences as well.
> 
> Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>

net/sunrpc/xprtrdma/*.c

Reviewed-by: Chuck Lever <chuck.lever-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>


Out of curiosity, why are you keeping the IB_ACCESS flags?
It would be more efficient for providers to convert the
scope information directly into native permissions.


> [...]
> 	reg_wr.mr = mr;
> -	reg_wr.access = writing ?
> -			IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
> -			IB_ACCESS_REMOTE_READ;
> +	if (writing)
> +		reg_wr.scope = IB_REG_RKEY | IB_REG_OP_RDMA_WRITE;
> +	else
> +		reg_wr.scope = IB_REG_RKEY | IB_REG_OP_RDMA_READ;
> 
> 	DECR_CQCOUNT(&r_xprt->rx_ep);
> 	rc = ib_post_send(ia->ri_id->qp, &reg_wr.wr, &bad_wr);
> diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
> index b1d7528..9bd9709 100644
> --- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
> +++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
> @@ -239,7 +239,6 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
> 	read = min_t(int, (nents << PAGE_SHIFT) - *page_offset, rs_length);
> 
> 	frmr->direction = DMA_FROM_DEVICE;
> -	frmr->access_flags = (IB_ACCESS_LOCAL_WRITE|IB_ACCESS_REMOTE_WRITE);
> 	frmr->sg_nents = nents;
> 
> 	for (pno = 0; pno < nents; pno++) {
> @@ -304,7 +303,7 @@ int rdma_read_chunk_frmr(struct svcxprt_rdma *xprt,
> 	reg_wr.wr.send_flags = IB_SEND_SIGNALED;
> 	reg_wr.wr.num_sge = 0;
> 	reg_wr.mr = frmr->mr;
> -	reg_wr.access = frmr->access_flags;
> +	reg_wr.scope = IB_REG_LKEY | IB_REG_OP_RDMA_READ;
> 	reg_wr.wr.next = &read_wr.wr;
> 
> 	/* Prepare RDMA_READ */
> -- 
> 1.9.1
> 

--
Chuck Lever






* Re: memory registration updates
       [not found]         ` <20151123141650.GA11116-jcswGhMUV9g@public.gmane.org>
@ 2015-11-23 15:17           ` Sagi Grimberg
       [not found]             ` <56532E0A.20907-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Sagi Grimberg @ 2015-11-23 15:17 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA


> I sent 1-9 out separately earlier :)  The other two sit on top of them
> and they are prep patches in a sense, as they remove a lot of users
> of struct ib_mr that I don't have to modify in patches 10 and 11.

Still, patches 10 and 11 are not really a part of this patchset.
I think they need to stand on their own, while patches 1-9
are really just cleanups.

I'll look at patches 10,11 once I get a bit of free time...


* Re: [PATCH 11/11] IB: provide better access flags for fast registrations
       [not found]         ` <EE465F12-A5F6-4C95-A77C-199DFF8F77BD-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
@ 2015-11-23 15:49           ` Christoph Hellwig
  0 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-23 15:49 UTC (permalink / raw)
  To: Chuck Lever
  Cc: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Mon, Nov 23, 2015 at 10:09:05AM -0500, Chuck Lever wrote:
> Out of curiosity, why are you keeping the IB_ACCESS flags?

We'll still need them for all kinds of other use cases
(ib_get_dma_mr, userspace MRs, qp_access_flags).

> It would be more efficient for providers to convert the
> scope information directly into native permissions.

For the FR mapping path I'd expect the maintained drivers
to eventually do a direct translation.  But I'd much rather
let the maintainers implement that instead of doing it
in a global search and replace.
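
As an illustration of the direct translation Chuck describes, a provider
could map the scope bits straight onto its native permission bits and skip
the generic IB_ACCESS_* step.  This is only a sketch: the HW_PERM_* values
and the helper name are made up, not any real HCA's layout; the IB_REG_*
values are copied from patch 11.

```c
/* Hypothetical provider-native permission bits -- illustration only,
 * not any real HCA's register layout. */
#define HW_PERM_REMOTE_READ   0x1u
#define HW_PERM_REMOTE_WRITE  0x2u
#define HW_PERM_LOCAL_WRITE   0x4u

typedef unsigned int ib_reg_scope_t;

#define IB_REG_RKEY           ((ib_reg_scope_t)0x0001)
#define IB_REG_OP_RDMA_READ   ((ib_reg_scope_t)0x0020)
#define IB_REG_OP_RDMA_WRITE  ((ib_reg_scope_t)0x0040)

/* Translate the registration scope directly to native bits, mirroring
 * the IB semantics of ib_scope_to_access() from the patch. */
static unsigned int hypothetical_scope_to_hw_perms(ib_reg_scope_t scope)
{
	unsigned int perms = 0;

	if (scope & IB_REG_RKEY) {
		if (scope & IB_REG_OP_RDMA_READ)
			perms |= HW_PERM_REMOTE_READ;
		if (scope & IB_REG_OP_RDMA_WRITE)
			perms |= HW_PERM_REMOTE_WRITE | HW_PERM_LOCAL_WRITE;
	} else {
		/* lkey: local data sink of an RDMA READ. */
		if (scope & IB_REG_OP_RDMA_READ)
			perms |= HW_PERM_LOCAL_WRITE;
	}
	return perms;
}
```

The shape is identical to the generic helper, which is why leaving the
conversion to each maintainer is cheap.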


* Re: [PATCH 11/11] IB: provide better access flags for fast registrations
       [not found]     ` <1448214409-7729-12-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
  2015-11-23 15:09       ` Chuck Lever
@ 2015-11-23 18:58       ` Jason Gunthorpe
       [not found]         ` <20151123185829.GE32085-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
  1 sibling, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2015-11-23 18:58 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Sun, Nov 22, 2015 at 06:46:49PM +0100, Christoph Hellwig wrote:
> Instead of the confusing IB spec values, provide a flags argument that
> describes:
> 
>   a) the operation we perform the memory registration for, and
>   b) if we want to access it for read or write purposes.
> 
> This helps to abstract out the IB vs iWarp differences as well.

This is so much better, thanks

> +#define IB_REG_LKEY		(ib_reg_scope_t)0x0000
> +#define IB_REG_RKEY		(ib_reg_scope_t)0x0001

Wrap in () just for convention?

> +static inline int ib_scope_to_access(ib_reg_scope_t scope)
> +{
> +	unsigned int acc = 0;
> +
> +	if (scope & IB_REG_RKEY) {
> +		WARN_ON(scope & IB_REG_OP_SEND);
> +
> +		if (scope & IB_REG_OP_RDMA_READ)
> +			acc |= IB_ACCESS_REMOTE_READ;
> +		if (scope & IB_REG_OP_RDMA_WRITE)
> +			acc |= IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE;
> +	} else {
> +		if (scope & IB_REG_OP_RDMA_READ)
> +			acc |= IB_ACCESS_LOCAL_WRITE;

> +	}
> +
> +	return acc;
> +}
> +
> +static inline int iwarp_scope_to_access(ib_reg_scope_t scope)
> +{

Maybe

unsigned int acc = ib_scope_to_access(scope);

if (!(scope & IB_REG_RKEY) && (scope & IB_REG_OP_RDMA_READ))
   acc |= IB_ACCESS_REMOTE_WRITE;

return acc;

Makes it a bit clearer what the only difference is: the local data sink
of an iWarp RDMA READ must also allow remote write.
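
For reference, here are the patch's two conversion helpers pulled out of
the diff context and compiled as plain C.  The IB_ACCESS_* values are
stand-ins (the real ones live in ib_verbs.h), the WARN_ON on
IB_REG_OP_SEND is dropped for userspace, and the iWarp variant is written
as the IB one plus its single delta -- the lkey + RDMA READ case, per the
patch's iwarp_scope_to_access() above.

```c
/* Stand-ins for the kernel's IB_ACCESS_* values (illustrative only). */
#define IB_ACCESS_LOCAL_WRITE   (1u << 0)
#define IB_ACCESS_REMOTE_WRITE  (1u << 1)
#define IB_ACCESS_REMOTE_READ   (1u << 2)

typedef unsigned int ib_reg_scope_t;

#define IB_REG_LKEY           ((ib_reg_scope_t)0x0000)
#define IB_REG_RKEY           ((ib_reg_scope_t)0x0001)
#define IB_REG_OP_RDMA_READ   ((ib_reg_scope_t)0x0020)
#define IB_REG_OP_RDMA_WRITE  ((ib_reg_scope_t)0x0040)

static unsigned int ib_scope_to_access(ib_reg_scope_t scope)
{
	unsigned int acc = 0;

	if (scope & IB_REG_RKEY) {
		if (scope & IB_REG_OP_RDMA_READ)
			acc |= IB_ACCESS_REMOTE_READ;
		if (scope & IB_REG_OP_RDMA_WRITE)
			acc |= IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE;
	} else {
		if (scope & IB_REG_OP_RDMA_READ)
			acc |= IB_ACCESS_LOCAL_WRITE;
	}
	return acc;
}

/* iWarp differs from IB only for lkey + RDMA READ: the local data sink
 * of an RDMA READ must also be remotely writable. */
static unsigned int iwarp_scope_to_access(ib_reg_scope_t scope)
{
	unsigned int acc = ib_scope_to_access(scope);

	if (!(scope & IB_REG_RKEY) && (scope & IB_REG_OP_RDMA_READ))
		acc |= IB_ACCESS_REMOTE_WRITE;
	return acc;
}
```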

Is this enough to purge the cap test related to this?

Jason


* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]     ` <1448214409-7729-11-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
@ 2015-11-23 19:41       ` Jason Gunthorpe
       [not found]         ` <20151123194124.GF32085-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
  2015-12-22  9:17       ` Sagi Grimberg
  1 sibling, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2015-11-23 19:41 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Sun, Nov 22, 2015 at 06:46:48PM +0100, Christoph Hellwig wrote:
> While IB supports the notion of returning separate local and remote keys
> from a memory registration, the iWarp spec doesn't, and neither do any
> of our in-tree HCA drivers [1] or consumers.  Consolidate the in-kernel
> API to provide only a single key and make everyone's life easier.
> 
> [1] the EHCA driver, which is in the staging tree on its way out, can
>     actually return two values from its thick firmware interface.
>     I doubt they were ever different, though.

I like this too, but I'm a little worried this makes the API more
confusing - ideally, we'd get rid of all the IB_ACCESS stuff from
within the kernel completely.  Then it makes sense: the mr is created
as an rkey/lkey mr, the single return is only one kind, and that kind
must go in the lkey/rkey spots in other API places.

Anyhow, for the core stuff:

Reviewed-by: Jason Gunthorpe <jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>

Jason


* Re: memory registration updates
       [not found]             ` <56532E0A.20907-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
@ 2015-11-23 19:43               ` Christoph Hellwig
  0 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-23 19:43 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Mon, Nov 23, 2015 at 05:17:30PM +0200, Sagi Grimberg wrote:
>
>> I sent 1-9 out separately earlier :)  The other two sit on top of them
>> and they are prep patches in a sense, as they remove a lot of users
>> of struct ib_mr that I don't have to modify in patches 10 and 11.
>
> Still, patches 10 and 11 are not really a part of this patchset.
> I think they need to stand on their own, while patches 1-9
> are really just cleanups.

Alright, I'll split the patches next.  Still hoping I won't have to resend
the first 9...


* Re: [PATCH 11/11] IB: provide better access flags for fast registrations
       [not found]         ` <20151123185829.GE32085-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2015-11-23 19:44           ` Christoph Hellwig
  0 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-23 19:44 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Mon, Nov 23, 2015 at 11:58:29AM -0700, Jason Gunthorpe wrote:
> > +#define IB_REG_LKEY		(ib_reg_scope_t)0x0000
> > +#define IB_REG_RKEY		(ib_reg_scope_t)0x0001
> 
> Wrap in () just for convention?

Ok.

> Maybe
> 
> unsigned int acc = ib_scope_to_access(scope);
> if ((scope & (IB_REG_RKEY | IB_REG_OP_RDMA_READ)) == (IB_REG_RKEY | IB_REG_OP_RDMA_READ))
>    acc |= IB_ACCESS_LOCAL_WRITE;
> 
> return acc;
> 
> Makes it a bit clearer what the only difference is.

I can do that.

> Is this enough to purge the cap test related to this?

I thought we didn't even have a cap check for it yet?


* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]         ` <20151123194124.GF32085-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2015-11-23 19:48           ` Christoph Hellwig
  0 siblings, 0 replies; 35+ messages in thread
From: Christoph Hellwig @ 2015-11-23 19:48 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Mon, Nov 23, 2015 at 12:41:24PM -0700, Jason Gunthorpe wrote:
> I like this too, but, I'm a little worried this makes the API more
> confusing - ideally, we'd get rid of all the IB_ACCESS stuff from
> within the kernel completely.

That's my plan - at least for MRs.  The only places still using it
are ib_get_dma_mr and FMRs.  Both of them should be switched to this
new API - ib_get_dma_mr is trivial and will follow next, while for
FMRs I was going to look at changing the code to mirror what Sagi did
for FRs.  That is, if we can't get rid of FMRs entirely soon.


* Re: [PATCH 03/11] IB: remove support for phys MRs
       [not found]     ` <1448214409-7729-4-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
@ 2015-11-24 12:01       ` Devesh Sharma
  0 siblings, 0 replies; 35+ messages in thread
From: Devesh Sharma @ 2015-11-24 12:01 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Doug Ledford, linux-rdma-u79uwXL29TY76Z2rM5mHXA

Reviewed-By: Devesh Sharma <devesh.sharma-1wcpHE2jlwO1Z/+hSey0Gg@public.gmane.org> (ocrdma)

On Sun, Nov 22, 2015 at 11:16 PM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
> We stopped using phys MRs in the kernel a while ago, so let's
> remove all the cruft used to implement them.
>
> Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
> Reviewed-by: Steve Wise <swise-7bPotxP6k4+P2YhJcF5u+vpXobYPEAuW@public.gmane.org>   [cxgb3, cxgb4]
> ---
>  drivers/infiniband/hw/cxgb3/iwch_mem.c       |  31 ---
>  drivers/infiniband/hw/cxgb3/iwch_provider.c  |  69 ------
>  drivers/infiniband/hw/cxgb3/iwch_provider.h  |   4 -
>  drivers/infiniband/hw/cxgb4/iw_cxgb4.h       |  11 -
>  drivers/infiniband/hw/cxgb4/mem.c            | 248 ---------------------
>  drivers/infiniband/hw/cxgb4/provider.c       |   2 -
>  drivers/infiniband/hw/mthca/mthca_provider.c |  84 -------
>  drivers/infiniband/hw/nes/nes_cm.c           |   7 +-
>  drivers/infiniband/hw/nes/nes_verbs.c        |   3 +-
>  drivers/infiniband/hw/nes/nes_verbs.h        |   5 +
>  drivers/infiniband/hw/ocrdma/ocrdma_main.c   |   1 -
>  drivers/infiniband/hw/ocrdma/ocrdma_verbs.c  | 163 --------------
>  drivers/infiniband/hw/ocrdma/ocrdma_verbs.h  |   3 -
>  drivers/infiniband/hw/qib/qib_mr.c           |  51 +----
>  drivers/infiniband/hw/qib/qib_verbs.c        |   1 -
>  drivers/infiniband/hw/qib/qib_verbs.h        |   4 -
>  drivers/staging/rdma/amso1100/c2_provider.c  |   1 -
>  drivers/staging/rdma/ehca/ehca_iverbs.h      |  11 -
>  drivers/staging/rdma/ehca/ehca_main.c        |   2 -
>  drivers/staging/rdma/ehca/ehca_mrmw.c        | 321 ---------------------------
>  drivers/staging/rdma/ehca/ehca_mrmw.h        |   5 -
>  drivers/staging/rdma/hfi1/mr.c               |  51 +----
>  drivers/staging/rdma/hfi1/verbs.c            |   1 -
>  drivers/staging/rdma/hfi1/verbs.h            |   4 -
>  drivers/staging/rdma/ipath/ipath_mr.c        |  55 -----
>  drivers/staging/rdma/ipath/ipath_verbs.c     |   1 -
>  drivers/staging/rdma/ipath/ipath_verbs.h     |   4 -
>  include/rdma/ib_verbs.h                      |  16 +-
>  28 files changed, 15 insertions(+), 1144 deletions(-)
>
> diff --git a/drivers/infiniband/hw/cxgb3/iwch_mem.c b/drivers/infiniband/hw/cxgb3/iwch_mem.c
> index 5c36ee2..3a5e27d 100644
> --- a/drivers/infiniband/hw/cxgb3/iwch_mem.c
> +++ b/drivers/infiniband/hw/cxgb3/iwch_mem.c
> @@ -75,37 +75,6 @@ int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php,
>         return ret;
>  }
>
> -int iwch_reregister_mem(struct iwch_dev *rhp, struct iwch_pd *php,
> -                                       struct iwch_mr *mhp,
> -                                       int shift,
> -                                       int npages)
> -{
> -       u32 stag;
> -       int ret;
> -
> -       /* We could support this... */
> -       if (npages > mhp->attr.pbl_size)
> -               return -ENOMEM;
> -
> -       stag = mhp->attr.stag;
> -       if (cxio_reregister_phys_mem(&rhp->rdev,
> -                                  &stag, mhp->attr.pdid,
> -                                  mhp->attr.perms,
> -                                  mhp->attr.zbva,
> -                                  mhp->attr.va_fbo,
> -                                  mhp->attr.len,
> -                                  shift - 12,
> -                                  mhp->attr.pbl_size, mhp->attr.pbl_addr))
> -               return -ENOMEM;
> -
> -       ret = iwch_finish_mem_reg(mhp, stag);
> -       if (ret)
> -               cxio_dereg_mem(&rhp->rdev, mhp->attr.stag, mhp->attr.pbl_size,
> -                      mhp->attr.pbl_addr);
> -
> -       return ret;
> -}
> -
>  int iwch_alloc_pbl(struct iwch_mr *mhp, int npages)
>  {
>         mhp->attr.pbl_addr = cxio_hal_pblpool_alloc(&mhp->rhp->rdev,
> diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
> index 1567b5b..9576e15 100644
> --- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
> +++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
> @@ -556,73 +556,6 @@ err:
>
>  }
>
> -static int iwch_reregister_phys_mem(struct ib_mr *mr,
> -                                    int mr_rereg_mask,
> -                                    struct ib_pd *pd,
> -                                    struct ib_phys_buf *buffer_list,
> -                                    int num_phys_buf,
> -                                    int acc, u64 * iova_start)
> -{
> -
> -       struct iwch_mr mh, *mhp;
> -       struct iwch_pd *php;
> -       struct iwch_dev *rhp;
> -       __be64 *page_list = NULL;
> -       int shift = 0;
> -       u64 total_size;
> -       int npages = 0;
> -       int ret;
> -
> -       PDBG("%s ib_mr %p ib_pd %p\n", __func__, mr, pd);
> -
> -       /* There can be no memory windows */
> -       if (atomic_read(&mr->usecnt))
> -               return -EINVAL;
> -
> -       mhp = to_iwch_mr(mr);
> -       rhp = mhp->rhp;
> -       php = to_iwch_pd(mr->pd);
> -
> -       /* make sure we are on the same adapter */
> -       if (rhp != php->rhp)
> -               return -EINVAL;
> -
> -       memcpy(&mh, mhp, sizeof *mhp);
> -
> -       if (mr_rereg_mask & IB_MR_REREG_PD)
> -               php = to_iwch_pd(pd);
> -       if (mr_rereg_mask & IB_MR_REREG_ACCESS)
> -               mh.attr.perms = iwch_ib_to_tpt_access(acc);
> -       if (mr_rereg_mask & IB_MR_REREG_TRANS) {
> -               ret = build_phys_page_list(buffer_list, num_phys_buf,
> -                                          iova_start,
> -                                          &total_size, &npages,
> -                                          &shift, &page_list);
> -               if (ret)
> -                       return ret;
> -       }
> -
> -       ret = iwch_reregister_mem(rhp, php, &mh, shift, npages);
> -       kfree(page_list);
> -       if (ret) {
> -               return ret;
> -       }
> -       if (mr_rereg_mask & IB_MR_REREG_PD)
> -               mhp->attr.pdid = php->pdid;
> -       if (mr_rereg_mask & IB_MR_REREG_ACCESS)
> -               mhp->attr.perms = iwch_ib_to_tpt_access(acc);
> -       if (mr_rereg_mask & IB_MR_REREG_TRANS) {
> -               mhp->attr.zbva = 0;
> -               mhp->attr.va_fbo = *iova_start;
> -               mhp->attr.page_size = shift - 12;
> -               mhp->attr.len = (u32) total_size;
> -               mhp->attr.pbl_size = npages;
> -       }
> -
> -       return 0;
> -}
> -
> -
>  static struct ib_mr *iwch_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
>                                       u64 virt, int acc, struct ib_udata *udata)
>  {
> @@ -1414,8 +1347,6 @@ int iwch_register_device(struct iwch_dev *dev)
>         dev->ibdev.resize_cq = iwch_resize_cq;
>         dev->ibdev.poll_cq = iwch_poll_cq;
>         dev->ibdev.get_dma_mr = iwch_get_dma_mr;
> -       dev->ibdev.reg_phys_mr = iwch_register_phys_mem;
> -       dev->ibdev.rereg_phys_mr = iwch_reregister_phys_mem;
>         dev->ibdev.reg_user_mr = iwch_reg_user_mr;
>         dev->ibdev.dereg_mr = iwch_dereg_mr;
>         dev->ibdev.alloc_mw = iwch_alloc_mw;
> diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.h b/drivers/infiniband/hw/cxgb3/iwch_provider.h
> index 2ac85b8..f4fa6d6 100644
> --- a/drivers/infiniband/hw/cxgb3/iwch_provider.h
> +++ b/drivers/infiniband/hw/cxgb3/iwch_provider.h
> @@ -341,10 +341,6 @@ void iwch_unregister_device(struct iwch_dev *dev);
>  void stop_read_rep_timer(struct iwch_qp *qhp);
>  int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php,
>                       struct iwch_mr *mhp, int shift);
> -int iwch_reregister_mem(struct iwch_dev *rhp, struct iwch_pd *php,
> -                                       struct iwch_mr *mhp,
> -                                       int shift,
> -                                       int npages);
>  int iwch_alloc_pbl(struct iwch_mr *mhp, int npages);
>  void iwch_free_pbl(struct iwch_mr *mhp);
>  int iwch_write_pbl(struct iwch_mr *mhp, __be64 *pages, int npages, int offset);
> diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
> index 00e55fa..dd00cf2 100644
> --- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
> +++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
> @@ -968,17 +968,6 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start,
>                                            u64 length, u64 virt, int acc,
>                                            struct ib_udata *udata);
>  struct ib_mr *c4iw_get_dma_mr(struct ib_pd *pd, int acc);
> -struct ib_mr *c4iw_register_phys_mem(struct ib_pd *pd,
> -                                       struct ib_phys_buf *buffer_list,
> -                                       int num_phys_buf,
> -                                       int acc,
> -                                       u64 *iova_start);
> -int c4iw_reregister_phys_mem(struct ib_mr *mr,
> -                                    int mr_rereg_mask,
> -                                    struct ib_pd *pd,
> -                                    struct ib_phys_buf *buffer_list,
> -                                    int num_phys_buf,
> -                                    int acc, u64 *iova_start);
>  int c4iw_dereg_mr(struct ib_mr *ib_mr);
>  int c4iw_destroy_cq(struct ib_cq *ib_cq);
>  struct ib_cq *c4iw_create_cq(struct ib_device *ibdev,
> diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
> index e1629ab..1eb833a 100644
> --- a/drivers/infiniband/hw/cxgb4/mem.c
> +++ b/drivers/infiniband/hw/cxgb4/mem.c
> @@ -392,32 +392,6 @@ static int register_mem(struct c4iw_dev *rhp, struct c4iw_pd *php,
>         return ret;
>  }
>
> -static int reregister_mem(struct c4iw_dev *rhp, struct c4iw_pd *php,
> -                         struct c4iw_mr *mhp, int shift, int npages)
> -{
> -       u32 stag;
> -       int ret;
> -
> -       if (npages > mhp->attr.pbl_size)
> -               return -ENOMEM;
> -
> -       stag = mhp->attr.stag;
> -       ret = write_tpt_entry(&rhp->rdev, 0, &stag, 1, mhp->attr.pdid,
> -                             FW_RI_STAG_NSMR, mhp->attr.perms,
> -                             mhp->attr.mw_bind_enable, mhp->attr.zbva,
> -                             mhp->attr.va_fbo, mhp->attr.len, shift - 12,
> -                             mhp->attr.pbl_size, mhp->attr.pbl_addr);
> -       if (ret)
> -               return ret;
> -
> -       ret = finish_mem_reg(mhp, stag);
> -       if (ret)
> -               dereg_mem(&rhp->rdev, mhp->attr.stag, mhp->attr.pbl_size,
> -                      mhp->attr.pbl_addr);
> -
> -       return ret;
> -}
> -
>  static int alloc_pbl(struct c4iw_mr *mhp, int npages)
>  {
>         mhp->attr.pbl_addr = c4iw_pblpool_alloc(&mhp->rhp->rdev,
> @@ -431,228 +405,6 @@ static int alloc_pbl(struct c4iw_mr *mhp, int npages)
>         return 0;
>  }
>
> -static int build_phys_page_list(struct ib_phys_buf *buffer_list,
> -                               int num_phys_buf, u64 *iova_start,
> -                               u64 *total_size, int *npages,
> -                               int *shift, __be64 **page_list)
> -{
> -       u64 mask;
> -       int i, j, n;
> -
> -       mask = 0;
> -       *total_size = 0;
> -       for (i = 0; i < num_phys_buf; ++i) {
> -               if (i != 0 && buffer_list[i].addr & ~PAGE_MASK)
> -                       return -EINVAL;
> -               if (i != 0 && i != num_phys_buf - 1 &&
> -                   (buffer_list[i].size & ~PAGE_MASK))
> -                       return -EINVAL;
> -               *total_size += buffer_list[i].size;
> -               if (i > 0)
> -                       mask |= buffer_list[i].addr;
> -               else
> -                       mask |= buffer_list[i].addr & PAGE_MASK;
> -               if (i != num_phys_buf - 1)
> -                       mask |= buffer_list[i].addr + buffer_list[i].size;
> -               else
> -                       mask |= (buffer_list[i].addr + buffer_list[i].size +
> -                               PAGE_SIZE - 1) & PAGE_MASK;
> -       }
> -
> -       if (*total_size > 0xFFFFFFFFULL)
> -               return -ENOMEM;
> -
> -       /* Find largest page shift we can use to cover buffers */
> -       for (*shift = PAGE_SHIFT; *shift < 27; ++(*shift))
> -               if ((1ULL << *shift) & mask)
> -                       break;
> -
> -       buffer_list[0].size += buffer_list[0].addr & ((1ULL << *shift) - 1);
> -       buffer_list[0].addr &= ~0ull << *shift;
> -
> -       *npages = 0;
> -       for (i = 0; i < num_phys_buf; ++i)
> -               *npages += (buffer_list[i].size +
> -                       (1ULL << *shift) - 1) >> *shift;
> -
> -       if (!*npages)
> -               return -EINVAL;
> -
> -       *page_list = kmalloc(sizeof(u64) * *npages, GFP_KERNEL);
> -       if (!*page_list)
> -               return -ENOMEM;
> -
> -       n = 0;
> -       for (i = 0; i < num_phys_buf; ++i)
> -               for (j = 0;
> -                    j < (buffer_list[i].size + (1ULL << *shift) - 1) >> *shift;
> -                    ++j)
> -                       (*page_list)[n++] = cpu_to_be64(buffer_list[i].addr +
> -                           ((u64) j << *shift));
> -
> -       PDBG("%s va 0x%llx mask 0x%llx shift %d len %lld pbl_size %d\n",
> -            __func__, (unsigned long long)*iova_start,
> -            (unsigned long long)mask, *shift, (unsigned long long)*total_size,
> -            *npages);
> -
> -       return 0;
> -
> -}
> -
> -int c4iw_reregister_phys_mem(struct ib_mr *mr, int mr_rereg_mask,
> -                            struct ib_pd *pd, struct ib_phys_buf *buffer_list,
> -                            int num_phys_buf, int acc, u64 *iova_start)
> -{
> -
> -       struct c4iw_mr mh, *mhp;
> -       struct c4iw_pd *php;
> -       struct c4iw_dev *rhp;
> -       __be64 *page_list = NULL;
> -       int shift = 0;
> -       u64 total_size;
> -       int npages;
> -       int ret;
> -
> -       PDBG("%s ib_mr %p ib_pd %p\n", __func__, mr, pd);
> -
> -       /* There can be no memory windows */
> -       if (atomic_read(&mr->usecnt))
> -               return -EINVAL;
> -
> -       mhp = to_c4iw_mr(mr);
> -       rhp = mhp->rhp;
> -       php = to_c4iw_pd(mr->pd);
> -
> -       /* make sure we are on the same adapter */
> -       if (rhp != php->rhp)
> -               return -EINVAL;
> -
> -       memcpy(&mh, mhp, sizeof *mhp);
> -
> -       if (mr_rereg_mask & IB_MR_REREG_PD)
> -               php = to_c4iw_pd(pd);
> -       if (mr_rereg_mask & IB_MR_REREG_ACCESS) {
> -               mh.attr.perms = c4iw_ib_to_tpt_access(acc);
> -               mh.attr.mw_bind_enable = (acc & IB_ACCESS_MW_BIND) ==
> -                                        IB_ACCESS_MW_BIND;
> -       }
> -       if (mr_rereg_mask & IB_MR_REREG_TRANS) {
> -               ret = build_phys_page_list(buffer_list, num_phys_buf,
> -                                               iova_start,
> -                                               &total_size, &npages,
> -                                               &shift, &page_list);
> -               if (ret)
> -                       return ret;
> -       }
> -
> -       if (mr_exceeds_hw_limits(rhp, total_size)) {
> -               kfree(page_list);
> -               return -EINVAL;
> -       }
> -
> -       ret = reregister_mem(rhp, php, &mh, shift, npages);
> -       kfree(page_list);
> -       if (ret)
> -               return ret;
> -       if (mr_rereg_mask & IB_MR_REREG_PD)
> -               mhp->attr.pdid = php->pdid;
> -       if (mr_rereg_mask & IB_MR_REREG_ACCESS)
> -               mhp->attr.perms = c4iw_ib_to_tpt_access(acc);
> -       if (mr_rereg_mask & IB_MR_REREG_TRANS) {
> -               mhp->attr.zbva = 0;
> -               mhp->attr.va_fbo = *iova_start;
> -               mhp->attr.page_size = shift - 12;
> -               mhp->attr.len = (u32) total_size;
> -               mhp->attr.pbl_size = npages;
> -       }
> -
> -       return 0;
> -}
> -
> -struct ib_mr *c4iw_register_phys_mem(struct ib_pd *pd,
> -                                    struct ib_phys_buf *buffer_list,
> -                                    int num_phys_buf, int acc, u64 *iova_start)
> -{
> -       __be64 *page_list;
> -       int shift;
> -       u64 total_size;
> -       int npages;
> -       struct c4iw_dev *rhp;
> -       struct c4iw_pd *php;
> -       struct c4iw_mr *mhp;
> -       int ret;
> -
> -       PDBG("%s ib_pd %p\n", __func__, pd);
> -       php = to_c4iw_pd(pd);
> -       rhp = php->rhp;
> -
> -       mhp = kzalloc(sizeof(*mhp), GFP_KERNEL);
> -       if (!mhp)
> -               return ERR_PTR(-ENOMEM);
> -
> -       mhp->rhp = rhp;
> -
> -       /* First check that we have enough alignment */
> -       if ((*iova_start & ~PAGE_MASK) != (buffer_list[0].addr & ~PAGE_MASK)) {
> -               ret = -EINVAL;
> -               goto err;
> -       }
> -
> -       if (num_phys_buf > 1 &&
> -           ((buffer_list[0].addr + buffer_list[0].size) & ~PAGE_MASK)) {
> -               ret = -EINVAL;
> -               goto err;
> -       }
> -
> -       ret = build_phys_page_list(buffer_list, num_phys_buf, iova_start,
> -                                       &total_size, &npages, &shift,
> -                                       &page_list);
> -       if (ret)
> -               goto err;
> -
> -       if (mr_exceeds_hw_limits(rhp, total_size)) {
> -               kfree(page_list);
> -               ret = -EINVAL;
> -               goto err;
> -       }
> -
> -       ret = alloc_pbl(mhp, npages);
> -       if (ret) {
> -               kfree(page_list);
> -               goto err;
> -       }
> -
> -       ret = write_pbl(&mhp->rhp->rdev, page_list, mhp->attr.pbl_addr,
> -                            npages);
> -       kfree(page_list);
> -       if (ret)
> -               goto err_pbl;
> -
> -       mhp->attr.pdid = php->pdid;
> -       mhp->attr.zbva = 0;
> -
> -       mhp->attr.perms = c4iw_ib_to_tpt_access(acc);
> -       mhp->attr.va_fbo = *iova_start;
> -       mhp->attr.page_size = shift - 12;
> -
> -       mhp->attr.len = (u32) total_size;
> -       mhp->attr.pbl_size = npages;
> -       ret = register_mem(rhp, php, mhp, shift);
> -       if (ret)
> -               goto err_pbl;
> -
> -       return &mhp->ibmr;
> -
> -err_pbl:
> -       c4iw_pblpool_free(&mhp->rhp->rdev, mhp->attr.pbl_addr,
> -                             mhp->attr.pbl_size << 3);
> -
> -err:
> -       kfree(mhp);
> -       return ERR_PTR(ret);
> -
> -}
> -
>  struct ib_mr *c4iw_get_dma_mr(struct ib_pd *pd, int acc)
>  {
>         struct c4iw_dev *rhp;
> diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
> index b7703bc..186319e 100644
> --- a/drivers/infiniband/hw/cxgb4/provider.c
> +++ b/drivers/infiniband/hw/cxgb4/provider.c
> @@ -509,8 +509,6 @@ int c4iw_register_device(struct c4iw_dev *dev)
>         dev->ibdev.resize_cq = c4iw_resize_cq;
>         dev->ibdev.poll_cq = c4iw_poll_cq;
>         dev->ibdev.get_dma_mr = c4iw_get_dma_mr;
> -       dev->ibdev.reg_phys_mr = c4iw_register_phys_mem;
> -       dev->ibdev.rereg_phys_mr = c4iw_reregister_phys_mem;
>         dev->ibdev.reg_user_mr = c4iw_reg_user_mr;
>         dev->ibdev.dereg_mr = c4iw_dereg_mr;
>         dev->ibdev.alloc_mw = c4iw_alloc_mw;
> diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
> index 28d7a8b..6d0a1db 100644
> --- a/drivers/infiniband/hw/mthca/mthca_provider.c
> +++ b/drivers/infiniband/hw/mthca/mthca_provider.c
> @@ -892,89 +892,6 @@ static struct ib_mr *mthca_get_dma_mr(struct ib_pd *pd, int acc)
>         return &mr->ibmr;
>  }
>
> -static struct ib_mr *mthca_reg_phys_mr(struct ib_pd       *pd,
> -                                      struct ib_phys_buf *buffer_list,
> -                                      int                 num_phys_buf,
> -                                      int                 acc,
> -                                      u64                *iova_start)
> -{
> -       struct mthca_mr *mr;
> -       u64 *page_list;
> -       u64 total_size;
> -       unsigned long mask;
> -       int shift;
> -       int npages;
> -       int err;
> -       int i, j, n;
> -
> -       mask = buffer_list[0].addr ^ *iova_start;
> -       total_size = 0;
> -       for (i = 0; i < num_phys_buf; ++i) {
> -               if (i != 0)
> -                       mask |= buffer_list[i].addr;
> -               if (i != num_phys_buf - 1)
> -                       mask |= buffer_list[i].addr + buffer_list[i].size;
> -
> -               total_size += buffer_list[i].size;
> -       }
> -
> -       if (mask & ~PAGE_MASK)
> -               return ERR_PTR(-EINVAL);
> -
> -       shift = __ffs(mask | 1 << 31);
> -
> -       buffer_list[0].size += buffer_list[0].addr & ((1ULL << shift) - 1);
> -       buffer_list[0].addr &= ~0ull << shift;
> -
> -       mr = kmalloc(sizeof *mr, GFP_KERNEL);
> -       if (!mr)
> -               return ERR_PTR(-ENOMEM);
> -
> -       npages = 0;
> -       for (i = 0; i < num_phys_buf; ++i)
> -               npages += (buffer_list[i].size + (1ULL << shift) - 1) >> shift;
> -
> -       if (!npages)
> -               return &mr->ibmr;
> -
> -       page_list = kmalloc(npages * sizeof *page_list, GFP_KERNEL);
> -       if (!page_list) {
> -               kfree(mr);
> -               return ERR_PTR(-ENOMEM);
> -       }
> -
> -       n = 0;
> -       for (i = 0; i < num_phys_buf; ++i)
> -               for (j = 0;
> -                    j < (buffer_list[i].size + (1ULL << shift) - 1) >> shift;
> -                    ++j)
> -                       page_list[n++] = buffer_list[i].addr + ((u64) j << shift);
> -
> -       mthca_dbg(to_mdev(pd->device), "Registering memory at %llx (iova %llx) "
> -                 "in PD %x; shift %d, npages %d.\n",
> -                 (unsigned long long) buffer_list[0].addr,
> -                 (unsigned long long) *iova_start,
> -                 to_mpd(pd)->pd_num,
> -                 shift, npages);
> -
> -       err = mthca_mr_alloc_phys(to_mdev(pd->device),
> -                                 to_mpd(pd)->pd_num,
> -                                 page_list, shift, npages,
> -                                 *iova_start, total_size,
> -                                 convert_access(acc), mr);
> -
> -       if (err) {
> -               kfree(page_list);
> -               kfree(mr);
> -               return ERR_PTR(err);
> -       }
> -
> -       kfree(page_list);
> -       mr->umem = NULL;
> -
> -       return &mr->ibmr;
> -}
> -
>  static struct ib_mr *mthca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
>                                        u64 virt, int acc, struct ib_udata *udata)
>  {
> @@ -1339,7 +1256,6 @@ int mthca_register_device(struct mthca_dev *dev)
>         dev->ib_dev.destroy_cq           = mthca_destroy_cq;
>         dev->ib_dev.poll_cq              = mthca_poll_cq;
>         dev->ib_dev.get_dma_mr           = mthca_get_dma_mr;
> -       dev->ib_dev.reg_phys_mr          = mthca_reg_phys_mr;
>         dev->ib_dev.reg_user_mr          = mthca_reg_user_mr;
>         dev->ib_dev.dereg_mr             = mthca_dereg_mr;
>         dev->ib_dev.get_port_immutable   = mthca_port_immutable;
> diff --git a/drivers/infiniband/hw/nes/nes_cm.c b/drivers/infiniband/hw/nes/nes_cm.c
> index 8a3ad17..242c87d 100644
> --- a/drivers/infiniband/hw/nes/nes_cm.c
> +++ b/drivers/infiniband/hw/nes/nes_cm.c
> @@ -3319,10 +3319,9 @@ int nes_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
>                 ibphysbuf.addr = nesqp->ietf_frame_pbase + mpa_frame_offset;
>                 ibphysbuf.size = buff_len;
>                 tagged_offset = (u64)(unsigned long)*start_buff;
> -               ibmr = nesibdev->ibdev.reg_phys_mr((struct ib_pd *)nespd,
> -                                                  &ibphysbuf, 1,
> -                                                  IB_ACCESS_LOCAL_WRITE,
> -                                                  &tagged_offset);
> +               ibmr = nes_reg_phys_mr(&nespd->ibpd, &ibphysbuf, 1,
> +                                       IB_ACCESS_LOCAL_WRITE,
> +                                       &tagged_offset);
>                 if (!ibmr) {
>                         nes_debug(NES_DBG_CM, "Unable to register memory region"
>                                   "for lSMM for cm_node = %p \n",
> diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c
> index 2bad036..453ebc2 100644
> --- a/drivers/infiniband/hw/nes/nes_verbs.c
> +++ b/drivers/infiniband/hw/nes/nes_verbs.c
> @@ -2019,7 +2019,7 @@ static int nes_reg_mr(struct nes_device *nesdev, struct nes_pd *nespd,
>  /**
>   * nes_reg_phys_mr
>   */
> -static struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
> +struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
>                 struct ib_phys_buf *buffer_list, int num_phys_buf, int acc,
>                 u64 * iova_start)
>  {
> @@ -3832,7 +3832,6 @@ struct nes_ib_device *nes_init_ofa_device(struct net_device *netdev)
>         nesibdev->ibdev.destroy_cq = nes_destroy_cq;
>         nesibdev->ibdev.poll_cq = nes_poll_cq;
>         nesibdev->ibdev.get_dma_mr = nes_get_dma_mr;
> -       nesibdev->ibdev.reg_phys_mr = nes_reg_phys_mr;
>         nesibdev->ibdev.reg_user_mr = nes_reg_user_mr;
>         nesibdev->ibdev.dereg_mr = nes_dereg_mr;
>         nesibdev->ibdev.alloc_mw = nes_alloc_mw;
> diff --git a/drivers/infiniband/hw/nes/nes_verbs.h b/drivers/infiniband/hw/nes/nes_verbs.h
> index a204b67..38e38cf 100644
> --- a/drivers/infiniband/hw/nes/nes_verbs.h
> +++ b/drivers/infiniband/hw/nes/nes_verbs.h
> @@ -190,4 +190,9 @@ struct nes_qp {
>         u8                    pau_state;
>         __u64                 nesuqp_addr;
>  };
> +
> +struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
> +               struct ib_phys_buf *buffer_list, int num_phys_buf, int acc,
> +               u64 * iova_start);
> +
>  #endif                 /* NES_VERBS_H */
> diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_main.c b/drivers/infiniband/hw/ocrdma/ocrdma_main.c
> index 963be66..f91131f 100644
> --- a/drivers/infiniband/hw/ocrdma/ocrdma_main.c
> +++ b/drivers/infiniband/hw/ocrdma/ocrdma_main.c
> @@ -174,7 +174,6 @@ static int ocrdma_register_device(struct ocrdma_dev *dev)
>         dev->ibdev.req_notify_cq = ocrdma_arm_cq;
>
>         dev->ibdev.get_dma_mr = ocrdma_get_dma_mr;
> -       dev->ibdev.reg_phys_mr = ocrdma_reg_kernel_mr;
>         dev->ibdev.dereg_mr = ocrdma_dereg_mr;
>         dev->ibdev.reg_user_mr = ocrdma_reg_user_mr;
>
> diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
> index 74cf67a..3cffab7 100644
> --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
> +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
> @@ -3017,169 +3017,6 @@ pl_err:
>         return ERR_PTR(-ENOMEM);
>  }
>
> -#define MAX_KERNEL_PBE_SIZE 65536
> -static inline int count_kernel_pbes(struct ib_phys_buf *buf_list,
> -                                   int buf_cnt, u32 *pbe_size)
> -{
> -       u64 total_size = 0;
> -       u64 buf_size = 0;
> -       int i;
> -       *pbe_size = roundup(buf_list[0].size, PAGE_SIZE);
> -       *pbe_size = roundup_pow_of_two(*pbe_size);
> -
> -       /* find the smallest PBE size that we can have */
> -       for (i = 0; i < buf_cnt; i++) {
> -               /* first addr may not be page aligned, so ignore checking */
> -               if ((i != 0) && ((buf_list[i].addr & ~PAGE_MASK) ||
> -                                (buf_list[i].size & ~PAGE_MASK))) {
> -                       return 0;
> -               }
> -
> -               /* if configured PBE size is greater then the chosen one,
> -                * reduce the PBE size.
> -                */
> -               buf_size = roundup(buf_list[i].size, PAGE_SIZE);
> -               /* pbe_size has to be even multiple of 4K 1,2,4,8...*/
> -               buf_size = roundup_pow_of_two(buf_size);
> -               if (*pbe_size > buf_size)
> -                       *pbe_size = buf_size;
> -
> -               total_size += buf_size;
> -       }
> -       *pbe_size = *pbe_size > MAX_KERNEL_PBE_SIZE ?
> -           (MAX_KERNEL_PBE_SIZE) : (*pbe_size);
> -
> -       /* num_pbes = total_size / (*pbe_size);  this is implemented below. */
> -
> -       return total_size >> ilog2(*pbe_size);
> -}
> -
> -static void build_kernel_pbes(struct ib_phys_buf *buf_list, int ib_buf_cnt,
> -                             u32 pbe_size, struct ocrdma_pbl *pbl_tbl,
> -                             struct ocrdma_hw_mr *hwmr)
> -{
> -       int i;
> -       int idx;
> -       int pbes_per_buf = 0;
> -       u64 buf_addr = 0;
> -       int num_pbes;
> -       struct ocrdma_pbe *pbe;
> -       int total_num_pbes = 0;
> -
> -       if (!hwmr->num_pbes)
> -               return;
> -
> -       pbe = (struct ocrdma_pbe *)pbl_tbl->va;
> -       num_pbes = 0;
> -
> -       /* go through the OS phy regions & fill hw pbe entries into pbls. */
> -       for (i = 0; i < ib_buf_cnt; i++) {
> -               buf_addr = buf_list[i].addr;
> -               pbes_per_buf =
> -                   roundup_pow_of_two(roundup(buf_list[i].size, PAGE_SIZE)) /
> -                   pbe_size;
> -               hwmr->len += buf_list[i].size;
> -               /* number of pbes can be more for one OS buf, when
> -                * buffers are of different sizes.
> -                * split the ib_buf to one or more pbes.
> -                */
> -               for (idx = 0; idx < pbes_per_buf; idx++) {
> -                       /* we program always page aligned addresses,
> -                        * first unaligned address is taken care by fbo.
> -                        */
> -                       if (i == 0) {
> -                               /* for non zero fbo, assign the
> -                                * start of the page.
> -                                */
> -                               pbe->pa_lo =
> -                                   cpu_to_le32((u32) (buf_addr & PAGE_MASK));
> -                               pbe->pa_hi =
> -                                   cpu_to_le32((u32) upper_32_bits(buf_addr));
> -                       } else {
> -                               pbe->pa_lo =
> -                                   cpu_to_le32((u32) (buf_addr & 0xffffffff));
> -                               pbe->pa_hi =
> -                                   cpu_to_le32((u32) upper_32_bits(buf_addr));
> -                       }
> -                       buf_addr += pbe_size;
> -                       num_pbes += 1;
> -                       total_num_pbes += 1;
> -                       pbe++;
> -
> -                       if (total_num_pbes == hwmr->num_pbes)
> -                               goto mr_tbl_done;
> -                       /* if the pbl is full storing the pbes,
> -                        * move to next pbl.
> -                        */
> -                       if (num_pbes == (hwmr->pbl_size/sizeof(u64))) {
> -                               pbl_tbl++;
> -                               pbe = (struct ocrdma_pbe *)pbl_tbl->va;
> -                               num_pbes = 0;
> -                       }
> -               }
> -       }
> -mr_tbl_done:
> -       return;
> -}
> -
> -struct ib_mr *ocrdma_reg_kernel_mr(struct ib_pd *ibpd,
> -                                  struct ib_phys_buf *buf_list,
> -                                  int buf_cnt, int acc, u64 *iova_start)
> -{
> -       int status = -ENOMEM;
> -       struct ocrdma_mr *mr;
> -       struct ocrdma_pd *pd = get_ocrdma_pd(ibpd);
> -       struct ocrdma_dev *dev = get_ocrdma_dev(ibpd->device);
> -       u32 num_pbes;
> -       u32 pbe_size = 0;
> -
> -       if ((acc & IB_ACCESS_REMOTE_WRITE) && !(acc & IB_ACCESS_LOCAL_WRITE))
> -               return ERR_PTR(-EINVAL);
> -
> -       mr = kzalloc(sizeof(*mr), GFP_KERNEL);
> -       if (!mr)
> -               return ERR_PTR(status);
> -
> -       num_pbes = count_kernel_pbes(buf_list, buf_cnt, &pbe_size);
> -       if (num_pbes == 0) {
> -               status = -EINVAL;
> -               goto pbl_err;
> -       }
> -       status = ocrdma_get_pbl_info(dev, mr, num_pbes);
> -       if (status)
> -               goto pbl_err;
> -
> -       mr->hwmr.pbe_size = pbe_size;
> -       mr->hwmr.fbo = *iova_start - (buf_list[0].addr & PAGE_MASK);
> -       mr->hwmr.va = *iova_start;
> -       mr->hwmr.local_rd = 1;
> -       mr->hwmr.remote_wr = (acc & IB_ACCESS_REMOTE_WRITE) ? 1 : 0;
> -       mr->hwmr.remote_rd = (acc & IB_ACCESS_REMOTE_READ) ? 1 : 0;
> -       mr->hwmr.local_wr = (acc & IB_ACCESS_LOCAL_WRITE) ? 1 : 0;
> -       mr->hwmr.remote_atomic = (acc & IB_ACCESS_REMOTE_ATOMIC) ? 1 : 0;
> -       mr->hwmr.mw_bind = (acc & IB_ACCESS_MW_BIND) ? 1 : 0;
> -
> -       status = ocrdma_build_pbl_tbl(dev, &mr->hwmr);
> -       if (status)
> -               goto pbl_err;
> -       build_kernel_pbes(buf_list, buf_cnt, pbe_size, mr->hwmr.pbl_table,
> -                         &mr->hwmr);
> -       status = ocrdma_reg_mr(dev, &mr->hwmr, pd->id, acc);
> -       if (status)
> -               goto mbx_err;
> -
> -       mr->ibmr.lkey = mr->hwmr.lkey;
> -       if (mr->hwmr.remote_wr || mr->hwmr.remote_rd)
> -               mr->ibmr.rkey = mr->hwmr.lkey;
> -       return &mr->ibmr;
> -
> -mbx_err:
> -       ocrdma_free_mr_pbl_tbl(dev, &mr->hwmr);
> -pbl_err:
> -       kfree(mr);
> -       return ERR_PTR(status);
> -}
> -
>  static int ocrdma_set_page(struct ib_mr *ibmr, u64 addr)
>  {
>         struct ocrdma_mr *mr = get_ocrdma_mr(ibmr);
> diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
> index f2ce048..82f476f 100644
> --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
> +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
> @@ -115,9 +115,6 @@ int ocrdma_post_srq_recv(struct ib_srq *, struct ib_recv_wr *,
>
>  int ocrdma_dereg_mr(struct ib_mr *);
>  struct ib_mr *ocrdma_get_dma_mr(struct ib_pd *, int acc);
> -struct ib_mr *ocrdma_reg_kernel_mr(struct ib_pd *,
> -                                  struct ib_phys_buf *buffer_list,
> -                                  int num_phys_buf, int acc, u64 *iova_start);
>  struct ib_mr *ocrdma_reg_user_mr(struct ib_pd *, u64 start, u64 length,
>                                  u64 virt, int acc, struct ib_udata *);
>  struct ib_mr *ocrdma_alloc_mr(struct ib_pd *pd,
> diff --git a/drivers/infiniband/hw/qib/qib_mr.c b/drivers/infiniband/hw/qib/qib_mr.c
> index 294f5c7..5f53304 100644
> --- a/drivers/infiniband/hw/qib/qib_mr.c
> +++ b/drivers/infiniband/hw/qib/qib_mr.c
> @@ -150,10 +150,7 @@ static struct qib_mr *alloc_mr(int count, struct ib_pd *pd)
>         rval = init_qib_mregion(&mr->mr, pd, count);
>         if (rval)
>                 goto bail;
> -       /*
> -        * ib_reg_phys_mr() will initialize mr->ibmr except for
> -        * lkey and rkey.
> -        */
> +
>         rval = qib_alloc_lkey(&mr->mr, 0);
>         if (rval)
>                 goto bail_mregion;
> @@ -171,52 +168,6 @@ bail:
>  }
>
>  /**
> - * qib_reg_phys_mr - register a physical memory region
> - * @pd: protection domain for this memory region
> - * @buffer_list: pointer to the list of physical buffers to register
> - * @num_phys_buf: the number of physical buffers to register
> - * @iova_start: the starting address passed over IB which maps to this MR
> - *
> - * Returns the memory region on success, otherwise returns an errno.
> - */
> -struct ib_mr *qib_reg_phys_mr(struct ib_pd *pd,
> -                             struct ib_phys_buf *buffer_list,
> -                             int num_phys_buf, int acc, u64 *iova_start)
> -{
> -       struct qib_mr *mr;
> -       int n, m, i;
> -       struct ib_mr *ret;
> -
> -       mr = alloc_mr(num_phys_buf, pd);
> -       if (IS_ERR(mr)) {
> -               ret = (struct ib_mr *)mr;
> -               goto bail;
> -       }
> -
> -       mr->mr.user_base = *iova_start;
> -       mr->mr.iova = *iova_start;
> -       mr->mr.access_flags = acc;
> -
> -       m = 0;
> -       n = 0;
> -       for (i = 0; i < num_phys_buf; i++) {
> -               mr->mr.map[m]->segs[n].vaddr = (void *) buffer_list[i].addr;
> -               mr->mr.map[m]->segs[n].length = buffer_list[i].size;
> -               mr->mr.length += buffer_list[i].size;
> -               n++;
> -               if (n == QIB_SEGSZ) {
> -                       m++;
> -                       n = 0;
> -               }
> -       }
> -
> -       ret = &mr->ibmr;
> -
> -bail:
> -       return ret;
> -}
> -
> -/**
>   * qib_reg_user_mr - register a userspace memory region
>   * @pd: protection domain for this memory region
>   * @start: starting userspace address
> diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
> index 9e1af0b..e56bd65 100644
> --- a/drivers/infiniband/hw/qib/qib_verbs.c
> +++ b/drivers/infiniband/hw/qib/qib_verbs.c
> @@ -2206,7 +2206,6 @@ int qib_register_ib_device(struct qib_devdata *dd)
>         ibdev->poll_cq = qib_poll_cq;
>         ibdev->req_notify_cq = qib_req_notify_cq;
>         ibdev->get_dma_mr = qib_get_dma_mr;
> -       ibdev->reg_phys_mr = qib_reg_phys_mr;
>         ibdev->reg_user_mr = qib_reg_user_mr;
>         ibdev->dereg_mr = qib_dereg_mr;
>         ibdev->alloc_mr = qib_alloc_mr;
> diff --git a/drivers/infiniband/hw/qib/qib_verbs.h b/drivers/infiniband/hw/qib/qib_verbs.h
> index 2baf5ad..2d538c8c 100644
> --- a/drivers/infiniband/hw/qib/qib_verbs.h
> +++ b/drivers/infiniband/hw/qib/qib_verbs.h
> @@ -1032,10 +1032,6 @@ int qib_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata);
>
>  struct ib_mr *qib_get_dma_mr(struct ib_pd *pd, int acc);
>
> -struct ib_mr *qib_reg_phys_mr(struct ib_pd *pd,
> -                             struct ib_phys_buf *buffer_list,
> -                             int num_phys_buf, int acc, u64 *iova_start);
> -
>  struct ib_mr *qib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
>                               u64 virt_addr, int mr_access_flags,
>                               struct ib_udata *udata);
> diff --git a/drivers/staging/rdma/amso1100/c2_provider.c b/drivers/staging/rdma/amso1100/c2_provider.c
> index ee2ff87..6c3e9cc 100644
> --- a/drivers/staging/rdma/amso1100/c2_provider.c
> +++ b/drivers/staging/rdma/amso1100/c2_provider.c
> @@ -831,7 +831,6 @@ int c2_register_device(struct c2_dev *dev)
>         dev->ibdev.destroy_cq = c2_destroy_cq;
>         dev->ibdev.poll_cq = c2_poll_cq;
>         dev->ibdev.get_dma_mr = c2_get_dma_mr;
> -       dev->ibdev.reg_phys_mr = c2_reg_phys_mr;
>         dev->ibdev.reg_user_mr = c2_reg_user_mr;
>         dev->ibdev.dereg_mr = c2_dereg_mr;
>         dev->ibdev.get_port_immutable = c2_port_immutable;
> diff --git a/drivers/staging/rdma/ehca/ehca_iverbs.h b/drivers/staging/rdma/ehca/ehca_iverbs.h
> index 4a45ca3..219f635 100644
> --- a/drivers/staging/rdma/ehca/ehca_iverbs.h
> +++ b/drivers/staging/rdma/ehca/ehca_iverbs.h
> @@ -79,21 +79,10 @@ int ehca_destroy_ah(struct ib_ah *ah);
>
>  struct ib_mr *ehca_get_dma_mr(struct ib_pd *pd, int mr_access_flags);
>
> -struct ib_mr *ehca_reg_phys_mr(struct ib_pd *pd,
> -                              struct ib_phys_buf *phys_buf_array,
> -                              int num_phys_buf,
> -                              int mr_access_flags, u64 *iova_start);
> -
>  struct ib_mr *ehca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
>                                u64 virt, int mr_access_flags,
>                                struct ib_udata *udata);
>
> -int ehca_rereg_phys_mr(struct ib_mr *mr,
> -                      int mr_rereg_mask,
> -                      struct ib_pd *pd,
> -                      struct ib_phys_buf *phys_buf_array,
> -                      int num_phys_buf, int mr_access_flags, u64 *iova_start);
> -
>  int ehca_dereg_mr(struct ib_mr *mr);
>
>  struct ib_mw *ehca_alloc_mw(struct ib_pd *pd, enum ib_mw_type type);
> diff --git a/drivers/staging/rdma/ehca/ehca_main.c b/drivers/staging/rdma/ehca/ehca_main.c
> index 0be7959..ba023426 100644
> --- a/drivers/staging/rdma/ehca/ehca_main.c
> +++ b/drivers/staging/rdma/ehca/ehca_main.c
> @@ -511,10 +511,8 @@ static int ehca_init_device(struct ehca_shca *shca)
>         shca->ib_device.req_notify_cq       = ehca_req_notify_cq;
>         /* shca->ib_device.req_ncomp_notif  = ehca_req_ncomp_notif; */
>         shca->ib_device.get_dma_mr          = ehca_get_dma_mr;
> -       shca->ib_device.reg_phys_mr         = ehca_reg_phys_mr;
>         shca->ib_device.reg_user_mr         = ehca_reg_user_mr;
>         shca->ib_device.dereg_mr            = ehca_dereg_mr;
> -       shca->ib_device.rereg_phys_mr       = ehca_rereg_phys_mr;
>         shca->ib_device.alloc_mw            = ehca_alloc_mw;
>         shca->ib_device.bind_mw             = ehca_bind_mw;
>         shca->ib_device.dealloc_mw          = ehca_dealloc_mw;
> diff --git a/drivers/staging/rdma/ehca/ehca_mrmw.c b/drivers/staging/rdma/ehca/ehca_mrmw.c
> index eb274c1..1c1a8dd 100644
> --- a/drivers/staging/rdma/ehca/ehca_mrmw.c
> +++ b/drivers/staging/rdma/ehca/ehca_mrmw.c
> @@ -196,120 +196,6 @@ get_dma_mr_exit0:
>
>  /*----------------------------------------------------------------------*/
>
> -struct ib_mr *ehca_reg_phys_mr(struct ib_pd *pd,
> -                              struct ib_phys_buf *phys_buf_array,
> -                              int num_phys_buf,
> -                              int mr_access_flags,
> -                              u64 *iova_start)
> -{
> -       struct ib_mr *ib_mr;
> -       int ret;
> -       struct ehca_mr *e_mr;
> -       struct ehca_shca *shca =
> -               container_of(pd->device, struct ehca_shca, ib_device);
> -       struct ehca_pd *e_pd = container_of(pd, struct ehca_pd, ib_pd);
> -
> -       u64 size;
> -
> -       if ((num_phys_buf <= 0) || !phys_buf_array) {
> -               ehca_err(pd->device, "bad input values: num_phys_buf=%x "
> -                        "phys_buf_array=%p", num_phys_buf, phys_buf_array);
> -               ib_mr = ERR_PTR(-EINVAL);
> -               goto reg_phys_mr_exit0;
> -       }
> -       if (((mr_access_flags & IB_ACCESS_REMOTE_WRITE) &&
> -            !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)) ||
> -           ((mr_access_flags & IB_ACCESS_REMOTE_ATOMIC) &&
> -            !(mr_access_flags & IB_ACCESS_LOCAL_WRITE))) {
> -               /*
> -                * Remote Write Access requires Local Write Access
> -                * Remote Atomic Access requires Local Write Access
> -                */
> -               ehca_err(pd->device, "bad input values: mr_access_flags=%x",
> -                        mr_access_flags);
> -               ib_mr = ERR_PTR(-EINVAL);
> -               goto reg_phys_mr_exit0;
> -       }
> -
> -       /* check physical buffer list and calculate size */
> -       ret = ehca_mr_chk_buf_and_calc_size(phys_buf_array, num_phys_buf,
> -                                           iova_start, &size);
> -       if (ret) {
> -               ib_mr = ERR_PTR(ret);
> -               goto reg_phys_mr_exit0;
> -       }
> -       if ((size == 0) ||
> -           (((u64)iova_start + size) < (u64)iova_start)) {
> -               ehca_err(pd->device, "bad input values: size=%llx iova_start=%p",
> -                        size, iova_start);
> -               ib_mr = ERR_PTR(-EINVAL);
> -               goto reg_phys_mr_exit0;
> -       }
> -
> -       e_mr = ehca_mr_new();
> -       if (!e_mr) {
> -               ehca_err(pd->device, "out of memory");
> -               ib_mr = ERR_PTR(-ENOMEM);
> -               goto reg_phys_mr_exit0;
> -       }
> -
> -       /* register MR on HCA */
> -       if (ehca_mr_is_maxmr(size, iova_start)) {
> -               e_mr->flags |= EHCA_MR_FLAG_MAXMR;
> -               ret = ehca_reg_maxmr(shca, e_mr, iova_start, mr_access_flags,
> -                                    e_pd, &e_mr->ib.ib_mr.lkey,
> -                                    &e_mr->ib.ib_mr.rkey);
> -               if (ret) {
> -                       ib_mr = ERR_PTR(ret);
> -                       goto reg_phys_mr_exit1;
> -               }
> -       } else {
> -               struct ehca_mr_pginfo pginfo;
> -               u32 num_kpages;
> -               u32 num_hwpages;
> -               u64 hw_pgsize;
> -
> -               num_kpages = NUM_CHUNKS(((u64)iova_start % PAGE_SIZE) + size,
> -                                       PAGE_SIZE);
> -               /* for kernel space we try most possible pgsize */
> -               hw_pgsize = ehca_get_max_hwpage_size(shca);
> -               num_hwpages = NUM_CHUNKS(((u64)iova_start % hw_pgsize) + size,
> -                                        hw_pgsize);
> -               memset(&pginfo, 0, sizeof(pginfo));
> -               pginfo.type = EHCA_MR_PGI_PHYS;
> -               pginfo.num_kpages = num_kpages;
> -               pginfo.hwpage_size = hw_pgsize;
> -               pginfo.num_hwpages = num_hwpages;
> -               pginfo.u.phy.num_phys_buf = num_phys_buf;
> -               pginfo.u.phy.phys_buf_array = phys_buf_array;
> -               pginfo.next_hwpage =
> -                       ((u64)iova_start & ~PAGE_MASK) / hw_pgsize;
> -
> -               ret = ehca_reg_mr(shca, e_mr, iova_start, size, mr_access_flags,
> -                                 e_pd, &pginfo, &e_mr->ib.ib_mr.lkey,
> -                                 &e_mr->ib.ib_mr.rkey, EHCA_REG_MR);
> -               if (ret) {
> -                       ib_mr = ERR_PTR(ret);
> -                       goto reg_phys_mr_exit1;
> -               }
> -       }
> -
> -       /* successful registration of all pages */
> -       return &e_mr->ib.ib_mr;
> -
> -reg_phys_mr_exit1:
> -       ehca_mr_delete(e_mr);
> -reg_phys_mr_exit0:
> -       if (IS_ERR(ib_mr))
> -               ehca_err(pd->device, "h_ret=%li pd=%p phys_buf_array=%p "
> -                        "num_phys_buf=%x mr_access_flags=%x iova_start=%p",
> -                        PTR_ERR(ib_mr), pd, phys_buf_array,
> -                        num_phys_buf, mr_access_flags, iova_start);
> -       return ib_mr;
> -} /* end ehca_reg_phys_mr() */
> -
> -/*----------------------------------------------------------------------*/
> -
>  struct ib_mr *ehca_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
>                                u64 virt, int mr_access_flags,
>                                struct ib_udata *udata)
> @@ -437,158 +323,6 @@ reg_user_mr_exit0:
>
>  /*----------------------------------------------------------------------*/
>
> -int ehca_rereg_phys_mr(struct ib_mr *mr,
> -                      int mr_rereg_mask,
> -                      struct ib_pd *pd,
> -                      struct ib_phys_buf *phys_buf_array,
> -                      int num_phys_buf,
> -                      int mr_access_flags,
> -                      u64 *iova_start)
> -{
> -       int ret;
> -
> -       struct ehca_shca *shca =
> -               container_of(mr->device, struct ehca_shca, ib_device);
> -       struct ehca_mr *e_mr = container_of(mr, struct ehca_mr, ib.ib_mr);
> -       u64 new_size;
> -       u64 *new_start;
> -       u32 new_acl;
> -       struct ehca_pd *new_pd;
> -       u32 tmp_lkey, tmp_rkey;
> -       unsigned long sl_flags;
> -       u32 num_kpages = 0;
> -       u32 num_hwpages = 0;
> -       struct ehca_mr_pginfo pginfo;
> -
> -       if (!(mr_rereg_mask & IB_MR_REREG_TRANS)) {
> -               /* TODO not supported, because PHYP rereg hCall needs pages */
> -               ehca_err(mr->device, "rereg without IB_MR_REREG_TRANS not "
> -                        "supported yet, mr_rereg_mask=%x", mr_rereg_mask);
> -               ret = -EINVAL;
> -               goto rereg_phys_mr_exit0;
> -       }
> -
> -       if (mr_rereg_mask & IB_MR_REREG_PD) {
> -               if (!pd) {
> -                       ehca_err(mr->device, "rereg with bad pd, pd=%p "
> -                                "mr_rereg_mask=%x", pd, mr_rereg_mask);
> -                       ret = -EINVAL;
> -                       goto rereg_phys_mr_exit0;
> -               }
> -       }
> -
> -       if ((mr_rereg_mask &
> -            ~(IB_MR_REREG_TRANS | IB_MR_REREG_PD | IB_MR_REREG_ACCESS)) ||
> -           (mr_rereg_mask == 0)) {
> -               ret = -EINVAL;
> -               goto rereg_phys_mr_exit0;
> -       }
> -
> -       /* check other parameters */
> -       if (e_mr == shca->maxmr) {
> -               /* should be impossible, however reject to be sure */
> -               ehca_err(mr->device, "rereg internal max-MR impossible, mr=%p "
> -                        "shca->maxmr=%p mr->lkey=%x",
> -                        mr, shca->maxmr, mr->lkey);
> -               ret = -EINVAL;
> -               goto rereg_phys_mr_exit0;
> -       }
> -       if (mr_rereg_mask & IB_MR_REREG_TRANS) { /* transl., i.e. addr/size */
> -               if (e_mr->flags & EHCA_MR_FLAG_FMR) {
> -                       ehca_err(mr->device, "not supported for FMR, mr=%p "
> -                                "flags=%x", mr, e_mr->flags);
> -                       ret = -EINVAL;
> -                       goto rereg_phys_mr_exit0;
> -               }
> -               if (!phys_buf_array || num_phys_buf <= 0) {
> -                       ehca_err(mr->device, "bad input values mr_rereg_mask=%x"
> -                                " phys_buf_array=%p num_phys_buf=%x",
> -                                mr_rereg_mask, phys_buf_array, num_phys_buf);
> -                       ret = -EINVAL;
> -                       goto rereg_phys_mr_exit0;
> -               }
> -       }
> -       if ((mr_rereg_mask & IB_MR_REREG_ACCESS) &&     /* change ACL */
> -           (((mr_access_flags & IB_ACCESS_REMOTE_WRITE) &&
> -             !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)) ||
> -            ((mr_access_flags & IB_ACCESS_REMOTE_ATOMIC) &&
> -             !(mr_access_flags & IB_ACCESS_LOCAL_WRITE)))) {
> -               /*
> -                * Remote Write Access requires Local Write Access
> -                * Remote Atomic Access requires Local Write Access
> -                */
> -               ehca_err(mr->device, "bad input values: mr_rereg_mask=%x "
> -                        "mr_access_flags=%x", mr_rereg_mask, mr_access_flags);
> -               ret = -EINVAL;
> -               goto rereg_phys_mr_exit0;
> -       }
> -
> -       /* set requested values dependent on rereg request */
> -       spin_lock_irqsave(&e_mr->mrlock, sl_flags);
> -       new_start = e_mr->start;
> -       new_size = e_mr->size;
> -       new_acl = e_mr->acl;
> -       new_pd = container_of(mr->pd, struct ehca_pd, ib_pd);
> -
> -       if (mr_rereg_mask & IB_MR_REREG_TRANS) {
> -               u64 hw_pgsize = ehca_get_max_hwpage_size(shca);
> -
> -               new_start = iova_start; /* change address */
> -               /* check physical buffer list and calculate size */
> -               ret = ehca_mr_chk_buf_and_calc_size(phys_buf_array,
> -                                                   num_phys_buf, iova_start,
> -                                                   &new_size);
> -               if (ret)
> -                       goto rereg_phys_mr_exit1;
> -               if ((new_size == 0) ||
> -                   (((u64)iova_start + new_size) < (u64)iova_start)) {
> -                       ehca_err(mr->device, "bad input values: new_size=%llx "
> -                                "iova_start=%p", new_size, iova_start);
> -                       ret = -EINVAL;
> -                       goto rereg_phys_mr_exit1;
> -               }
> -               num_kpages = NUM_CHUNKS(((u64)new_start % PAGE_SIZE) +
> -                                       new_size, PAGE_SIZE);
> -               num_hwpages = NUM_CHUNKS(((u64)new_start % hw_pgsize) +
> -                                        new_size, hw_pgsize);
> -               memset(&pginfo, 0, sizeof(pginfo));
> -               pginfo.type = EHCA_MR_PGI_PHYS;
> -               pginfo.num_kpages = num_kpages;
> -               pginfo.hwpage_size = hw_pgsize;
> -               pginfo.num_hwpages = num_hwpages;
> -               pginfo.u.phy.num_phys_buf = num_phys_buf;
> -               pginfo.u.phy.phys_buf_array = phys_buf_array;
> -               pginfo.next_hwpage =
> -                       ((u64)iova_start & ~PAGE_MASK) / hw_pgsize;
> -       }
> -       if (mr_rereg_mask & IB_MR_REREG_ACCESS)
> -               new_acl = mr_access_flags;
> -       if (mr_rereg_mask & IB_MR_REREG_PD)
> -               new_pd = container_of(pd, struct ehca_pd, ib_pd);
> -
> -       ret = ehca_rereg_mr(shca, e_mr, new_start, new_size, new_acl,
> -                           new_pd, &pginfo, &tmp_lkey, &tmp_rkey);
> -       if (ret)
> -               goto rereg_phys_mr_exit1;
> -
> -       /* successful reregistration */
> -       if (mr_rereg_mask & IB_MR_REREG_PD)
> -               mr->pd = pd;
> -       mr->lkey = tmp_lkey;
> -       mr->rkey = tmp_rkey;
> -
> -rereg_phys_mr_exit1:
> -       spin_unlock_irqrestore(&e_mr->mrlock, sl_flags);
> -rereg_phys_mr_exit0:
> -       if (ret)
> -               ehca_err(mr->device, "ret=%i mr=%p mr_rereg_mask=%x pd=%p "
> -                        "phys_buf_array=%p num_phys_buf=%x mr_access_flags=%x "
> -                        "iova_start=%p",
> -                        ret, mr, mr_rereg_mask, pd, phys_buf_array,
> -                        num_phys_buf, mr_access_flags, iova_start);
> -       return ret;
> -} /* end ehca_rereg_phys_mr() */
> -
>  int ehca_dereg_mr(struct ib_mr *mr)
>  {
>         int ret = 0;
> @@ -1713,61 +1447,6 @@ ehca_dereg_internal_maxmr_exit0:
>
>  /*----------------------------------------------------------------------*/
>
> -/*
> - * check physical buffer array of MR verbs for validness and
> - * calculates MR size
> - */
> -int ehca_mr_chk_buf_and_calc_size(struct ib_phys_buf *phys_buf_array,
> -                                 int num_phys_buf,
> -                                 u64 *iova_start,
> -                                 u64 *size)
> -{
> -       struct ib_phys_buf *pbuf = phys_buf_array;
> -       u64 size_count = 0;
> -       u32 i;
> -
> -       if (num_phys_buf == 0) {
> -               ehca_gen_err("bad phys buf array len, num_phys_buf=0");
> -               return -EINVAL;
> -       }
> -       /* check first buffer */
> -       if (((u64)iova_start & ~PAGE_MASK) != (pbuf->addr & ~PAGE_MASK)) {
> -               ehca_gen_err("iova_start/addr mismatch, iova_start=%p "
> -                            "pbuf->addr=%llx pbuf->size=%llx",
> -                            iova_start, pbuf->addr, pbuf->size);
> -               return -EINVAL;
> -       }
> -       if (((pbuf->addr + pbuf->size) % PAGE_SIZE) &&
> -           (num_phys_buf > 1)) {
> -               ehca_gen_err("addr/size mismatch in 1st buf, pbuf->addr=%llx "
> -                            "pbuf->size=%llx", pbuf->addr, pbuf->size);
> -               return -EINVAL;
> -       }
> -
> -       for (i = 0; i < num_phys_buf; i++) {
> -               if ((i > 0) && (pbuf->addr % PAGE_SIZE)) {
> -                       ehca_gen_err("bad address, i=%x pbuf->addr=%llx "
> -                                    "pbuf->size=%llx",
> -                                    i, pbuf->addr, pbuf->size);
> -                       return -EINVAL;
> -               }
> -               if (((i > 0) && /* not 1st */
> -                    (i < (num_phys_buf - 1)) &&        /* not last */
> -                    (pbuf->size % PAGE_SIZE)) || (pbuf->size == 0)) {
> -                       ehca_gen_err("bad size, i=%x pbuf->size=%llx",
> -                                    i, pbuf->size);
> -                       return -EINVAL;
> -               }
> -               size_count += pbuf->size;
> -               pbuf++;
> -       }
> -
> -       *size = size_count;
> -       return 0;
> -} /* end ehca_mr_chk_buf_and_calc_size() */
> -
> -/*----------------------------------------------------------------------*/
> -
>  /* check page list of map FMR verb for validness */
>  int ehca_fmr_check_page_list(struct ehca_mr *e_fmr,
>                              u64 *page_list,
> diff --git a/drivers/staging/rdma/ehca/ehca_mrmw.h b/drivers/staging/rdma/ehca/ehca_mrmw.h
> index 50d8b51..52bfa95 100644
> --- a/drivers/staging/rdma/ehca/ehca_mrmw.h
> +++ b/drivers/staging/rdma/ehca/ehca_mrmw.h
> @@ -98,11 +98,6 @@ int ehca_reg_maxmr(struct ehca_shca *shca,
>
>  int ehca_dereg_internal_maxmr(struct ehca_shca *shca);
>
> -int ehca_mr_chk_buf_and_calc_size(struct ib_phys_buf *phys_buf_array,
> -                                 int num_phys_buf,
> -                                 u64 *iova_start,
> -                                 u64 *size);
> -
>  int ehca_fmr_check_page_list(struct ehca_mr *e_fmr,
>                              u64 *page_list,
>                              int list_len);
> diff --git a/drivers/staging/rdma/hfi1/mr.c b/drivers/staging/rdma/hfi1/mr.c
> index 568f185..a3f8b88 100644
> --- a/drivers/staging/rdma/hfi1/mr.c
> +++ b/drivers/staging/rdma/hfi1/mr.c
> @@ -167,10 +167,7 @@ static struct hfi1_mr *alloc_mr(int count, struct ib_pd *pd)
>         rval = init_mregion(&mr->mr, pd, count);
>         if (rval)
>                 goto bail;
> -       /*
> -        * ib_reg_phys_mr() will initialize mr->ibmr except for
> -        * lkey and rkey.
> -        */
> +
>         rval = hfi1_alloc_lkey(&mr->mr, 0);
>         if (rval)
>                 goto bail_mregion;
> @@ -188,52 +185,6 @@ bail:
>  }
>
>  /**
> - * hfi1_reg_phys_mr - register a physical memory region
> - * @pd: protection domain for this memory region
> - * @buffer_list: pointer to the list of physical buffers to register
> - * @num_phys_buf: the number of physical buffers to register
> - * @iova_start: the starting address passed over IB which maps to this MR
> - *
> - * Returns the memory region on success, otherwise returns an errno.
> - */
> -struct ib_mr *hfi1_reg_phys_mr(struct ib_pd *pd,
> -                              struct ib_phys_buf *buffer_list,
> -                              int num_phys_buf, int acc, u64 *iova_start)
> -{
> -       struct hfi1_mr *mr;
> -       int n, m, i;
> -       struct ib_mr *ret;
> -
> -       mr = alloc_mr(num_phys_buf, pd);
> -       if (IS_ERR(mr)) {
> -               ret = (struct ib_mr *)mr;
> -               goto bail;
> -       }
> -
> -       mr->mr.user_base = *iova_start;
> -       mr->mr.iova = *iova_start;
> -       mr->mr.access_flags = acc;
> -
> -       m = 0;
> -       n = 0;
> -       for (i = 0; i < num_phys_buf; i++) {
> -               mr->mr.map[m]->segs[n].vaddr = (void *) buffer_list[i].addr;
> -               mr->mr.map[m]->segs[n].length = buffer_list[i].size;
> -               mr->mr.length += buffer_list[i].size;
> -               n++;
> -               if (n == HFI1_SEGSZ) {
> -                       m++;
> -                       n = 0;
> -               }
> -       }
> -
> -       ret = &mr->ibmr;
> -
> -bail:
> -       return ret;
> -}
> -
> -/**
>   * hfi1_reg_user_mr - register a userspace memory region
>   * @pd: protection domain for this memory region
>   * @start: starting userspace address
> diff --git a/drivers/staging/rdma/hfi1/verbs.c b/drivers/staging/rdma/hfi1/verbs.c
> index cb5b346..1b3d8d9 100644
> --- a/drivers/staging/rdma/hfi1/verbs.c
> +++ b/drivers/staging/rdma/hfi1/verbs.c
> @@ -2011,7 +2011,6 @@ int hfi1_register_ib_device(struct hfi1_devdata *dd)
>         ibdev->poll_cq = hfi1_poll_cq;
>         ibdev->req_notify_cq = hfi1_req_notify_cq;
>         ibdev->get_dma_mr = hfi1_get_dma_mr;
> -       ibdev->reg_phys_mr = hfi1_reg_phys_mr;
>         ibdev->reg_user_mr = hfi1_reg_user_mr;
>         ibdev->dereg_mr = hfi1_dereg_mr;
>         ibdev->alloc_mr = hfi1_alloc_mr;
> diff --git a/drivers/staging/rdma/hfi1/verbs.h b/drivers/staging/rdma/hfi1/verbs.h
> index 041ad07..255792a 100644
> --- a/drivers/staging/rdma/hfi1/verbs.h
> +++ b/drivers/staging/rdma/hfi1/verbs.h
> @@ -1012,10 +1012,6 @@ int hfi1_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata);
>
>  struct ib_mr *hfi1_get_dma_mr(struct ib_pd *pd, int acc);
>
> -struct ib_mr *hfi1_reg_phys_mr(struct ib_pd *pd,
> -                              struct ib_phys_buf *buffer_list,
> -                              int num_phys_buf, int acc, u64 *iova_start);
> -
>  struct ib_mr *hfi1_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
>                                u64 virt_addr, int mr_access_flags,
>                                struct ib_udata *udata);
> diff --git a/drivers/staging/rdma/ipath/ipath_mr.c b/drivers/staging/rdma/ipath/ipath_mr.c
> index c7278f6..b76b0ce 100644
> --- a/drivers/staging/rdma/ipath/ipath_mr.c
> +++ b/drivers/staging/rdma/ipath/ipath_mr.c
> @@ -98,10 +98,6 @@ static struct ipath_mr *alloc_mr(int count,
>         }
>         mr->mr.mapsz = m;
>
> -       /*
> -        * ib_reg_phys_mr() will initialize mr->ibmr except for
> -        * lkey and rkey.
> -        */
>         if (!ipath_alloc_lkey(lk_table, &mr->mr))
>                 goto bail;
>         mr->ibmr.rkey = mr->ibmr.lkey = mr->mr.lkey;
> @@ -121,57 +117,6 @@ done:
>  }
>
>  /**
> - * ipath_reg_phys_mr - register a physical memory region
> - * @pd: protection domain for this memory region
> - * @buffer_list: pointer to the list of physical buffers to register
> - * @num_phys_buf: the number of physical buffers to register
> - * @iova_start: the starting address passed over IB which maps to this MR
> - *
> - * Returns the memory region on success, otherwise returns an errno.
> - */
> -struct ib_mr *ipath_reg_phys_mr(struct ib_pd *pd,
> -                               struct ib_phys_buf *buffer_list,
> -                               int num_phys_buf, int acc, u64 *iova_start)
> -{
> -       struct ipath_mr *mr;
> -       int n, m, i;
> -       struct ib_mr *ret;
> -
> -       mr = alloc_mr(num_phys_buf, &to_idev(pd->device)->lk_table);
> -       if (mr == NULL) {
> -               ret = ERR_PTR(-ENOMEM);
> -               goto bail;
> -       }
> -
> -       mr->mr.pd = pd;
> -       mr->mr.user_base = *iova_start;
> -       mr->mr.iova = *iova_start;
> -       mr->mr.length = 0;
> -       mr->mr.offset = 0;
> -       mr->mr.access_flags = acc;
> -       mr->mr.max_segs = num_phys_buf;
> -       mr->umem = NULL;
> -
> -       m = 0;
> -       n = 0;
> -       for (i = 0; i < num_phys_buf; i++) {
> -               mr->mr.map[m]->segs[n].vaddr = (void *) buffer_list[i].addr;
> -               mr->mr.map[m]->segs[n].length = buffer_list[i].size;
> -               mr->mr.length += buffer_list[i].size;
> -               n++;
> -               if (n == IPATH_SEGSZ) {
> -                       m++;
> -                       n = 0;
> -               }
> -       }
> -
> -       ret = &mr->ibmr;
> -
> -bail:
> -       return ret;
> -}
> -
> -/**
>   * ipath_reg_user_mr - register a userspace memory region
>   * @pd: protection domain for this memory region
>   * @start: starting userspace address
> diff --git a/drivers/staging/rdma/ipath/ipath_verbs.c b/drivers/staging/rdma/ipath/ipath_verbs.c
> index 7ab1520..02d8834 100644
> --- a/drivers/staging/rdma/ipath/ipath_verbs.c
> +++ b/drivers/staging/rdma/ipath/ipath_verbs.c
> @@ -2149,7 +2149,6 @@ int ipath_register_ib_device(struct ipath_devdata *dd)
>         dev->poll_cq = ipath_poll_cq;
>         dev->req_notify_cq = ipath_req_notify_cq;
>         dev->get_dma_mr = ipath_get_dma_mr;
> -       dev->reg_phys_mr = ipath_reg_phys_mr;
>         dev->reg_user_mr = ipath_reg_user_mr;
>         dev->dereg_mr = ipath_dereg_mr;
>         dev->alloc_fmr = ipath_alloc_fmr;
> diff --git a/drivers/staging/rdma/ipath/ipath_verbs.h b/drivers/staging/rdma/ipath/ipath_verbs.h
> index 0a90a56..6c70a89 100644
> --- a/drivers/staging/rdma/ipath/ipath_verbs.h
> +++ b/drivers/staging/rdma/ipath/ipath_verbs.h
> @@ -828,10 +828,6 @@ int ipath_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata);
>
>  struct ib_mr *ipath_get_dma_mr(struct ib_pd *pd, int acc);
>
> -struct ib_mr *ipath_reg_phys_mr(struct ib_pd *pd,
> -                               struct ib_phys_buf *buffer_list,
> -                               int num_phys_buf, int acc, u64 *iova_start);
> -
>  struct ib_mr *ipath_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
>                                 u64 virt_addr, int mr_access_flags,
>                                 struct ib_udata *udata);
> diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
> index 83d6ee8..a3402af 100644
> --- a/include/rdma/ib_verbs.h
> +++ b/include/rdma/ib_verbs.h
> @@ -1165,6 +1165,10 @@ struct ib_phys_buf {
>         u64      size;
>  };
>
> +/*
> + * XXX: these are apparently used for ->rereg_user_mr, no idea why they
> + * are hidden here instead of a uapi header!
> + */
>  enum ib_mr_rereg_flags {
>         IB_MR_REREG_TRANS       = 1,
>         IB_MR_REREG_PD          = (1<<1),
> @@ -1683,11 +1687,6 @@ struct ib_device {
>                                                       int wc_cnt);
>         struct ib_mr *             (*get_dma_mr)(struct ib_pd *pd,
>                                                  int mr_access_flags);
> -       struct ib_mr *             (*reg_phys_mr)(struct ib_pd *pd,
> -                                                 struct ib_phys_buf *phys_buf_array,
> -                                                 int num_phys_buf,
> -                                                 int mr_access_flags,
> -                                                 u64 *iova_start);
>         struct ib_mr *             (*reg_user_mr)(struct ib_pd *pd,
>                                                   u64 start, u64 length,
>                                                   u64 virt_addr,
> @@ -1707,13 +1706,6 @@ struct ib_device {
>         int                        (*map_mr_sg)(struct ib_mr *mr,
>                                                 struct scatterlist *sg,
>                                                 int sg_nents);
> -       int                        (*rereg_phys_mr)(struct ib_mr *mr,
> -                                                   int mr_rereg_mask,
> -                                                   struct ib_pd *pd,
> -                                                   struct ib_phys_buf *phys_buf_array,
> -                                                   int num_phys_buf,
> -                                                   int mr_access_flags,
> -                                                   u64 *iova_start);
>         struct ib_mw *             (*alloc_mw)(struct ib_pd *pd,
>                                                enum ib_mw_type type);
>         int                        (*bind_mw)(struct ib_qp *qp,
> --
> 1.9.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]     ` <1448214409-7729-11-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
  2015-11-23 19:41       ` Jason Gunthorpe
@ 2015-12-22  9:17       ` Sagi Grimberg
       [not found]         ` <56791542.6020604-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
  1 sibling, 1 reply; 35+ messages in thread
From: Sagi Grimberg @ 2015-12-22  9:17 UTC (permalink / raw)
  To: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss

Hi Christoph,

> While IB supports the notion of returning separate local and remote keys
> from a memory registration, the iWarp spec doesn't and neither does any
> of our in-tree HCA drivers [1] nor consumers.  Consolidate the in-kernel
> API to provide only a single key and make everyones life easier.

What makes me worried here is that the IB/RoCE specification really
defines different keys for local and remote access. I'm less concerned
about our consumers but more about our providers. We keep seeing new
providers come along and its not impossible that a specific HW will
*rely* on this distinction. In such a case we'd need to revert this
patch altogether in that very moment.

I think we're better off working on proper abstractions to help ULPs
get it right (and simple!), without risking future devices support.

Sagi.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]         ` <56791542.6020604-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
@ 2015-12-22 13:13           ` Christoph Hellwig
       [not found]             ` <20151222131326.GA25267-jcswGhMUV9g@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2015-12-22 13:13 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss

On Tue, Dec 22, 2015 at 11:17:54AM +0200, Sagi Grimberg wrote:
> What makes me worried here is that the IB/RoCE specification really
> defines different keys for local and remote access. I'm less concerned
> about our consumers but more about our providers. We keep seeing new
> providers come along and it's not impossible that a specific HW will
> *rely* on this distinction. In such a case we'd need to revert this
> patch altogether in that very moment.
>
> I think we're better off working on proper abstractions to help ULPs
> get it right (and simple!), without risking future devices support.

With the new API in the next patch ULPs simply can't request an lkey
and a rkey at the same time, so for kernel use it's not a problem at
all.  That leaves my favourite nightmare: uverbs, which of course
allows for everything under the sun, just because we can.  I guess
the right answer to that problem is to first split the data structures
for kernel and user MRs, which we probably should have done much
earlier.  Not just because of this but also because of other issues
like all the fields your FR API changes added to ib_mr that aren't
needed for user MRs, or because the user MR structure should really be
merged with struct ib_umem.

>
> Sagi.
---end quoted text---

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]             ` <20151222131326.GA25267-jcswGhMUV9g@public.gmane.org>
@ 2015-12-22 13:50               ` Sagi Grimberg
       [not found]                 ` <56795514.9090704-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Sagi Grimberg @ 2015-12-22 13:50 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss



On 22/12/2015 15:13, Christoph Hellwig wrote:
> On Tue, Dec 22, 2015 at 11:17:54AM +0200, Sagi Grimberg wrote:
>> What makes me worried here is that the IB/RoCE specification really
>> defines different keys for local and remote access. I'm less concerned
>> about our consumers but more about our providers. We keep seeing new
>> providers come along and it's not impossible that a specific HW will
>> *rely* on this distinction. In such a case we'd need to revert this
>> patch altogether in that very moment.
>>
>> I think we're better off working on proper abstractions to help ULPs
>> get it right (and simple!), without risking future devices support.
>
> With the new API in the next patch ULPs simply can't request an lkey
> and a rkey at the same time, so for kernel use it's not a problem at
> all.

This is why I said that the problem here is not the ULPs. But if a new
HW comes along with a distinction between rkeys and lkeys it will have
a problem. For example, a HW that allocates two different keys, an
rkey and an lkey, and chooses to fail a SEND from an rkey, or an
incoming READ/WRITE to an lkey. How can such a device be supported
with an API that allows a single key per MR?

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]                 ` <56795514.9090704-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
@ 2015-12-22 13:59                   ` Christoph Hellwig
       [not found]                     ` <20151222135927.GA26311-jcswGhMUV9g@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2015-12-22 13:59 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss

On Tue, Dec 22, 2015 at 03:50:12PM +0200, Sagi Grimberg wrote:
> This is why I said that the problem here is not the ULPs. But if a new
> HW comes along with distinction between rkeys and lkeys it will have a
> problem. For example a HW allocates two different keys, rkey and lkey.
> And, it chooses to fail SEND from a rkey, or incoming READ/WRITE to a
> lkey. How can such a device be supported with an API that allows a
> single key per MR?

The ULP decides if this MR is going to be used as a lkey or rkey
by passing IB_REG_LKEY or IB_REG_RKEY.  The HCA driver will then
fill mr->key by the lkey or rkey based on that and everything will
work fine.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]                     ` <20151222135927.GA26311-jcswGhMUV9g@public.gmane.org>
@ 2015-12-22 16:58                       ` Sagi Grimberg
       [not found]                         ` <56798126.9000003-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Sagi Grimberg @ 2015-12-22 16:58 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss


> The ULP decides if this MR is going to be used as a lkey or rkey
> by passing IB_REG_LKEY or IB_REG_RKEY.  The HCA driver will then
> fill mr->key by the lkey or rkey based on that and everything will
> work fine.

But the ULP *can* register a memory buffer with local and remote
access permissions. One example is ImmediateData or a FirstBurst
implementation where an initiator sends the first burst of data and
the target reads the rest of it. My concern is that a device that
uses different keys would not be able to support that (or we make
the ULP perform two registrations).

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]                         ` <56798126.9000003-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
@ 2015-12-22 21:09                           ` Jason Gunthorpe
       [not found]                             ` <7sikrilf3ik3fjsgppqxh9gn.1451043246057@email.android.com>
  0 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2015-12-22 21:09 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss

On Tue, Dec 22, 2015 at 06:58:14PM +0200, Sagi Grimberg wrote:
> 
> >The ULP decides if this MR is going to be used as a lkey or rkey
> >by passing IB_REG_LKEY or IB_REG_RKEY.  The HCA driver will then
> >fill mr->key by the lkey or rkey based on that and everything will
> >work fine.
> 
> But the ULP *can* register a memory buffer with local and remote
> access permissions.

Not in the new API.

If a ULP ever comes along that does need that then they can start with
two MRs and then eventually upgrade the kapi to have some kind of
efficient bidir MR mode.

What we've seen on the list lately is that every single ULP seems to
have technical problems running the stack properly. We need to get off
this idea that the spec has to govern the kapi - that didn't lead us
any place nice.

Jason

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]                                 ` <CAMHdiBDv9mScD+5o90OQ_tmAKw9o48_jfao6W+EBOZkUg+wReA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2015-12-25 16:46                                   ` Liran Liss
       [not found]                                     ` <CAMHdiBC75n+u9BSMsKxJwcaBnGL68_3UBeQQtHp_CD5KGcUTnQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Liran Liss @ 2015-12-25 16:46 UTC (permalink / raw)
  To: jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/
  Cc: sagig-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Christoph Hellwig,
	dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss

> From: Jason Gunthorpe <jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>

> > >fill mr->key by the lkey or rkey based on that and everything will
> > >work fine.
> >
> > But the ULP *can* register a memory buffer with local and remote
> > access permissions.
> Not in the new API.
>
> If a ULP ever comes along that does need that then they can start with
> two MRs and then eventually upgrade the kapi to have some kind of
> efficient bidir MR mode.

ULPs are *already* using the same registrations for both local and
remote access.
So, currently, this is a no-go.
Sorry.

Let's make the required adjustments, which don't violate specs, and
then continue.

>
> What we've seen on the list lately is that every single ULP seems to
> have technical problems running the stack properly. We need to get off
> this idea that the spec has to govern the kapi - that didn't lead us
> any place nice.
>

Oh, please.
Our stack connects millions of endpoints in production using multiple
ULPs for the last 15 years with full interoperability because of
specs.

I am sure that we can keep improving our APIs while adhering to
industry standards, as we have done very successfully until now.

--Liran

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]                                     ` <CAMHdiBC75n+u9BSMsKxJwcaBnGL68_3UBeQQtHp_CD5KGcUTnQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2016-01-05 17:46                                       ` Jason Gunthorpe
       [not found]                                         ` <20160105174636.GD16269-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2016-01-05 17:46 UTC (permalink / raw)
  To: Liran Liss
  Cc: sagig-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb, Christoph Hellwig,
	dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss

On Fri, Dec 25, 2015 at 06:46:07PM +0200, Liran Liss wrote:
> > From: Jason Gunthorpe <jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
> 
> > > >fill mr->key by the lkey or rkey based on that and everything will
> > > >work fine.
> > >
> > > But the ULP *can* register a memory buffer with local and remote
> > > access permissions.
> > Not in the new API.
> >
> > If a ULP ever comes along that does need that then they can start with
> > two MRs and then eventually upgrade the kapi to have some kind of
> > efficient bidir MR mode.
> 
> ULPs are *already* using the same registrations for both local and
> remote access.

Where? Out of tree?

Jason

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]                                         ` <20160105174636.GD16269-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2016-01-06  6:08                                           ` Christoph Hellwig
       [not found]                                             ` <20160106060806.GA16509-jcswGhMUV9g@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Christoph Hellwig @ 2016-01-06  6:08 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Liran Liss, sagig-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb,
	Christoph Hellwig, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss

On Tue, Jan 05, 2016 at 10:46:36AM -0700, Jason Gunthorpe wrote:
> > ULPs are *already* using the same registrations for both local and
> > remote access.
> 
> Where? Out of tree?

I haven't found anything in-tree for sure.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]                                             ` <20160106060806.GA16509-jcswGhMUV9g@public.gmane.org>
@ 2016-01-06 15:10                                               ` Sagi Grimberg
       [not found]                                                 ` <568D2E4E.5020806-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
  0 siblings, 1 reply; 35+ messages in thread
From: Sagi Grimberg @ 2016-01-06 15:10 UTC (permalink / raw)
  To: Christoph Hellwig, Jason Gunthorpe
  Cc: Liran Liss, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss


>>> ULPs are *already* using the same registrations for both local and
>>> remote access.
>>
>> Where? Out of tree?
>
> I haven't found anything in-tree for sure.

We have that in iSER.

iSCSI allows FirstBurst functionality, and iSER, as an iSCSI
transport, is required to support it.

The FirstBurst is divided into ImmediateData, which comes with
the SCSI cdb, followed by UnsolicitedDataOut commands. This
depends on what the target negotiates during the iSCSI login.

So basically, say we have a 128k write:
the first IMMEDIATEDATA_SIZE (8k in LIO) will come in the first
send with the SCSI cdb, the next UNSOLICITED_DATA_OUT_SIZE (56k in
LIO, completing the 64k first burst) will follow as several
unsolicited dataout commands, and the remaining 64k will be read
by the target via RDMA READ.

So this is why iSER registers the entire SG list and then just works
with virtual offsets in the memory region.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/11] IB: only keep a single key in struct ib_mr
       [not found]                                                 ` <568D2E4E.5020806-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
@ 2016-01-06 16:59                                                   ` Jason Gunthorpe
  0 siblings, 0 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2016-01-06 16:59 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Christoph Hellwig, Liran Liss, dledford-H+wXaHxf7aLQT0dZR+AlfA,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Liran Liss

On Wed, Jan 06, 2016 at 05:10:06PM +0200, Sagi Grimberg wrote:
> 
> >>>ULPs are *already* using the same registrations for both local and
> >>>remote access.
> >>
> >>Where? Out of tree?
> >
> >I haven't found anything in-tree for sure.
> 
> We have that in iSER.
> 
> iSCSI allows a FirstBurst functionality and iSER as an iSCSI
> transport is required to support that.
> 
> The FirstBurst is divided into ImmediateData, which comes with
> the SCSI cdb, followed by UnsolicitedDataOut commands. This
> depends on what the target negotiates during the iSCSI login.

Why not use local_dma_lkey for the CDB part?

Jason

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2016-01-06 16:59 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-22 17:46 memory registration updates Christoph Hellwig
     [not found] ` <1448214409-7729-1-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
2015-11-22 17:46   ` [PATCH 01/11] IB: start documenting device capabilities Christoph Hellwig
2015-11-22 17:46   ` [PATCH 02/11] IB: remove ib_query_mr Christoph Hellwig
2015-11-22 17:46   ` [PATCH 03/11] IB: remove support for phys MRs Christoph Hellwig
     [not found]     ` <1448214409-7729-4-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
2015-11-24 12:01       ` Devesh Sharma
2015-11-22 17:46   ` [PATCH 04/11] IB: remove in-kernel support for memory windows Christoph Hellwig
2015-11-22 17:46   ` [PATCH 05/11] cxgb3: simplify iwch_get_dma_wr Christoph Hellwig
2015-11-22 17:46   ` [PATCH 06/11] nes: simplify nes_reg_phys_mr calling conventions Christoph Hellwig
2015-11-22 17:46   ` [PATCH 07/11] amso1100: fold c2_reg_phys_mr into c2_get_dma_mr Christoph Hellwig
2015-11-22 17:46   ` [PATCH 08/11] ehca: stop using struct ib_phys_buf Christoph Hellwig
2015-11-22 17:46   ` [PATCH 09/11] IB: remove the struct ib_phys_buf definition Christoph Hellwig
2015-11-22 17:46   ` [PATCH 10/11] IB: only keep a single key in struct ib_mr Christoph Hellwig
     [not found]     ` <1448214409-7729-11-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
2015-11-23 19:41       ` Jason Gunthorpe
     [not found]         ` <20151123194124.GF32085-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2015-11-23 19:48           ` Christoph Hellwig
2015-12-22  9:17       ` Sagi Grimberg
     [not found]         ` <56791542.6020604-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
2015-12-22 13:13           ` Christoph Hellwig
     [not found]             ` <20151222131326.GA25267-jcswGhMUV9g@public.gmane.org>
2015-12-22 13:50               ` Sagi Grimberg
     [not found]                 ` <56795514.9090704-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
2015-12-22 13:59                   ` Christoph Hellwig
     [not found]                     ` <20151222135927.GA26311-jcswGhMUV9g@public.gmane.org>
2015-12-22 16:58                       ` Sagi Grimberg
     [not found]                         ` <56798126.9000003-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
2015-12-22 21:09                           ` Jason Gunthorpe
     [not found]                             ` <7sikrilf3ik3fjsgppqxh9gn.1451043246057@email.android.com>
     [not found]                               ` <CAMHdiBDv9mScD+5o90OQ_tmAKw9o48_jfao6W+EBOZkUg+wReA@mail.gmail.com>
     [not found]                                 ` <CAMHdiBDv9mScD+5o90OQ_tmAKw9o48_jfao6W+EBOZkUg+wReA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2015-12-25 16:46                                   ` Liran Liss
     [not found]                                     ` <CAMHdiBC75n+u9BSMsKxJwcaBnGL68_3UBeQQtHp_CD5KGcUTnQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2016-01-05 17:46                                       ` Jason Gunthorpe
     [not found]                                         ` <20160105174636.GD16269-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2016-01-06  6:08                                           ` Christoph Hellwig
     [not found]                                             ` <20160106060806.GA16509-jcswGhMUV9g@public.gmane.org>
2016-01-06 15:10                                               ` Sagi Grimberg
     [not found]                                                 ` <568D2E4E.5020806-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
2016-01-06 16:59                                                   ` Jason Gunthorpe
2015-11-22 17:46   ` [PATCH 11/11] IB: provide better access flags for fast registrations Christoph Hellwig
     [not found]     ` <1448214409-7729-12-git-send-email-hch-jcswGhMUV9g@public.gmane.org>
2015-11-23 15:09       ` Chuck Lever
     [not found]         ` <EE465F12-A5F6-4C95-A77C-199DFF8F77BD-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
2015-11-23 15:49           ` Christoph Hellwig
2015-11-23 18:58       ` Jason Gunthorpe
     [not found]         ` <20151123185829.GE32085-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2015-11-23 19:44           ` Christoph Hellwig
2015-11-23  9:03   ` memory registration updates Sagi Grimberg
     [not found]     ` <5652D66E.5070600-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
2015-11-23 14:16       ` Christoph Hellwig
     [not found]         ` <20151123141650.GA11116-jcswGhMUV9g@public.gmane.org>
2015-11-23 15:17           ` Sagi Grimberg
     [not found]             ` <56532E0A.20907-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
2015-11-23 19:43               ` Christoph Hellwig
2015-11-23 15:06   ` Steve Wise
