* Memory windows support for rxe
@ 2020-08-15  4:58 Bob Pearson
  2020-08-15  4:58 ` [PATCH 01/20] Added ib_uverbs_wc_opcode to ib_user_verbs.h Bob Pearson
                   ` (19 more replies)
  0 siblings, 20 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma


Bob



* [PATCH 01/20] Added ib_uverbs_wc_opcode to ib_user_verbs.h
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 02/20] Added missing IB_WR_BIND_MW opcode Bob Pearson
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

This enum plays the same role as ib_uverbs_wr_opcode, documenting the
work completion opcodes in the user-space API. It is mainly of interest
to software drivers like rxe.

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 include/uapi/rdma/ib_user_verbs.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h
index 0474c7400268..456438c18c2c 100644
--- a/include/uapi/rdma/ib_user_verbs.h
+++ b/include/uapi/rdma/ib_user_verbs.h
@@ -457,6 +457,17 @@ struct ib_uverbs_poll_cq {
 	__u32 ne;
 };
 
+enum ib_uverbs_wc_opcode {
+	IB_UVERBS_WC_SEND = 0,
+	IB_UVERBS_WC_RDMA_WRITE = 1,
+	IB_UVERBS_WC_RDMA_READ = 2,
+	IB_UVERBS_WC_COMP_SWAP = 3,
+	IB_UVERBS_WC_FETCH_ADD = 4,
+	IB_UVERBS_WC_BIND_MW = 5,
+	IB_UVERBS_WC_LOCAL_INV = 6,
+	IB_UVERBS_WC_TSO = 7,
+};
+
 struct ib_uverbs_wc {
 	__aligned_u64 wr_id;
 	__u32 status;
-- 
2.25.1



* [PATCH 02/20] Added missing IB_WR_BIND_MW opcode
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
  2020-08-15  4:58 ` [PATCH 01/20] Added ib_uverbs_wc_opcode to ib_user_verbs.h Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 03/20] Added bind_mw parameters to rxe_send_wr Bob Pearson
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Also assigned each IB_WC_XXX value to the corresponding IB_UVERBS_WC_XXX
value where it is defined, following the same pattern as the IB_WR_XXX
opcodes. This fixes an incorrect value for LSO that had crept in but was
never used.
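
As a sketch only (not part of this patch), the assignments could also be
double-checked at build time on the kernel side; rxe_check_wc_opcodes()
is a hypothetical helper:

	#include <linux/build_bug.h>
	#include <rdma/ib_verbs.h>

	/* compile-time documentation that the kernel and uapi values agree */
	static inline void rxe_check_wc_opcodes(void)
	{
		BUILD_BUG_ON(IB_WC_SEND != IB_UVERBS_WC_SEND);
		BUILD_BUG_ON(IB_WC_BIND_MW != IB_UVERBS_WC_BIND_MW);
		BUILD_BUG_ON(IB_WC_LSO != IB_UVERBS_WC_TSO);
	}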

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 include/rdma/ib_verbs.h | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index c0b2fa7e9b95..05362947322b 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -952,13 +952,14 @@ enum ib_wc_status {
 const char *__attribute_const__ ib_wc_status_msg(enum ib_wc_status status);
 
 enum ib_wc_opcode {
-	IB_WC_SEND,
-	IB_WC_RDMA_WRITE,
-	IB_WC_RDMA_READ,
-	IB_WC_COMP_SWAP,
-	IB_WC_FETCH_ADD,
-	IB_WC_LSO,
-	IB_WC_LOCAL_INV,
+	IB_WC_SEND = IB_UVERBS_WC_SEND,
+	IB_WC_RDMA_WRITE = IB_UVERBS_WC_RDMA_WRITE,
+	IB_WC_RDMA_READ = IB_UVERBS_WC_RDMA_READ,
+	IB_WC_COMP_SWAP = IB_UVERBS_WC_COMP_SWAP,
+	IB_WC_FETCH_ADD = IB_UVERBS_WC_FETCH_ADD,
+	IB_WC_BIND_MW = IB_UVERBS_WC_BIND_MW,
+	IB_WC_LOCAL_INV = IB_UVERBS_WC_LOCAL_INV,
+	IB_WC_LSO = IB_UVERBS_WC_TSO,
 	IB_WC_REG_MR,
 	IB_WC_MASKED_COMP_SWAP,
 	IB_WC_MASKED_FETCH_ADD,
@@ -1291,6 +1292,7 @@ enum ib_wr_opcode {
 	IB_WR_RDMA_READ = IB_UVERBS_WR_RDMA_READ,
 	IB_WR_ATOMIC_CMP_AND_SWP = IB_UVERBS_WR_ATOMIC_CMP_AND_SWP,
 	IB_WR_ATOMIC_FETCH_AND_ADD = IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD,
+	IB_WR_BIND_MW = IB_UVERBS_WR_BIND_MW,
 	IB_WR_LSO = IB_UVERBS_WR_TSO,
 	IB_WR_SEND_WITH_INV = IB_UVERBS_WR_SEND_WITH_INV,
 	IB_WR_RDMA_READ_WITH_INV = IB_UVERBS_WR_RDMA_READ_WITH_INV,
-- 
2.25.1



* [PATCH 03/20] Added bind_mw parameters to rxe_send_wr.
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
  2020-08-15  4:58 ` [PATCH 01/20] Added ib_uverbs_wc_opcode to ib_user_verbs.h Bob Pearson
  2020-08-15  4:58 ` [PATCH 02/20] Added missing IB_WR_BIND_MW opcode Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 04/20] Added stubs for alloc_mw and dealloc_mw verbs Bob Pearson
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

This is a first prototype of the user/kernel ABI extension that adds
memory window functionality to the rxe driver. It is refined in later
patches in this series.
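
As a rough illustration (not part of this patch), a user-space provider
translating a memory window bind into this ABI might fill the new union
member as sketched below; fill_bind_mw_wr() and new_rkey are
hypothetical names, and the layout is still expected to change:

	#include <infiniband/verbs.h>
	#include <rdma/rdma_user_rxe.h>

	/* copy an ibv_mw_bind_info into the prototype bind_mw fields */
	static void fill_bind_mw_wr(struct rxe_send_wr *wr,
				    const struct ibv_mw_bind_info *info,
				    const struct ibv_mw *mw, uint32_t new_rkey)
	{
		wr->wr.bind_mw.addr    = info->addr;
		wr->wr.bind_mw.length  = info->length;
		wr->wr.bind_mw.mr_rkey = info->mr->rkey; /* MR backing the window */
		wr->wr.bind_mw.mw_rkey = mw->rkey;       /* current rkey of the MW */
		wr->wr.bind_mw.rkey    = new_rkey;       /* rkey requested by the bind */
		wr->wr.bind_mw.access  = info->mw_access_flags;
	}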

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 include/uapi/rdma/rdma_user_rxe.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h
index aae2e696bb38..f88867d85c3f 100644
--- a/include/uapi/rdma/rdma_user_rxe.h
+++ b/include/uapi/rdma/rdma_user_rxe.h
@@ -93,6 +93,14 @@ struct rxe_send_wr {
 			__u32	remote_qkey;
 			__u16	pkey_index;
 		} ud;
+		struct {
+			__aligned_u64	addr;
+			__aligned_u64	length;
+			__u32	mr_rkey;
+			__u32	mw_rkey;
+			__u32	rkey;
+			__u32	access;
+		} bind_mw;
 		/* reg is only used by the kernel and is not part of the uapi */
 		struct {
 			union {
-- 
2.25.1



* [PATCH 04/20] Added stubs for alloc_mw and dealloc_mw verbs.
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (2 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 03/20] Added bind_mw parameters to rxe_send_wr Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 05/20] Separated MR and MW objects Bob Pearson
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Added a new file, rxe_mw.c, focused on memory windows, with stubbed-out
kernel verbs for alloc_mw and dealloc_mw. These functions are added to
the device ops struct and the corresponding bits are set in the uverbs
command mask.

These stubs allow progress on the rdma-core user-space tests.
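
For example (a sketch only, not part of this patch), an rdma-core test
can now exercise the verbs end to end; with just the stubs in place
ibv_alloc_mw() is expected to fail cleanly with the kernel's -ENOSYS
rather than being rejected before reaching the driver:

	#include <stdio.h>
	#include <infiniband/verbs.h>

	static int try_mw(struct ibv_pd *pd)
	{
		struct ibv_mw *mw = ibv_alloc_mw(pd, IBV_MW_TYPE_2);

		if (!mw) {		/* stubbed kernel path returns an error */
			perror("ibv_alloc_mw");
			return -1;
		}
		return ibv_dealloc_mw(mw); /* works once later patches land */
	}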

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/Makefile    |  1 +
 drivers/infiniband/sw/rxe/rxe_loc.h   |  5 +++
 drivers/infiniband/sw/rxe/rxe_mw.c    | 49 +++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c |  4 +++
 4 files changed, 59 insertions(+)
 create mode 100644 drivers/infiniband/sw/rxe/rxe_mw.c

diff --git a/drivers/infiniband/sw/rxe/Makefile b/drivers/infiniband/sw/rxe/Makefile
index 66af72dca759..1e24673e9318 100644
--- a/drivers/infiniband/sw/rxe/Makefile
+++ b/drivers/infiniband/sw/rxe/Makefile
@@ -15,6 +15,7 @@ rdma_rxe-y := \
 	rxe_qp.o \
 	rxe_cq.o \
 	rxe_mr.o \
+	rxe_mw.o \
 	rxe_opcode.o \
 	rxe_mmap.o \
 	rxe_icrc.o \
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 39dc3bfa5d5d..02f8ff4ed8f2 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -136,6 +136,11 @@ void rxe_mem_cleanup(struct rxe_pool_entry *arg);
 
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
 
+/* rxe_mw.c */
+struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
+			   struct ib_udata *udata);
+int rxe_dealloc_mw(struct ib_mw *ibmw);
+
 /* rxe_net.c */
 void rxe_loopback(struct sk_buff *skb);
 int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb);
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
new file mode 100644
index 000000000000..6139dc9d8dd8
--- /dev/null
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -0,0 +1,49 @@
+/*
+ * Copyright (c) 2020 Hewlett Packard Enterprise, Inc. All rights reserved.
+ * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *	- Redistributions of source code must retain the above
+ *	  copyright notice, this list of conditions and the following
+ *	  disclaimer.
+ *
+ *	- Redistributions in binary form must reproduce the above
+ *	  copyright notice, this list of conditions and the following
+ *	  disclaimer in the documentation and/or other materials
+ *	  provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "rxe.h"
+#include "rxe_loc.h"
+
+struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
+			   struct ib_udata *udata)
+{
+	pr_err_once("rxe_alloc_mw: not implemented\n");
+	return ERR_PTR(-ENOSYS);
+}
+
+int rxe_dealloc_mw(struct ib_mw *ibmw)
+{
+	pr_err_once("rxe_dealloc_mw: not implemented\n");
+	return -ENOSYS;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index bb61e534e468..1dbc69b86859 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -1128,6 +1128,8 @@ static const struct ib_device_ops rxe_dev_ops = {
 	.reg_user_mr = rxe_reg_user_mr,
 	.req_notify_cq = rxe_req_notify_cq,
 	.resize_cq = rxe_resize_cq,
+	.alloc_mw = rxe_alloc_mw,
+	.dealloc_mw = rxe_dealloc_mw,
 
 	INIT_RDMA_OBJ_SIZE(ib_ah, rxe_ah, ibah),
 	INIT_RDMA_OBJ_SIZE(ib_cq, rxe_cq, ibcq),
@@ -1189,6 +1191,8 @@ int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
 	    | BIT_ULL(IB_USER_VERBS_CMD_DESTROY_AH)
 	    | BIT_ULL(IB_USER_VERBS_CMD_ATTACH_MCAST)
 	    | BIT_ULL(IB_USER_VERBS_CMD_DETACH_MCAST)
+	    | BIT_ULL(IB_USER_VERBS_CMD_ALLOC_MW)
+	    | BIT_ULL(IB_USER_VERBS_CMD_DEALLOC_MW)
 	    ;
 
 	ib_set_device_ops(dev, &rxe_dev_ops);
-- 
2.25.1



* [PATCH 05/20] Separated MR and MW objects.
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (3 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 04/20] Added stubs for alloc_mw and dealloc_mw verbs Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 06/20] Added a basic rxe_mw struct Bob Pearson
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

In the original rxe implementation the intent was to use a common
object to represent both MRs and MWs, but it became clear that they are
different enough to deserve separate objects.

This allows replacing the "mem" name with "mr" for MRs, which is less
confusing. This is a long patch that simply changes mem to mr everywhere
it makes sense.

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_comp.c  |   4 +-
 drivers/infiniband/sw/rxe/rxe_loc.h   |  26 +--
 drivers/infiniband/sw/rxe/rxe_mr.c    | 258 +++++++++++++-------------
 drivers/infiniband/sw/rxe/rxe_pool.c  |   6 +-
 drivers/infiniband/sw/rxe/rxe_req.c   |   6 +-
 drivers/infiniband/sw/rxe/rxe_resp.c  |  30 +--
 drivers/infiniband/sw/rxe/rxe_verbs.c |  19 +-
 drivers/infiniband/sw/rxe/rxe_verbs.h |  31 ++--
 8 files changed, 189 insertions(+), 191 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index 4bc88708b355..b9b8c4e115f4 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -372,7 +372,7 @@ static inline enum comp_state do_read(struct rxe_qp *qp,
 
 	ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
 			&wqe->dma, payload_addr(pkt),
-			payload_size(pkt), to_mem_obj, NULL);
+			payload_size(pkt), to_mr_obj, NULL);
 	if (ret)
 		return COMPST_ERROR;
 
@@ -392,7 +392,7 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp,
 
 	ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
 			&wqe->dma, &atomic_orig,
-			sizeof(u64), to_mem_obj, NULL);
+			sizeof(u64), to_mr_obj, NULL);
 	if (ret)
 		return COMPST_ERROR;
 	else
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 02f8ff4ed8f2..42375af68f48 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -99,40 +99,40 @@ int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
 
 /* rxe_mr.c */
 enum copy_direction {
-	to_mem_obj,
-	from_mem_obj,
+	to_mr_obj,
+	from_mr_obj,
 };
 
-void rxe_mem_init_dma(struct rxe_pd *pd,
-		      int access, struct rxe_mem *mem);
+int rxe_mr_init_dma(struct rxe_pd *pd,
+		     int access, struct rxe_mr *mr);
 
-int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
+int rxe_mr_init_user(struct rxe_pd *pd, u64 start,
 		      u64 length, u64 iova, int access, struct ib_udata *udata,
-		      struct rxe_mem *mr);
+		      struct rxe_mr *mr);
 
-int rxe_mem_init_fast(struct rxe_pd *pd,
-		      int max_pages, struct rxe_mem *mem);
+int rxe_mr_init_fast(struct rxe_pd *pd,
+		      int max_pages, struct rxe_mr *mr);
 
-int rxe_mem_copy(struct rxe_mem *mem, u64 iova, void *addr,
+int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
 		 int length, enum copy_direction dir, u32 *crcp);
 
 int copy_data(struct rxe_pd *pd, int access,
 	      struct rxe_dma_info *dma, void *addr, int length,
 	      enum copy_direction dir, u32 *crcp);
 
-void *iova_to_vaddr(struct rxe_mem *mem, u64 iova, int length);
+void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
 
 enum lookup_type {
 	lookup_local,
 	lookup_remote,
 };
 
-struct rxe_mem *lookup_mem(struct rxe_pd *pd, int access, u32 key,
+struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			   enum lookup_type type);
 
-int mem_check_range(struct rxe_mem *mem, u64 iova, size_t length);
+int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
 
-void rxe_mem_cleanup(struct rxe_pool_entry *arg);
+void rxe_mr_cleanup(struct rxe_pool_entry *arg);
 
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index cdd811a45120..0606f04e1d18 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -51,17 +51,17 @@ static u8 rxe_get_key(void)
 	return key;
 }
 
-int mem_check_range(struct rxe_mem *mem, u64 iova, size_t length)
+int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 {
-	switch (mem->type) {
+	switch (mr->type) {
 	case RXE_MEM_TYPE_DMA:
 		return 0;
 
 	case RXE_MEM_TYPE_MR:
 	case RXE_MEM_TYPE_FMR:
-		if (iova < mem->iova ||
-		    length > mem->length ||
-		    iova > mem->iova + mem->length - length)
+		if (iova < mr->iova ||
+		    length > mr->length ||
+		    iova > mr->iova + mr->length - length)
 			return -EFAULT;
 		return 0;
 
@@ -74,90 +74,90 @@ int mem_check_range(struct rxe_mem *mem, u64 iova, size_t length)
 				| IB_ACCESS_REMOTE_WRITE	\
 				| IB_ACCESS_REMOTE_ATOMIC)
 
-static void rxe_mem_init(int access, struct rxe_mem *mem)
+static void rxe_mr_init(int access, struct rxe_mr *mr)
 {
-	u32 lkey = mem->pelem.index << 8 | rxe_get_key();
+	u32 lkey = mr->pelem.index << 8 | rxe_get_key();
 	u32 rkey = (access & IB_ACCESS_REMOTE) ? lkey : 0;
 
-	if (mem->pelem.pool->type == RXE_TYPE_MR) {
-		mem->ibmr.lkey		= lkey;
-		mem->ibmr.rkey		= rkey;
+	if (mr->pelem.pool->type == RXE_TYPE_MR) {
+		mr->ibmr.lkey		= lkey;
+		mr->ibmr.rkey		= rkey;
 	}
 
-	mem->lkey		= lkey;
-	mem->rkey		= rkey;
-	mem->state		= RXE_MEM_STATE_INVALID;
-	mem->type		= RXE_MEM_TYPE_NONE;
-	mem->map_shift		= ilog2(RXE_BUF_PER_MAP);
+	mr->lkey		= lkey;
+	mr->rkey		= rkey;
+	mr->state		= RXE_MEM_STATE_INVALID;
+	mr->type		= RXE_MEM_TYPE_NONE;
+	mr->map_shift		= ilog2(RXE_BUF_PER_MAP);
 }
 
-void rxe_mem_cleanup(struct rxe_pool_entry *arg)
+void rxe_mr_cleanup(struct rxe_pool_entry *arg)
 {
-	struct rxe_mem *mem = container_of(arg, typeof(*mem), pelem);
+	struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
 	int i;
 
-	ib_umem_release(mem->umem);
+	ib_umem_release(mr->umem);
 
-	if (mem->map) {
-		for (i = 0; i < mem->num_map; i++)
-			kfree(mem->map[i]);
+	if (mr->map) {
+		for (i = 0; i < mr->num_map; i++)
+			kfree(mr->map[i]);
 
-		kfree(mem->map);
+		kfree(mr->map);
 	}
 }
 
-static int rxe_mem_alloc(struct rxe_mem *mem, int num_buf)
+static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
 {
 	int i;
 	int num_map;
-	struct rxe_map **map = mem->map;
+	struct rxe_map **map = mr->map;
 
 	num_map = (num_buf + RXE_BUF_PER_MAP - 1) / RXE_BUF_PER_MAP;
 
-	mem->map = kmalloc_array(num_map, sizeof(*map), GFP_KERNEL);
-	if (!mem->map)
+	mr->map = kmalloc_array(num_map, sizeof(*map), GFP_KERNEL);
+	if (!mr->map)
 		goto err1;
 
 	for (i = 0; i < num_map; i++) {
-		mem->map[i] = kmalloc(sizeof(**map), GFP_KERNEL);
-		if (!mem->map[i])
+		mr->map[i] = kmalloc(sizeof(**map), GFP_KERNEL);
+		if (!mr->map[i])
 			goto err2;
 	}
 
 	BUILD_BUG_ON(!is_power_of_2(RXE_BUF_PER_MAP));
 
-	mem->map_shift	= ilog2(RXE_BUF_PER_MAP);
-	mem->map_mask	= RXE_BUF_PER_MAP - 1;
+	mr->map_shift	= ilog2(RXE_BUF_PER_MAP);
+	mr->map_mask	= RXE_BUF_PER_MAP - 1;
 
-	mem->num_buf = num_buf;
-	mem->num_map = num_map;
-	mem->max_buf = num_map * RXE_BUF_PER_MAP;
+	mr->num_buf = num_buf;
+	mr->num_map = num_map;
+	mr->max_buf = num_map * RXE_BUF_PER_MAP;
 
 	return 0;
 
 err2:
 	for (i--; i >= 0; i--)
-		kfree(mem->map[i]);
+		kfree(mr->map[i]);
 
-	kfree(mem->map);
+	kfree(mr->map);
 err1:
 	return -ENOMEM;
 }
 
-void rxe_mem_init_dma(struct rxe_pd *pd,
-		      int access, struct rxe_mem *mem)
+void rxe_mr_init_dma(struct rxe_pd *pd,
+		     int access, struct rxe_mr *mr)
 {
-	rxe_mem_init(access, mem);
+	rxe_mr_init(access, mr);
 
-	mem->pd			= pd;
-	mem->access		= access;
-	mem->state		= RXE_MEM_STATE_VALID;
-	mem->type		= RXE_MEM_TYPE_DMA;
+	mr->pd			= pd;
+	mr->access		= access;
+	mr->state		= RXE_MEM_STATE_VALID;
+	mr->type		= RXE_MEM_TYPE_DMA;
 }
 
-int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
+int rxe_mr_init_user(struct rxe_pd *pd, u64 start,
 		      u64 length, u64 iova, int access, struct ib_udata *udata,
-		      struct rxe_mem *mem)
+		      struct rxe_mr *mr)
 {
 	struct rxe_map		**map;
 	struct rxe_phys_buf	*buf = NULL;
@@ -175,23 +175,23 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
 		goto err1;
 	}
 
-	mem->umem = umem;
+	mr->umem = umem;
 	num_buf = ib_umem_num_pages(umem);
 
-	rxe_mem_init(access, mem);
+	rxe_mr_init(access, mr);
 
-	err = rxe_mem_alloc(mem, num_buf);
+	err = rxe_mr_alloc(mr, num_buf);
 	if (err) {
-		pr_warn("err %d from rxe_mem_alloc\n", err);
+		pr_warn("err %d from rxe_mr_alloc\n", err);
 		ib_umem_release(umem);
 		goto err1;
 	}
 
-	mem->page_shift		= PAGE_SHIFT;
-	mem->page_mask = PAGE_SIZE - 1;
+	mr->page_shift		= PAGE_SHIFT;
+	mr->page_mask = PAGE_SIZE - 1;
 
 	num_buf			= 0;
-	map			= mem->map;
+	map			= mr->map;
 	if (length > 0) {
 		buf = map[0]->buf;
 
@@ -217,15 +217,15 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
 		}
 	}
 
-	mem->pd			= pd;
-	mem->umem		= umem;
-	mem->access		= access;
-	mem->length		= length;
-	mem->iova		= iova;
-	mem->va			= start;
-	mem->offset		= ib_umem_offset(umem);
-	mem->state		= RXE_MEM_STATE_VALID;
-	mem->type		= RXE_MEM_TYPE_MR;
+	mr->pd			= pd;
+	mr->umem		= umem;
+	mr->access		= access;
+	mr->length		= length;
+	mr->iova		= iova;
+	mr->va			= start;
+	mr->offset		= ib_umem_offset(umem);
+	mr->state		= RXE_MEM_STATE_VALID;
+	mr->type		= RXE_MEM_TYPE_MR;
 
 	return 0;
 
@@ -233,24 +233,24 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
 	return err;
 }
 
-int rxe_mem_init_fast(struct rxe_pd *pd,
-		      int max_pages, struct rxe_mem *mem)
+int rxe_mr_init_fast(struct rxe_pd *pd,
+		      int max_pages, struct rxe_mr *mr)
 {
 	int err;
 
-	rxe_mem_init(0, mem);
+	rxe_mr_init(0, mr);
 
 	/* In fastreg, we also set the rkey */
-	mem->ibmr.rkey = mem->ibmr.lkey;
+	mr->ibmr.rkey = mr->ibmr.lkey;
 
-	err = rxe_mem_alloc(mem, max_pages);
+	err = rxe_mr_alloc(mr, max_pages);
 	if (err)
 		goto err1;
 
-	mem->pd			= pd;
-	mem->max_buf		= max_pages;
-	mem->state		= RXE_MEM_STATE_FREE;
-	mem->type		= RXE_MEM_TYPE_MR;
+	mr->pd			= pd;
+	mr->max_buf		= max_pages;
+	mr->state		= RXE_MEM_STATE_FREE;
+	mr->type		= RXE_MEM_TYPE_MR;
 
 	return 0;
 
@@ -259,27 +259,27 @@ int rxe_mem_init_fast(struct rxe_pd *pd,
 }
 
 static void lookup_iova(
-	struct rxe_mem	*mem,
+	struct rxe_mr	*mr,
 	u64			iova,
 	int			*m_out,
 	int			*n_out,
 	size_t			*offset_out)
 {
-	size_t			offset = iova - mem->iova + mem->offset;
+	size_t			offset = iova - mr->iova + mr->offset;
 	int			map_index;
 	int			buf_index;
 	u64			length;
 
-	if (likely(mem->page_shift)) {
-		*offset_out = offset & mem->page_mask;
-		offset >>= mem->page_shift;
-		*n_out = offset & mem->map_mask;
-		*m_out = offset >> mem->map_shift;
+	if (likely(mr->page_shift)) {
+		*offset_out = offset & mr->page_mask;
+		offset >>= mr->page_shift;
+		*n_out = offset & mr->map_mask;
+		*m_out = offset >> mr->map_shift;
 	} else {
 		map_index = 0;
 		buf_index = 0;
 
-		length = mem->map[map_index]->buf[buf_index].size;
+		length = mr->map[map_index]->buf[buf_index].size;
 
 		while (offset >= length) {
 			offset -= length;
@@ -289,7 +289,7 @@ static void lookup_iova(
 				map_index++;
 				buf_index = 0;
 			}
-			length = mem->map[map_index]->buf[buf_index].size;
+			length = mr->map[map_index]->buf[buf_index].size;
 		}
 
 		*m_out = map_index;
@@ -298,48 +298,48 @@ static void lookup_iova(
 	}
 }
 
-void *iova_to_vaddr(struct rxe_mem *mem, u64 iova, int length)
+void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
 {
 	size_t offset;
 	int m, n;
 	void *addr;
 
-	if (mem->state != RXE_MEM_STATE_VALID) {
-		pr_warn("mem not in valid state\n");
+	if (mr->state != RXE_MEM_STATE_VALID) {
+		pr_warn("mr not in valid state\n");
 		addr = NULL;
 		goto out;
 	}
 
-	if (!mem->map) {
+	if (!mr->map) {
 		addr = (void *)(uintptr_t)iova;
 		goto out;
 	}
 
-	if (mem_check_range(mem, iova, length)) {
+	if (mr_check_range(mr, iova, length)) {
 		pr_warn("range violation\n");
 		addr = NULL;
 		goto out;
 	}
 
-	lookup_iova(mem, iova, &m, &n, &offset);
+	lookup_iova(mr, iova, &m, &n, &offset);
 
-	if (offset + length > mem->map[m]->buf[n].size) {
+	if (offset + length > mr->map[m]->buf[n].size) {
 		pr_warn("crosses page boundary\n");
 		addr = NULL;
 		goto out;
 	}
 
-	addr = (void *)(uintptr_t)mem->map[m]->buf[n].addr + offset;
+	addr = (void *)(uintptr_t)mr->map[m]->buf[n].addr + offset;
 
 out:
 	return addr;
 }
 
 /* copy data from a range (vaddr, vaddr+length-1) to or from
- * a mem object starting at iova. Compute incremental value of
- * crc32 if crcp is not zero. caller must hold a reference to mem
+ * a mr object starting at iova. Compute incremental value of
+ * crc32 if crcp is not zero. caller must hold a reference to mr
  */
-int rxe_mem_copy(struct rxe_mem *mem, u64 iova, void *addr, int length,
+int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		 enum copy_direction dir, u32 *crcp)
 {
 	int			err;
@@ -355,43 +355,43 @@ int rxe_mem_copy(struct rxe_mem *mem, u64 iova, void *addr, int length,
 	if (length == 0)
 		return 0;
 
-	if (mem->type == RXE_MEM_TYPE_DMA) {
+	if (mr->type == RXE_MEM_TYPE_DMA) {
 		u8 *src, *dest;
 
-		src  = (dir == to_mem_obj) ?
+		src  = (dir == to_mr_obj) ?
 			addr : ((void *)(uintptr_t)iova);
 
-		dest = (dir == to_mem_obj) ?
+		dest = (dir == to_mr_obj) ?
 			((void *)(uintptr_t)iova) : addr;
 
 		memcpy(dest, src, length);
 
 		if (crcp)
-			*crcp = rxe_crc32(to_rdev(mem->pd->ibpd.device),
+			*crcp = rxe_crc32(to_rdev(mr->pd->ibpd.device),
 					*crcp, dest, length);
 
 		return 0;
 	}
 
-	WARN_ON_ONCE(!mem->map);
+	WARN_ON_ONCE(!mr->map);
 
-	err = mem_check_range(mem, iova, length);
+	err = mr_check_range(mr, iova, length);
 	if (err) {
 		err = -EFAULT;
 		goto err1;
 	}
 
-	lookup_iova(mem, iova, &m, &i, &offset);
+	lookup_iova(mr, iova, &m, &i, &offset);
 
-	map	= mem->map + m;
+	map	= mr->map + m;
 	buf	= map[0]->buf + i;
 
 	while (length > 0) {
 		u8 *src, *dest;
 
 		va	= (u8 *)(uintptr_t)buf->addr + offset;
-		src  = (dir == to_mem_obj) ? addr : va;
-		dest = (dir == to_mem_obj) ? va : addr;
+		src  = (dir == to_mr_obj) ? addr : va;
+		dest = (dir == to_mr_obj) ? va : addr;
 
 		bytes	= buf->size - offset;
 
@@ -401,7 +401,7 @@ int rxe_mem_copy(struct rxe_mem *mem, u64 iova, void *addr, int length,
 		memcpy(dest, src, bytes);
 
 		if (crcp)
-			crc = rxe_crc32(to_rdev(mem->pd->ibpd.device),
+			crc = rxe_crc32(to_rdev(mr->pd->ibpd.device),
 					crc, dest, bytes);
 
 		length	-= bytes;
@@ -443,7 +443,7 @@ int copy_data(
 	struct rxe_sge		*sge	= &dma->sge[dma->cur_sge];
 	int			offset	= dma->sge_offset;
 	int			resid	= dma->resid;
-	struct rxe_mem		*mem	= NULL;
+	struct rxe_mr		*mr	= NULL;
 	u64			iova;
 	int			err;
 
@@ -456,8 +456,8 @@ int copy_data(
 	}
 
 	if (sge->length && (offset < sge->length)) {
-		mem = lookup_mem(pd, access, sge->lkey, lookup_local);
-		if (!mem) {
+		mr = lookup_mr(pd, access, sge->lkey, lookup_local);
+		if (!mr) {
 			err = -EINVAL;
 			goto err1;
 		}
@@ -467,9 +467,9 @@ int copy_data(
 		bytes = length;
 
 		if (offset >= sge->length) {
-			if (mem) {
-				rxe_drop_ref(mem);
-				mem = NULL;
+			if (mr) {
+				rxe_drop_ref(mr);
+				mr = NULL;
 			}
 			sge++;
 			dma->cur_sge++;
@@ -481,9 +481,9 @@ int copy_data(
 			}
 
 			if (sge->length) {
-				mem = lookup_mem(pd, access, sge->lkey,
+				mr = lookup_mr(pd, access, sge->lkey,
 						 lookup_local);
-				if (!mem) {
+				if (!mr) {
 					err = -EINVAL;
 					goto err1;
 				}
@@ -498,7 +498,7 @@ int copy_data(
 		if (bytes > 0) {
 			iova = sge->addr + offset;
 
-			err = rxe_mem_copy(mem, iova, addr, bytes, dir, crcp);
+			err = rxe_mr_copy(mr, iova, addr, bytes, dir, crcp);
 			if (err)
 				goto err2;
 
@@ -512,14 +512,14 @@ int copy_data(
 	dma->sge_offset = offset;
 	dma->resid	= resid;
 
-	if (mem)
-		rxe_drop_ref(mem);
+	if (mr)
+		rxe_drop_ref(mr);
 
 	return 0;
 
 err2:
-	if (mem)
-		rxe_drop_ref(mem);
+	if (mr)
+		rxe_drop_ref(mr);
 err1:
 	return err;
 }
@@ -557,31 +557,31 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
 	return 0;
 }
 
-/* (1) find the mem (mr or mw) corresponding to lkey/rkey
+/* (1) find the mr (mr or mw) corresponding to lkey/rkey
  *     depending on lookup_type
- * (2) verify that the (qp) pd matches the mem pd
- * (3) verify that the mem can support the requested access
- * (4) verify that mem state is valid
+ * (2) verify that the (qp) pd matches the mr pd
+ * (3) verify that the mr can support the requested access
+ * (4) verify that mr state is valid
  */
-struct rxe_mem *lookup_mem(struct rxe_pd *pd, int access, u32 key,
+struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 			   enum lookup_type type)
 {
-	struct rxe_mem *mem;
+	struct rxe_mr *mr;
 	struct rxe_dev *rxe = to_rdev(pd->ibpd.device);
 	int index = key >> 8;
 
-	mem = rxe_pool_get_index(&rxe->mr_pool, index);
-	if (!mem)
+	mr = rxe_pool_get_index(&rxe->mr_pool, index);
+	if (!mr)
 		return NULL;
 
-	if (unlikely((type == lookup_local && mem->lkey != key) ||
-		     (type == lookup_remote && mem->rkey != key) ||
-		     mem->pd != pd ||
-		     (access && !(access & mem->access)) ||
-		     mem->state != RXE_MEM_STATE_VALID)) {
-		rxe_drop_ref(mem);
-		mem = NULL;
+	if (unlikely((type == lookup_local && mr->lkey != key) ||
+		     (type == lookup_remote && mr->rkey != key) ||
+		     mr->pd != pd ||
+		     (access && !(access & mr->access)) ||
+		     mr->state != RXE_MEM_STATE_VALID)) {
+		rxe_drop_ref(mr);
+		mr = NULL;
 	}
 
-	return mem;
+	return mr;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index fbcbac52290b..ba002fed8051 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -77,15 +77,15 @@ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	},
 	[RXE_TYPE_MR] = {
 		.name		= "rxe-mr",
-		.size		= sizeof(struct rxe_mem),
-		.cleanup	= rxe_mem_cleanup,
+		.size		= sizeof(struct rxe_mr),
+		.cleanup	= rxe_mr_cleanup,
 		.flags		= RXE_POOL_INDEX,
 		.max_index	= RXE_MAX_MR_INDEX,
 		.min_index	= RXE_MIN_MR_INDEX,
 	},
 	[RXE_TYPE_MW] = {
 		.name		= "rxe-mw",
-		.size		= sizeof(struct rxe_mem),
+		.size		= sizeof(struct rxe_mr),
 		.flags		= RXE_POOL_INDEX,
 		.max_index	= RXE_MAX_MW_INDEX,
 		.min_index	= RXE_MIN_MW_INDEX,
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 34df2b55e650..c566372eebf8 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -492,7 +492,7 @@ static int fill_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 		} else {
 			err = copy_data(qp->pd, 0, &wqe->dma,
 					payload_addr(pkt), paylen,
-					from_mem_obj,
+					from_mr_obj,
 					&crc);
 			if (err)
 				return err;
@@ -624,7 +624,7 @@ int rxe_requester(void *arg)
 	if (wqe->mask & WR_REG_MASK) {
 		if (wqe->wr.opcode == IB_WR_LOCAL_INV) {
 			struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-			struct rxe_mem *rmr;
+			struct rxe_mr *rmr;
 
 			rmr = rxe_pool_get_index(&rxe->mr_pool,
 						 wqe->wr.ex.invalidate_rkey >> 8);
@@ -640,7 +640,7 @@ int rxe_requester(void *arg)
 			wqe->state = wqe_state_done;
 			wqe->status = IB_WC_SUCCESS;
 		} else if (wqe->wr.opcode == IB_WR_REG_MR) {
-			struct rxe_mem *rmr = to_rmr(wqe->wr.wr.reg.mr);
+			struct rxe_mr *rmr = to_rmr(wqe->wr.wr.reg.mr);
 
 			rmr->state = RXE_MEM_STATE_VALID;
 			rmr->access = wqe->wr.wr.reg.access;
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index c4a8195bf670..d54b5e7dad39 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -417,7 +417,7 @@ static enum resp_states check_length(struct rxe_qp *qp,
 static enum resp_states check_rkey(struct rxe_qp *qp,
 				   struct rxe_pkt_info *pkt)
 {
-	struct rxe_mem *mem = NULL;
+	struct rxe_mr *mr = NULL;
 	u64 va;
 	u32 rkey;
 	u32 resid;
@@ -456,18 +456,18 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	resid	= qp->resp.resid;
 	pktlen	= payload_size(pkt);
 
-	mem = lookup_mem(qp->pd, access, rkey, lookup_remote);
-	if (!mem) {
+	mr = lookup_mr(qp->pd, access, rkey, lookup_remote);
+	if (!mr) {
 		state = RESPST_ERR_RKEY_VIOLATION;
 		goto err;
 	}
 
-	if (unlikely(mem->state == RXE_MEM_STATE_FREE)) {
+	if (unlikely(mr->state == RXE_MEM_STATE_FREE)) {
 		state = RESPST_ERR_RKEY_VIOLATION;
 		goto err;
 	}
 
-	if (mem_check_range(mem, va, resid)) {
+	if (mr_check_range(mr, va, resid)) {
 		state = RESPST_ERR_RKEY_VIOLATION;
 		goto err;
 	}
@@ -495,12 +495,12 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 
 	WARN_ON_ONCE(qp->resp.mr);
 
-	qp->resp.mr = mem;
+	qp->resp.mr = mr;
 	return RESPST_EXECUTE;
 
 err:
-	if (mem)
-		rxe_drop_ref(mem);
+	if (mr)
+		rxe_drop_ref(mr);
 	return state;
 }
 
@@ -510,7 +510,7 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr,
 	int err;
 
 	err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma,
-			data_addr, data_len, to_mem_obj, NULL);
+			data_addr, data_len, to_mr_obj, NULL);
 	if (unlikely(err))
 		return (err == -ENOSPC) ? RESPST_ERR_LENGTH
 					: RESPST_ERR_MALFORMED_WQE;
@@ -525,8 +525,8 @@ static enum resp_states write_data_in(struct rxe_qp *qp,
 	int	err;
 	int data_len = payload_size(pkt);
 
-	err = rxe_mem_copy(qp->resp.mr, qp->resp.va, payload_addr(pkt),
-			   data_len, to_mem_obj, NULL);
+	err = rxe_mr_copy(qp->resp.mr, qp->resp.va, payload_addr(pkt),
+			   data_len, to_mr_obj, NULL);
 	if (err) {
 		rc = RESPST_ERR_RKEY_VIOLATION;
 		goto out;
@@ -548,7 +548,7 @@ static enum resp_states process_atomic(struct rxe_qp *qp,
 	u64 iova = atmeth_va(pkt);
 	u64 *vaddr;
 	enum resp_states ret;
-	struct rxe_mem *mr = qp->resp.mr;
+	struct rxe_mr *mr = qp->resp.mr;
 
 	if (mr->state != RXE_MEM_STATE_VALID) {
 		ret = RESPST_ERR_RKEY_VIOLATION;
@@ -727,8 +727,8 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 	if (!skb)
 		return RESPST_ERR_RNR;
 
-	err = rxe_mem_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt),
-			   payload, from_mem_obj, &icrc);
+	err = rxe_mr_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt),
+			   payload, from_mr_obj, &icrc);
 	if (err)
 		pr_err("Failed copying memory\n");
 
@@ -910,7 +910,7 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 			}
 
 			if (pkt->mask & RXE_IETH_MASK) {
-				struct rxe_mem *rmr;
+				struct rxe_mr *rmr;
 
 				wc->wc_flags |= IB_WC_WITH_INVALIDATE;
 				wc->ex.invalidate_rkey = ieth_rkey(pkt);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 1dbc69b86859..cac0f3f0c7c1 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -890,7 +890,7 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access)
 {
 	struct rxe_dev *rxe = to_rdev(ibpd->device);
 	struct rxe_pd *pd = to_rpd(ibpd);
-	struct rxe_mem *mr;
+	struct rxe_mr *mr;
 
 	mr = rxe_alloc(&rxe->mr_pool);
 	if (!mr)
@@ -898,7 +898,8 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access)
 
 	rxe_add_index(mr);
 	rxe_add_ref(pd);
-	rxe_mem_init_dma(pd, access, mr);
+
+	rxe_mr_init_dma(pd, access, mr);
 
 	return &mr->ibmr;
 }
@@ -912,7 +913,7 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	int err;
 	struct rxe_dev *rxe = to_rdev(ibpd->device);
 	struct rxe_pd *pd = to_rpd(ibpd);
-	struct rxe_mem *mr;
+	struct rxe_mr *mr;
 
 	mr = rxe_alloc(&rxe->mr_pool);
 	if (!mr) {
@@ -924,7 +925,7 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 
 	rxe_add_ref(pd);
 
-	err = rxe_mem_init_user(pd, start, length, iova,
+	err = rxe_mr_init_user(pd, start, length, iova,
 				access, udata, mr);
 	if (err)
 		goto err3;
@@ -941,7 +942,7 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 
 static int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 {
-	struct rxe_mem *mr = to_rmr(ibmr);
+	struct rxe_mr *mr = to_rmr(ibmr);
 
 	mr->state = RXE_MEM_STATE_ZOMBIE;
 	rxe_drop_ref(mr->pd);
@@ -955,7 +956,7 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 {
 	struct rxe_dev *rxe = to_rdev(ibpd->device);
 	struct rxe_pd *pd = to_rpd(ibpd);
-	struct rxe_mem *mr;
+	struct rxe_mr *mr;
 	int err;
 
 	if (mr_type != IB_MR_TYPE_MEM_REG)
@@ -971,7 +972,7 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 
 	rxe_add_ref(pd);
 
-	err = rxe_mem_init_fast(pd, max_num_sg, mr);
+	err = rxe_mr_init_fast(pd, max_num_sg, mr);
 	if (err)
 		goto err2;
 
@@ -987,7 +988,7 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 
 static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
 {
-	struct rxe_mem *mr = to_rmr(ibmr);
+	struct rxe_mr *mr = to_rmr(ibmr);
 	struct rxe_map *map;
 	struct rxe_phys_buf *buf;
 
@@ -1007,7 +1008,7 @@ static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
 static int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
 			 int sg_nents, unsigned int *sg_offset)
 {
-	struct rxe_mem *mr = to_rmr(ibmr);
+	struct rxe_mr *mr = to_rmr(ibmr);
 	int n;
 
 	mr->nbuf = 0;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index c664c7f36ab5..f3f1a58e894b 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -183,7 +183,7 @@ struct resp_res {
 			struct sk_buff	*skb;
 		} atomic;
 		struct {
-			struct rxe_mem	*mr;
+			struct rxe_mr	*mr;
 			u64		va_org;
 			u32		rkey;
 			u32		length;
@@ -210,7 +210,7 @@ struct rxe_resp_info {
 
 	/* RDMA read / atomic only */
 	u64			va;
-	struct rxe_mem		*mr;
+	struct rxe_mr		*mr;
 	u32			resid;
 	u32			rkey;
 	u32			length;
@@ -289,14 +289,14 @@ struct rxe_qp {
 	struct execute_work	cleanup_work;
 };
 
-enum rxe_mem_state {
+enum rxe_mr_state {
 	RXE_MEM_STATE_ZOMBIE,
 	RXE_MEM_STATE_INVALID,
 	RXE_MEM_STATE_FREE,
 	RXE_MEM_STATE_VALID,
 };
 
-enum rxe_mem_type {
+enum rxe_mr_type {
 	RXE_MEM_TYPE_NONE,
 	RXE_MEM_TYPE_DMA,
 	RXE_MEM_TYPE_MR,
@@ -315,12 +315,9 @@ struct rxe_map {
 	struct rxe_phys_buf	buf[RXE_BUF_PER_MAP];
 };
 
-struct rxe_mem {
+struct rxe_mr {
 	struct rxe_pool_entry	pelem;
-	union {
-		struct ib_mr		ibmr;
-		struct ib_mw		ibmw;
-	};
+	struct ib_mr		ibmr;
 
 	struct rxe_pd		*pd;
 	struct ib_umem		*umem;
@@ -328,8 +325,8 @@ struct rxe_mem {
 	u32			lkey;
 	u32			rkey;
 
-	enum rxe_mem_state	state;
-	enum rxe_mem_type	type;
+	enum rxe_mr_state	state;
+	enum rxe_mr_type	type;
 	u64			va;
 	u64			iova;
 	size_t			length;
@@ -455,15 +452,15 @@ static inline struct rxe_cq *to_rcq(struct ib_cq *cq)
 	return cq ? container_of(cq, struct rxe_cq, ibcq) : NULL;
 }
 
-static inline struct rxe_mem *to_rmr(struct ib_mr *mr)
+static inline struct rxe_mr *to_rmr(struct ib_mr *mr)
 {
-	return mr ? container_of(mr, struct rxe_mem, ibmr) : NULL;
+	return mr ? container_of(mr, struct rxe_mr, ibmr) : NULL;
 }
 
-static inline struct rxe_mem *to_rmw(struct ib_mw *mw)
-{
-	return mw ? container_of(mw, struct rxe_mem, ibmw) : NULL;
-}
+//static inline struct rxe_mw *to_rmw(struct ib_mw *mw)
+//{
+	//return mw ? container_of(mw, struct rxe_mw, ibmw) : NULL;
+//}
 
 int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name);
 
-- 
2.25.1



* [PATCH 06/20] Added a basic rxe_mw struct
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (4 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 05/20] Separated MR and MW objects Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 07/20] Implemented functional alloc_mw and dealloc_mw APIs Bob Pearson
                   ` (13 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Created a basic rxe_mw structure, which later patches extend.

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_verbs.h | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index f3f1a58e894b..6a4486893b86 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -347,6 +347,20 @@ struct rxe_mr {
 	struct rxe_map		**map;
 };
 
+enum rxe_mw_state {
+	RXE_MW_STATE_INVALID,
+	RXE_MW_STATE_FREE,
+	RXE_MW_STATE_VALID,
+};
+
+struct rxe_mw {
+	struct rxe_pool_entry	pelem;
+	struct ib_mw		ibmw;
+	struct rxe_qp		*qp;	/* type 2B only */
+	struct rxe_mr		*mr;
+	enum rxe_mw_state	state;
+};
+
 struct rxe_mc_grp {
 	struct rxe_pool_entry	pelem;
 	spinlock_t		mcg_lock; /* guard group */
@@ -457,10 +471,10 @@ static inline struct rxe_mr *to_rmr(struct ib_mr *mr)
 	return mr ? container_of(mr, struct rxe_mr, ibmr) : NULL;
 }
 
-//static inline struct rxe_mw *to_rmw(struct ib_mw *mw)
-//{
-	//return mw ? container_of(mw, struct rxe_mw, ibmw) : NULL;
-//}
+static inline struct rxe_mw *to_rmw(struct ib_mw *mw)
+{
+	return mw ? container_of(mw, struct rxe_mw, ibmw) : NULL;
+}
 
 int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name);
 
-- 
2.25.1



* [PATCH 07/20] Implemented functional alloc_mw and dealloc_mw APIs
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (5 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 06/20] Added a basic rxe_mw struct Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 08/20] Added a stubbed bind_mw API Bob Pearson
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Created basic functional alloc_mw and dealloc_mw implementations and
changed the parameters in rxe_param.h so that MWs can actually be
created. This change supports running the user-space test cases for
these APIs.
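
The rkey chosen in rxe_alloc_mw() places the pool index in the upper 24
bits and a random byte in the low 8 bits, so the owning object can later
be recovered from an rkey with (rkey >> 8). Restated as a sketch (the
helper itself is hypothetical, not part of the patch):

	/* hypothetical helper restating the rkey layout used below */
	static u32 make_mw_rkey(u32 index, u8 key)
	{
		return (index << 8) | key; /* index in bits 31..8, key in bits 7..0 */
	}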

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe.c       |  1 +
 drivers/infiniband/sw/rxe/rxe_mw.c    | 57 +++++++++++++++++++++++++--
 drivers/infiniband/sw/rxe/rxe_param.h | 10 +++--
 drivers/infiniband/sw/rxe/rxe_pool.c  |  2 +-
 drivers/infiniband/sw/rxe/rxe_verbs.h |  4 ++
 5 files changed, 65 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 907203afbd99..25bd25371f8e 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -79,6 +79,7 @@ static void rxe_init_device_param(struct rxe_dev *rxe)
 	rxe->attr.max_cq			= RXE_MAX_CQ;
 	rxe->attr.max_cqe			= (1 << RXE_MAX_LOG_CQE) - 1;
 	rxe->attr.max_mr			= RXE_MAX_MR;
+	rxe->attr.max_mw			= RXE_MAX_MW;
 	rxe->attr.max_pd			= RXE_MAX_PD;
 	rxe->attr.max_qp_rd_atom		= RXE_MAX_QP_RD_ATOM;
 	rxe->attr.max_res_rd_atom		= RXE_MAX_RES_RD_ATOM;
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 6139dc9d8dd8..50cd451751b8 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -35,15 +35,64 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 
+/* place holder alloc and dealloc routines
+ * need to add cross references between qp and mr with mw
+ * and cleanup when one side is deleted. Enough to make
+ * verbs function correctly for now */
 struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 			   struct ib_udata *udata)
 {
-	pr_err_once("rxe_alloc_mw: not implemented\n");
-	return ERR_PTR(-ENOSYS);
+	struct rxe_pd *pd = to_rpd(ibpd);
+	struct rxe_dev *rxe = to_rdev(ibpd->device);
+	struct rxe_mw *mw;
+	u32 rkey;
+	u8 key;
+
+	if (unlikely((type != IB_MW_TYPE_1) &&
+		     (type != IB_MW_TYPE_2)))
+		return ERR_PTR(-EINVAL);
+
+	rxe_add_ref(pd);
+
+	mw = rxe_alloc(&rxe->mw_pool);
+	if (!mw) {
+		rxe_drop_ref(pd);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	/* pick a random key part as a starting point */
+	rxe_add_index(mw);
+	get_random_bytes(&key, sizeof(key));
+	rkey = mw->pelem.index << 8 | key;
+
+	spin_lock_init(&mw->lock);
+	mw->qp			= NULL;
+	mw->mr			= NULL;
+	mw->addr		= 0;
+	mw->length		= 0;
+	mw->ibmw.pd		= ibpd;
+	mw->ibmw.type		= type;
+	mw->ibmw.rkey		= rkey;
+	mw->state		= (type == IB_MW_TYPE_2) ?
+					RXE_MW_STATE_FREE :
+					RXE_MW_STATE_VALID;
+
+	return &mw->ibmw;
 }
 
 int rxe_dealloc_mw(struct ib_mw *ibmw)
 {
-	pr_err_once("rxe_dealloc_mw: not implemented\n");
-	return -ENOSYS;
+	struct rxe_mw *mw = to_rmw(ibmw);
+	struct rxe_pd *pd = to_rpd(ibmw->pd);
+	unsigned long flags;
+
+	spin_lock_irqsave(&mw->lock, flags);
+	mw->state = RXE_MW_STATE_INVALID;
+	spin_unlock_irqrestore(&mw->lock, flags);
+
+	rxe_drop_ref(pd);
+	rxe_drop_index(mw);
+	rxe_drop_ref(mw);
+
+	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
index 2f381aeafcb5..7f914dde98a7 100644
--- a/drivers/infiniband/sw/rxe/rxe_param.h
+++ b/drivers/infiniband/sw/rxe/rxe_param.h
@@ -85,7 +85,8 @@ enum rxe_device_param {
 	RXE_MAX_SGE_RD			= 32,
 	RXE_MAX_CQ			= 16384,
 	RXE_MAX_LOG_CQE			= 15,
-	RXE_MAX_MR			= 256 * 1024,
+	RXE_MAX_MR			= 0x40000,
+	RXE_MAX_MW			= 0x40000,
 	RXE_MAX_PD			= 0x7ffc,
 	RXE_MAX_QP_RD_ATOM		= 128,
 	RXE_MAX_RES_RD_ATOM		= 0x3f000,
@@ -114,9 +115,10 @@ enum rxe_device_param {
 	RXE_MAX_SRQ_INDEX		= 0x00040000,
 
 	RXE_MIN_MR_INDEX		= 0x00000001,
-	RXE_MAX_MR_INDEX		= 0x00040000,
-	RXE_MIN_MW_INDEX		= 0x00040001,
-	RXE_MAX_MW_INDEX		= 0x00060000,
+	RXE_MAX_MR_INDEX		= RXE_MIN_MR_INDEX + RXE_MAX_MR - 1,
+	RXE_MIN_MW_INDEX		= RXE_MIN_MR_INDEX + RXE_MAX_MR,
+	RXE_MAX_MW_INDEX		= RXE_MIN_MW_INDEX + RXE_MAX_MW - 1,
+
 	RXE_MAX_PKT_PER_ACK		= 64,
 
 	RXE_MAX_UNACKED_PSNS		= 128,
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index ba002fed8051..32b86a9979e6 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -85,7 +85,7 @@ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	},
 	[RXE_TYPE_MW] = {
 		.name		= "rxe-mw",
-		.size		= sizeof(struct rxe_mr),
+		.size		= sizeof(struct rxe_mw),
 		.flags		= RXE_POOL_INDEX,
 		.max_index	= RXE_MAX_MW_INDEX,
 		.min_index	= RXE_MIN_MW_INDEX,
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 6a4486893b86..ebe4157fbcdd 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -358,7 +358,11 @@ struct rxe_mw {
 	struct ib_mw		ibmw;
 	struct rxe_qp		*qp;	/* type 2B only */
 	struct rxe_mr		*mr;
+	spinlock_t		lock;
 	enum rxe_mw_state	state;
+	u32			access;
+	u64			addr;
+	u64			length;
 };
 
 struct rxe_mc_grp {
-- 
2.25.1



* [PATCH 08/20] Added a stubbed bind_mw API.
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (6 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 07/20] Implemented functional alloc_mw and dealloc_mw APIs Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 09/20] Fixed error logic in rxe_req.c Bob Pearson
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Added code to implement the path to rxe_bind_mw, which is still a stub,
so the ibv_bind_mw verb can now be called.

Added bind_mw work requests to the opcodes file and handled the new
local operation in rxe_req.c. Renamed WR_REG_MASK to WR_LOCAL_MASK
since it is used to identify local operations in general.
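
For reference (a sketch only, not part of this patch), the user-space
path that now reaches the driver looks roughly like this; mr, mw, qp,
buf and buf_len are assumed to have been set up earlier, and with
rxe_bind_mw() still a stub the bind completes with an error status:

	struct ibv_mw_bind bind = {
		.wr_id      = 1,
		.send_flags = IBV_SEND_SIGNALED,
		.bind_info  = {
			.mr              = mr,
			.addr            = (uintptr_t)buf,
			.length          = buf_len,
			.mw_access_flags = IBV_ACCESS_REMOTE_READ |
					   IBV_ACCESS_REMOTE_WRITE,
		},
	};

	int ret = ibv_bind_mw(qp, mw, &bind);	/* type 1 bind path */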

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_comp.c   |  3 +
 drivers/infiniband/sw/rxe/rxe_loc.h    |  1 +
 drivers/infiniband/sw/rxe/rxe_mw.c     |  6 ++
 drivers/infiniband/sw/rxe/rxe_opcode.c | 11 +++-
 drivers/infiniband/sw/rxe/rxe_req.c    | 76 +++++++++++++++++++++-----
 drivers/infiniband/sw/rxe/rxe_task.h   |  2 +
 6 files changed, 84 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index b9b8c4e115f4..caa8ad990337 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -130,6 +130,7 @@ static enum ib_wc_opcode wr_to_wc_opcode(enum ib_wr_opcode opcode)
 	case IB_WR_RDMA_READ_WITH_INV:		return IB_WC_RDMA_READ;
 	case IB_WR_LOCAL_INV:			return IB_WC_LOCAL_INV;
 	case IB_WR_REG_MR:			return IB_WC_REG_MR;
+	case IB_WR_BIND_MW:			return IB_WC_BIND_MW;
 
 	default:
 		return 0xff;
@@ -787,6 +788,8 @@ int rxe_completer(void *arg)
 	 */
 	WARN_ON_ONCE(skb);
 	rxe_drop_ref(qp);
+	// TODO this seems plain backwards
+	// EAGAIN normally means call me again
 	return -EAGAIN;
 
 done:
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 42375af68f48..02df9bf76d1a 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -140,6 +140,7 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
 struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 			   struct ib_udata *udata);
 int rxe_dealloc_mw(struct ib_mw *ibmw);
+int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 
 /* rxe_net.c */
 void rxe_loopback(struct sk_buff *skb);
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 50cd451751b8..230263c6d3e5 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -96,3 +96,9 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 
 	return 0;
 }
+
+int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+{
+	pr_err("rxe_bind_mw: not implemented\n");
+	return -ENOSYS;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index 4cf11063e0b5..d2f2092f0be5 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -114,13 +114,20 @@ struct rxe_wr_opcode_info rxe_wr_opcode_info[] = {
 	[IB_WR_LOCAL_INV]				= {
 		.name	= "IB_WR_LOCAL_INV",
 		.mask	= {
-			[IB_QPT_RC]	= WR_REG_MASK,
+			[IB_QPT_RC]	= WR_LOCAL_MASK,
 		},
 	},
 	[IB_WR_REG_MR]					= {
 		.name	= "IB_WR_REG_MR",
 		.mask	= {
-			[IB_QPT_RC]	= WR_REG_MASK,
+			[IB_QPT_RC]	= WR_LOCAL_MASK,
+		},
+	},
+	[IB_WR_BIND_MW]					= {
+		.name	= "IB_WR_BIND_MW",
+		.mask	= {
+			[IB_QPT_RC]	= WR_LOCAL_MASK,
+			[IB_QPT_UC]	= WR_LOCAL_MASK,
 		},
 	},
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index c566372eebf8..b402eb82b402 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -586,6 +586,8 @@ static void update_state(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 int rxe_requester(void *arg)
 {
 	struct rxe_qp *qp = (struct rxe_qp *)arg;
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	struct rxe_mr *rmr;
 	struct rxe_pkt_info pkt;
 	struct sk_buff *skb;
 	struct rxe_send_wqe *wqe;
@@ -596,9 +598,17 @@ int rxe_requester(void *arg)
 	int ret;
 	struct rxe_send_wqe rollback_wqe;
 	u32 rollback_psn;
+	int entered;
 
 	rxe_add_ref(qp);
 
+	// this code is 'guaranteed' to never be entered more
+	// than once. Check to make sure that this is the case
+	entered = atomic_inc_return(&qp->req.task.entered);
+	if (entered > 1) {
+		pr_err("rxe_requester: entered %d times\n", entered);
+	}
+
 next_wqe:
 	if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR))
 		goto exit;
@@ -621,13 +631,11 @@ int rxe_requester(void *arg)
 	if (unlikely(!wqe))
 		goto exit;
 
-	if (wqe->mask & WR_REG_MASK) {
-		if (wqe->wr.opcode == IB_WR_LOCAL_INV) {
-			struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-			struct rxe_mr *rmr;
-
+	if (wqe->mask & WR_LOCAL_MASK) {
+		switch (wqe->wr.opcode) {
+		case IB_WR_LOCAL_INV:
 			rmr = rxe_pool_get_index(&rxe->mr_pool,
-						 wqe->wr.ex.invalidate_rkey >> 8);
+				 wqe->wr.ex.invalidate_rkey >> 8);
 			if (!rmr) {
 				pr_err("No mr for key %#x\n",
 				       wqe->wr.ex.invalidate_rkey);
@@ -635,13 +643,16 @@ int rxe_requester(void *arg)
 				wqe->status = IB_WC_MW_BIND_ERR;
 				goto exit;
 			}
+			// TODO this can race with external access
+			// to the MR in rxe_resp unless you can know
+			// that all accesses are done
 			rmr->state = RXE_MEM_STATE_FREE;
 			rxe_drop_ref(rmr);
 			wqe->state = wqe_state_done;
 			wqe->status = IB_WC_SUCCESS;
-		} else if (wqe->wr.opcode == IB_WR_REG_MR) {
-			struct rxe_mr *rmr = to_rmr(wqe->wr.wr.reg.mr);
-
+			break;
+		case IB_WR_REG_MR:
+			rmr = to_rmr(wqe->wr.wr.reg.mr);
 			rmr->state = RXE_MEM_STATE_VALID;
 			rmr->access = wqe->wr.wr.reg.access;
 			rmr->lkey = wqe->wr.wr.reg.key;
@@ -649,7 +660,21 @@ int rxe_requester(void *arg)
 			rmr->iova = wqe->wr.wr.reg.mr->iova;
 			wqe->state = wqe_state_done;
 			wqe->status = IB_WC_SUCCESS;
-		} else {
+			break;
+		case IB_WR_BIND_MW:
+			ret = rxe_bind_mw(qp, wqe);
+			if (ret) {
+				wqe->state = wqe_state_done;
+				wqe->status = IB_WC_MW_BIND_ERR;
+				// TODO err: will change status
+				// probably should not
+				goto err;
+			}
+			wqe->state = wqe_state_done;
+			wqe->status = IB_WC_SUCCESS;
+			break;
+		default:
+			pr_err("rxe_requester: unexpected LOCAL WR opcode = %d\n", wqe->wr.opcode);
 			goto exit;
 		}
 		if ((wqe->wr.send_flags & IB_SEND_SIGNALED) ||
@@ -704,9 +729,10 @@ int rxe_requester(void *arg)
 						       qp->req.wqe_index);
 			wqe->state = wqe_state_done;
 			wqe->status = IB_WC_SUCCESS;
+			// TODO why?? why not just treat the same as a
+			// successful wqe and go to next wqe?
 			__rxe_do_task(&qp->comp.task);
-			rxe_drop_ref(qp);
-			return 0;
+			goto again;
 		}
 		payload = mtu;
 	}
@@ -750,12 +776,36 @@ int rxe_requester(void *arg)
 
 	goto next_wqe;
 
+	// TODO this can be cleaned up
 err:
+	/* we come here if an error occurred while processing
+	 * a send wqe. The completer will put the qp in error
+	 * state and no more wqes will be processed unless
+	 * the qp is cleaned up and restarted. We do not want
+	 * to be called again */
 	wqe->status = IB_WC_LOC_PROT_ERR;
 	wqe->state = wqe_state_error;
 	__rxe_do_task(&qp->comp.task);
+	ret = -EAGAIN;
+	goto done;
 
 exit:
+	/* we come here if either there are no more wqes in the send
+	 * queue or we are blocked waiting for some resource or event.
+	 * The current wqe will be restarted or new wqe started when
+	 * there is work to do. */
+	ret = -EAGAIN;
+	goto done;
+
+again:
+	/* we come here if we are done with the current wqe but want to
+	 * get called again. Mostly we loop back to next wqe so should
+	 * be all one way or the other */
+	ret = 0;
+	goto done;
+
+done:
+	atomic_dec(&qp->req.task.entered);
 	rxe_drop_ref(qp);
-	return -EAGAIN;
+	return ret;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_task.h b/drivers/infiniband/sw/rxe/rxe_task.h
index 08ff42d451c6..e33806c6f5a4 100644
--- a/drivers/infiniband/sw/rxe/rxe_task.h
+++ b/drivers/infiniband/sw/rxe/rxe_task.h
@@ -55,6 +55,8 @@ struct rxe_task {
 	int			ret;
 	char			name[16];
 	bool			destroyed;
+	// debug code, delete me when done
+	atomic_t		entered;
 };
 
 /*
-- 
2.25.1



* [PATCH 09/20] Fixed error logic in rxe_req.c
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (7 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 08/20] Added a stubbed bind_mw API Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 10/20] Extended pools to support both keys and indices Bob Pearson
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Fixed the returned status so that each error return can set its own
status. The bind_mw verb now reports the correct (error) status in the
work completion.

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_req.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index b402eb82b402..e0564d7b0ff7 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -740,12 +740,14 @@ int rxe_requester(void *arg)
 	skb = init_req_packet(qp, wqe, opcode, payload, &pkt);
 	if (unlikely(!skb)) {
 		pr_err("qp#%d Failed allocating skb\n", qp_num(qp));
+		wqe->status = IB_WC_LOC_PROT_ERR;
 		goto err;
 	}
 
 	if (fill_packet(qp, wqe, &pkt, skb, payload)) {
 		pr_debug("qp#%d Error during fill packet\n", qp_num(qp));
 		kfree_skb(skb);
+		wqe->status = IB_WC_LOC_PROT_ERR;
 		goto err;
 	}
 
@@ -769,6 +771,7 @@ int rxe_requester(void *arg)
 			goto exit;
 		}
 
+		wqe->status = IB_WC_LOC_PROT_ERR;	// ?? FIXME
 		goto err;
 	}
 
@@ -783,8 +786,10 @@ int rxe_requester(void *arg)
 	 * state and no more wqes will be processed unless
 	 * the qp is cleaned up and restarted. We do not want
 	 * to be called again */
-	wqe->status = IB_WC_LOC_PROT_ERR;
 	wqe->state = wqe_state_error;
+	// ?? we want to force the qp into error state before
+	// anyone else has a chance to process another wqe but
+	// this could collide with an already running completer
 	__rxe_do_task(&qp->comp.task);
 	ret = -EAGAIN;
 	goto done;
-- 
2.25.1



* [PATCH 10/20] Extended pools to support both keys and indices
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (8 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 09/20] Fixed error logic in rxe_req.c Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 11/20] Gave MRs and MWs " Bob Pearson
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Allowed both indices and keys to exist at the same time for objects in
the pools. Previously an object was limited to one or the other. This
makes it possible for the keys on MWs to change.
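
A minimal usage sketch of what the split buys, assuming a pool that was
initialized with both RXE_POOL_INDEX and RXE_POOL_KEY set and that
index and rkey already hold valid values:

    struct rxe_mw *mw;

    /* resolve a handle that was passed up to user space */
    mw = rxe_pool_get_index(&rxe->mw_pool, index);
    if (mw)
        rxe_drop_ref(mw);

    /* resolve an rkey carried in a packet */
    mw = rxe_pool_get_key(&rxe->mw_pool, &rkey);
    if (mw)
        rxe_drop_ref(mw);

Both lookups take a reference that the caller must drop when done.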

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_pool.c | 73 ++++++++++++++--------------
 drivers/infiniband/sw/rxe/rxe_pool.h | 32 +++++++-----
 2 files changed, 58 insertions(+), 47 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 32b86a9979e6..e157bf945175 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -177,18 +177,18 @@ static int rxe_pool_init_index(struct rxe_pool *pool, u32 max, u32 min)
 		goto out;
 	}
 
-	pool->max_index = max;
-	pool->min_index = min;
+	pool->index.max_index = max;
+	pool->index.min_index = min;
 
 	size = BITS_TO_LONGS(max - min + 1) * sizeof(long);
-	pool->table = kmalloc(size, GFP_KERNEL);
-	if (!pool->table) {
+	pool->index.table = kmalloc(size, GFP_KERNEL);
+	if (!pool->index.table) {
 		err = -ENOMEM;
 		goto out;
 	}
 
-	pool->table_size = size;
-	bitmap_zero(pool->table, max - min + 1);
+	pool->index.table_size = size;
+	bitmap_zero(pool->index.table, max - min + 1);
 
 out:
 	return err;
@@ -210,7 +210,8 @@ int rxe_pool_init(
 	pool->max_elem		= max_elem;
 	pool->elem_size		= ALIGN(size, RXE_POOL_ALIGN);
 	pool->flags		= rxe_type_info[type].flags;
-	pool->tree		= RB_ROOT;
+	pool->index.tree	= RB_ROOT;
+	pool->key.tree		= RB_ROOT;
 	pool->cleanup		= rxe_type_info[type].cleanup;
 
 	atomic_set(&pool->num_elem, 0);
@@ -228,8 +229,8 @@ int rxe_pool_init(
 	}
 
 	if (rxe_type_info[type].flags & RXE_POOL_KEY) {
-		pool->key_offset = rxe_type_info[type].key_offset;
-		pool->key_size = rxe_type_info[type].key_size;
+		pool->key.key_offset = rxe_type_info[type].key_offset;
+		pool->key.key_size = rxe_type_info[type].key_size;
 	}
 
 	pool->state = RXE_POOL_STATE_VALID;
@@ -243,7 +244,7 @@ static void rxe_pool_release(struct kref *kref)
 	struct rxe_pool *pool = container_of(kref, struct rxe_pool, ref_cnt);
 
 	pool->state = RXE_POOL_STATE_INVALID;
-	kfree(pool->table);
+	kfree(pool->index.table);
 }
 
 static void rxe_pool_put(struct rxe_pool *pool)
@@ -268,27 +269,27 @@ void rxe_pool_cleanup(struct rxe_pool *pool)
 static u32 alloc_index(struct rxe_pool *pool)
 {
 	u32 index;
-	u32 range = pool->max_index - pool->min_index + 1;
+	u32 range = pool->index.max_index - pool->index.min_index + 1;
 
-	index = find_next_zero_bit(pool->table, range, pool->last);
+	index = find_next_zero_bit(pool->index.table, range, pool->index.last);
 	if (index >= range)
-		index = find_first_zero_bit(pool->table, range);
+		index = find_first_zero_bit(pool->index.table, range);
 
 	WARN_ON_ONCE(index >= range);
-	set_bit(index, pool->table);
-	pool->last = index;
-	return index + pool->min_index;
+	set_bit(index, pool->index.table);
+	pool->index.last = index;
+	return index + pool->index.min_index;
 }
 
 static void insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
 {
-	struct rb_node **link = &pool->tree.rb_node;
+	struct rb_node **link = &pool->index.tree.rb_node;
 	struct rb_node *parent = NULL;
 	struct rxe_pool_entry *elem;
 
 	while (*link) {
 		parent = *link;
-		elem = rb_entry(parent, struct rxe_pool_entry, node);
+		elem = rb_entry(parent, struct rxe_pool_entry, index_node);
 
 		if (elem->index == new->index) {
 			pr_warn("element already exists!\n");
@@ -301,25 +302,25 @@ static void insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
 			link = &(*link)->rb_right;
 	}
 
-	rb_link_node(&new->node, parent, link);
-	rb_insert_color(&new->node, &pool->tree);
+	rb_link_node(&new->index_node, parent, link);
+	rb_insert_color(&new->index_node, &pool->index.tree);
 out:
 	return;
 }
 
 static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 {
-	struct rb_node **link = &pool->tree.rb_node;
+	struct rb_node **link = &pool->key.tree.rb_node;
 	struct rb_node *parent = NULL;
 	struct rxe_pool_entry *elem;
 	int cmp;
 
 	while (*link) {
 		parent = *link;
-		elem = rb_entry(parent, struct rxe_pool_entry, node);
+		elem = rb_entry(parent, struct rxe_pool_entry, key_node);
 
-		cmp = memcmp((u8 *)elem + pool->key_offset,
-			     (u8 *)new + pool->key_offset, pool->key_size);
+		cmp = memcmp((u8 *)elem + pool->key.key_offset,
+			     (u8 *)new + pool->key.key_offset, pool->key.key_size);
 
 		if (cmp == 0) {
 			pr_warn("key already exists!\n");
@@ -332,8 +333,8 @@ static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 			link = &(*link)->rb_right;
 	}
 
-	rb_link_node(&new->node, parent, link);
-	rb_insert_color(&new->node, &pool->tree);
+	rb_link_node(&new->key_node, parent, link);
+	rb_insert_color(&new->key_node, &pool->key.tree);
 out:
 	return;
 }
@@ -345,7 +346,7 @@ void rxe_add_key(void *arg, void *key)
 	unsigned long flags;
 
 	write_lock_irqsave(&pool->pool_lock, flags);
-	memcpy((u8 *)elem + pool->key_offset, key, pool->key_size);
+	memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size);
 	insert_key(pool, elem);
 	write_unlock_irqrestore(&pool->pool_lock, flags);
 }
@@ -357,7 +358,7 @@ void rxe_drop_key(void *arg)
 	unsigned long flags;
 
 	write_lock_irqsave(&pool->pool_lock, flags);
-	rb_erase(&elem->node, &pool->tree);
+	rb_erase(&elem->key_node, &pool->key.tree);
 	write_unlock_irqrestore(&pool->pool_lock, flags);
 }
 
@@ -380,8 +381,8 @@ void rxe_drop_index(void *arg)
 	unsigned long flags;
 
 	write_lock_irqsave(&pool->pool_lock, flags);
-	clear_bit(elem->index - pool->min_index, pool->table);
-	rb_erase(&elem->node, &pool->tree);
+	clear_bit(elem->index - pool->index.min_index, pool->index.table);
+	rb_erase(&elem->index_node, &pool->index.tree);
 	write_unlock_irqrestore(&pool->pool_lock, flags);
 }
 
@@ -485,10 +486,10 @@ void *rxe_pool_get_index(struct rxe_pool *pool, u32 index)
 	if (pool->state != RXE_POOL_STATE_VALID)
 		goto out;
 
-	node = pool->tree.rb_node;
+	node = pool->index.tree.rb_node;
 
 	while (node) {
-		elem = rb_entry(node, struct rxe_pool_entry, node);
+		elem = rb_entry(node, struct rxe_pool_entry, index_node);
 
 		if (elem->index > index)
 			node = node->rb_left;
@@ -517,13 +518,13 @@ void *rxe_pool_get_key(struct rxe_pool *pool, void *key)
 	if (pool->state != RXE_POOL_STATE_VALID)
 		goto out;
 
-	node = pool->tree.rb_node;
+	node = pool->key.tree.rb_node;
 
 	while (node) {
-		elem = rb_entry(node, struct rxe_pool_entry, node);
+		elem = rb_entry(node, struct rxe_pool_entry, key_node);
 
-		cmp = memcmp((u8 *)elem + pool->key_offset,
-			     key, pool->key_size);
+		cmp = memcmp((u8 *)elem + pool->key.key_offset,
+			     key, pool->key.key_size);
 
 		if (cmp > 0)
 			node = node->rb_left;
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 2f2cff1cbe43..bd684df6d847 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -84,8 +84,11 @@ struct rxe_pool_entry {
 	struct kref		ref_cnt;
 	struct list_head	list;
 
-	/* only used if indexed or keyed */
-	struct rb_node		node;
+	/* only used if keyed */
+	struct rb_node		key_node;
+
+	/* only used if indexed */
+	struct rb_node		index_node;
 	u32			index;
 };
 
@@ -102,15 +105,22 @@ struct rxe_pool {
 	unsigned int		max_elem;
 	atomic_t		num_elem;
 
-	/* only used if indexed or keyed */
-	struct rb_root		tree;
-	unsigned long		*table;
-	size_t			table_size;
-	u32			max_index;
-	u32			min_index;
-	u32			last;
-	size_t			key_offset;
-	size_t			key_size;
+	/* only used if indexed */
+	struct {
+		struct rb_root		tree;
+		unsigned long		*table;
+		size_t			table_size;
+		u32			last;
+		u32			max_index;
+		u32			min_index;
+	} index;
+
+	/* only used if keyed */
+	struct {
+		struct rb_root		tree;
+		size_t			key_offset;
+		size_t			key_size;
+	} key;
 };
 
 /* initialize slab caches for managed objects */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 11/20] Gave MRs and MWs both keys and indices
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (9 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 10/20] Extended pools to support both keys and indices Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 12/20] Cleanup after git pull Bob Pearson
                   ` (8 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Finished decoupling indices and keys for MW and MR objects. User space
can now refer to an object by index and the kernel can look up the
object by its l/rkey.

Tweaked the user/kernel ABI for rxe WQEs to use indices instead of rkeys to
identify MWs and MRs.

Type 1 MWs can now be bound with the ibv_bind_mw API.
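
The matching provider changes are not in this series. Purely as a
hypothetical sketch, a provider that saved the indices returned in
rxe_reg_mr_resp and rxe_alloc_mw_resp might fill the new umw fields of
a bind WR roughly like this (only the struct and field names come from
this patch; the variables are placeholders):

    /* hypothetical user space provider code, not part of this series */
    struct rxe_send_wr wr = {};

    wr.wr.umw.mw_index = mw_index;      /* from rxe_alloc_mw_resp */
    wr.wr.umw.mr_index = mr_index;      /* from rxe_reg_mr_resp */
    wr.wr.umw.addr     = (uintptr_t)buf;
    wr.wr.umw.length   = len;
    wr.wr.umw.rkey     = new_rkey;      /* key the remote peer will use */
    wr.wr.umw.access   = IBV_ACCESS_REMOTE_WRITE;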

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |   3 +
 drivers/infiniband/sw/rxe/rxe_mr.c    |  55 +++++++-----
 drivers/infiniband/sw/rxe/rxe_mw.c    | 116 ++++++++++++++++++++++----
 drivers/infiniband/sw/rxe/rxe_pool.c  |  30 ++++---
 drivers/infiniband/sw/rxe/rxe_pool.h  |   2 +-
 drivers/infiniband/sw/rxe/rxe_verbs.c |  28 ++++++-
 include/uapi/rdma/rdma_user_rxe.h     |  34 +++++++-
 7 files changed, 212 insertions(+), 56 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 02df9bf76d1a..87d323b1ba07 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -98,6 +98,8 @@ struct rxe_mmap_info *rxe_create_mmap_info(struct rxe_dev *dev, u32 size,
 int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
 
 /* rxe_mr.c */
+void rxe_set_mr_lkey(struct rxe_mr *mr);
+
 enum copy_direction {
 	to_mr_obj,
 	from_mr_obj,
@@ -137,6 +139,7 @@ void rxe_mr_cleanup(struct rxe_pool_entry *arg);
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
 
 /* rxe_mw.c */
+void rxe_set_mw_rkey(struct rxe_mw *mw);
 struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 			   struct ib_udata *udata);
 int rxe_dealloc_mw(struct ib_mw *ibmw);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 0606f04e1d18..ba4e33227633 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -34,6 +34,23 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 
+/* choose a unique non zero random number for lkey */
+void rxe_set_mr_lkey(struct rxe_mr *mr)
+{
+	int ret;
+	u32 lkey;
+
+next_lkey:
+	get_random_bytes(&lkey, sizeof(lkey));
+	lkey &= 0x7fffffff;
+	if (unlikely(lkey == 0))
+		goto next_lkey;
+	ret = rxe_add_key(mr, &lkey);
+	if (unlikely(ret == -EAGAIN))
+		goto next_lkey;
+}
+
+#if 0
 /*
  * lfsr (linear feedback shift register) with period 255
  */
@@ -50,6 +67,7 @@ static u8 rxe_get_key(void)
 
 	return key;
 }
+#endif
 
 int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 {
@@ -76,16 +94,16 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 
 static void rxe_mr_init(int access, struct rxe_mr *mr)
 {
-	u32 lkey = mr->pelem.index << 8 | rxe_get_key();
-	u32 rkey = (access & IB_ACCESS_REMOTE) ? lkey : 0;
+	rxe_set_mr_lkey(mr);
 
-	if (mr->pelem.pool->type == RXE_TYPE_MR) {
-		mr->ibmr.lkey		= lkey;
-		mr->ibmr.rkey		= rkey;
-	}
+	if (access & IB_ACCESS_REMOTE)
+		mr->ibmr.rkey = mr->ibmr.lkey;
+	else
+		mr->ibmr.rkey = 0;
 
-	mr->lkey		= lkey;
-	mr->rkey		= rkey;
+	// TODO we shouldn't carry two copies
+	mr->lkey		= mr->ibmr.lkey;
+	mr->rkey		= mr->ibmr.rkey;
 	mr->state		= RXE_MEM_STATE_INVALID;
 	mr->type		= RXE_MEM_TYPE_NONE;
 	mr->map_shift		= ilog2(RXE_BUF_PER_MAP);
@@ -155,9 +173,9 @@ void rxe_mr_init_dma(struct rxe_pd *pd,
 	mr->type		= RXE_MEM_TYPE_DMA;
 }
 
-int rxe_mr_init_user(struct rxe_pd *pd, u64 start,
-		      u64 length, u64 iova, int access, struct ib_udata *udata,
-		      struct rxe_mr *mr)
+int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length,
+		     u64 iova, int access, struct ib_udata *udata,
+		     struct rxe_mr *mr)
 {
 	struct rxe_map		**map;
 	struct rxe_phys_buf	*buf = NULL;
@@ -233,15 +251,15 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start,
 	return err;
 }
 
-int rxe_mr_init_fast(struct rxe_pd *pd,
-		      int max_pages, struct rxe_mr *mr)
+int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages,
+		     struct rxe_mr *mr)
 {
 	int err;
 
 	rxe_mr_init(0, mr);
 
 	/* In fastreg, we also set the rkey */
-	mr->ibmr.rkey = mr->ibmr.lkey;
+	mr->rkey = mr->ibmr.rkey = mr->ibmr.lkey;
 
 	err = rxe_mr_alloc(mr, max_pages);
 	if (err)
@@ -564,18 +582,17 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
  * (4) verify that mr state is valid
  */
 struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
-			   enum lookup_type type)
+			 enum lookup_type type)
 {
 	struct rxe_mr *mr;
 	struct rxe_dev *rxe = to_rdev(pd->ibpd.device);
-	int index = key >> 8;
 
-	mr = rxe_pool_get_index(&rxe->mr_pool, index);
+	mr = rxe_pool_get_key(&rxe->mr_pool, &key);
 	if (!mr)
 		return NULL;
 
-	if (unlikely((type == lookup_local && mr->lkey != key) ||
-		     (type == lookup_remote && mr->rkey != key) ||
+	if (unlikely((type == lookup_local && mr->ibmr.lkey != key) ||
+		     (type == lookup_remote && mr->ibmr.rkey != key) ||
 		     mr->pd != pd ||
 		     (access && !(access & mr->access)) ||
 		     mr->state != RXE_MEM_STATE_VALID)) {
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 230263c6d3e5..b45a04efa4a0 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -35,49 +35,95 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 
+/* choose a unique non zero random number for rkey */
+void rxe_set_mw_rkey(struct rxe_mw *mw)
+{
+	int ret;
+	u32 rkey;
+
+next_rkey:
+	get_random_bytes(&rkey, sizeof(rkey));
+	if (unlikely(rkey == 0))
+		goto next_rkey;
+	rkey |= 0x80000000;
+	ret = rxe_add_key(mw, &rkey);
+	if (unlikely(ret == -EAGAIN))
+		goto next_rkey;
+}
+
 /* place holder alloc and dealloc routines
- * need to add cross references between qp and mr with mw
+ * TODO add cross references between qp and mr with mw
  * and cleanup when one side is deleted. Enough to make
  * verbs function correctly for now */
 struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 			   struct ib_udata *udata)
 {
+	int ret;
+	struct rxe_mw *mw;
 	struct rxe_pd *pd = to_rpd(ibpd);
 	struct rxe_dev *rxe = to_rdev(ibpd->device);
-	struct rxe_mw *mw;
-	u32 rkey;
-	u8 key;
+	struct rxe_alloc_mw_resp __user *uresp;
+
+	if (udata) {
+		if (udata->outlen < sizeof(*uresp)) {
+			ret = -EINVAL;
+			goto err1;
+		}
+	}
 
 	if (unlikely((type != IB_MW_TYPE_1) &&
-		     (type != IB_MW_TYPE_2)))
-		return ERR_PTR(-EINVAL);
+		     (type != IB_MW_TYPE_2))) {
+		ret = -EINVAL;
+		goto err1;
+	}
 
 	rxe_add_ref(pd);
 
 	mw = rxe_alloc(&rxe->mw_pool);
 	if (!mw) {
 		rxe_drop_ref(pd);
-		return ERR_PTR(-ENOMEM);
+		ret = -ENOMEM;
+		goto err1;
 	}
 
-	/* pick a random key part as a starting point */
 	rxe_add_index(mw);
-	get_random_bytes(&key, sizeof(key));
-	rkey = mw->pelem.index << 8 | key;
+	rxe_set_mw_rkey(mw);
+
+	pr_info("rxe_alloc_mw: index = 0x%08x, rkey = 0x%08x\n",
+			mw->pelem.index, mw->ibmw.rkey);
 
 	spin_lock_init(&mw->lock);
+
+	if (type == IB_MW_TYPE_2) {
+		mw->state		= RXE_MW_STATE_FREE;
+	} else {
+		mw->state		= RXE_MW_STATE_VALID;
+	}
+
 	mw->qp			= NULL;
 	mw->mr			= NULL;
 	mw->addr		= 0;
 	mw->length		= 0;
         mw->ibmw.pd		= ibpd;
         mw->ibmw.type		= type;
-        mw->ibmw.rkey		= rkey;
-	mw->state		= (type == IB_MW_TYPE_2) ?
-					RXE_MW_STATE_FREE :
-					RXE_MW_STATE_VALID;
+
+	if (udata) {
+		uresp = udata->outbuf;
+		if (copy_to_user(&uresp->index, &mw->pelem.index,
+				 sizeof(u32))) {
+			ret = -EFAULT;
+			goto err2;
+		}
+	}
 
 	return &mw->ibmw;
+err2:
+	rxe_drop_key(mw);
+	rxe_drop_index(mw);
+	rxe_drop_ref(mw);
+	rxe_drop_ref(pd);
+err1:
+	return ERR_PTR(ret);
 }
 
 int rxe_dealloc_mw(struct ib_mw *ibmw)
@@ -90,8 +136,9 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 	mw->state = RXE_MW_STATE_INVALID;
 	spin_unlock_irqrestore(&mw->lock, flags);
 
-	rxe_drop_ref(pd);
+	rxe_drop_key(mw);
 	rxe_drop_index(mw);
+	rxe_drop_ref(pd);
 	rxe_drop_ref(mw);
 
 	return 0;
@@ -99,6 +146,41 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 {
-	pr_err("rxe_bind_mw: not implemented\n");
-	return -ENOSYS;
+	struct rxe_mw *mw;
+	struct rxe_mr *mr;
+
+	pr_info("rxe_bind_mw: called\n");
+
+	if (qp->is_user) {
+	} else {
+		mw = to_rmw(wqe->wr.wr.kmw.ibmw);
+		mr = to_rmr(wqe->wr.wr.kmw.ibmr);
+	}
+
+#if 0
+	wqe->wr.wr.bind_mw
+	__aligned_u64	addr;
+	__aligned_u64	length;
+	__u32	mr_rkey;
+	__u32	mw_rkey;
+	__u32	rkey;
+	__u32	access;
+
+	mw
+	struct rxe_pool_entry	pelem;			// alloc
+	struct ib_mw		ibmw;			// alloc
+		struct ib_device	*device;	// alloc
+		struct ib_pd		*pd;		// alloc
+		struct ib_uobject	*uobject;	// alloc
+		u32			rkey;
+		enum ib_mw_type         type;		// alloc
+	struct rxe_qp		*qp;			// bind
+	struct rxe_mem		*mr;			// bind
+	spinlock_t		lock;			// alloc
+	enum rxe_mw_state	state;			// all
+	u32			access;			// bind
+	u64			addr;			// bind
+	u64			length;			// bind
+#endif
+	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index e157bf945175..35e9646e104c 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -34,10 +34,6 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 
-/* info about object pools
- * note that mr and mw share a single index space
- * so that one can map an lkey to the correct type of object
- */
 struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_UC] = {
 		.name		= "rxe-uc",
@@ -79,16 +75,22 @@ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 		.name		= "rxe-mr",
 		.size		= sizeof(struct rxe_mr),
 		.cleanup	= rxe_mr_cleanup,
-		.flags		= RXE_POOL_INDEX,
+		.flags		= RXE_POOL_INDEX
+				| RXE_POOL_KEY,
 		.max_index	= RXE_MAX_MR_INDEX,
 		.min_index	= RXE_MIN_MR_INDEX,
+		.key_offset	= offsetof(struct rxe_mr, ibmr.lkey),
+		.key_size	= sizeof(u32),
 	},
 	[RXE_TYPE_MW] = {
 		.name		= "rxe-mw",
 		.size		= sizeof(struct rxe_mw),
-		.flags		= RXE_POOL_INDEX,
+		.flags		= RXE_POOL_INDEX
+				| RXE_POOL_KEY,
 		.max_index	= RXE_MAX_MW_INDEX,
 		.min_index	= RXE_MIN_MW_INDEX,
+		.key_offset	= offsetof(struct rxe_mw, ibmw.rkey),
+		.key_size	= sizeof(u32),
 	},
 	[RXE_TYPE_MC_GRP] = {
 		.name		= "rxe-mc_grp",
@@ -308,8 +310,9 @@ static void insert_index(struct rxe_pool *pool, struct rxe_pool_entry *new)
 	return;
 }
 
-static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
+static int insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 {
+	int ret;
 	struct rb_node **link = &pool->key.tree.rb_node;
 	struct rb_node *parent = NULL;
 	struct rxe_pool_entry *elem;
@@ -323,7 +326,7 @@ static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 			     (u8 *)new + pool->key.key_offset, pool->key.key_size);
 
 		if (cmp == 0) {
-			pr_warn("key already exists!\n");
+			ret = -EAGAIN;
 			goto out;
 		}
 
@@ -335,20 +338,25 @@ static void insert_key(struct rxe_pool *pool, struct rxe_pool_entry *new)
 
 	rb_link_node(&new->key_node, parent, link);
 	rb_insert_color(&new->key_node, &pool->key.tree);
+
+	ret = 0;
 out:
-	return;
+	return ret;
 }
 
-void rxe_add_key(void *arg, void *key)
+int rxe_add_key(void *arg, void *key)
 {
+	int ret;
 	struct rxe_pool_entry *elem = arg;
 	struct rxe_pool *pool = elem->pool;
 	unsigned long flags;
 
 	write_lock_irqsave(&pool->pool_lock, flags);
 	memcpy((u8 *)elem + pool->key.key_offset, key, pool->key.key_size);
-	insert_key(pool, elem);
+	ret = insert_key(pool, elem);
 	write_unlock_irqrestore(&pool->pool_lock, flags);
+
+	return ret;
 }
 
 void rxe_drop_key(void *arg)
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index bd684df6d847..0ba811456f79 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -156,7 +156,7 @@ void rxe_drop_index(void *elem);
 /* assign a key to a keyed object and insert object into
  *  pool's rb tree
  */
-void rxe_add_key(void *elem, void *key);
+int rxe_add_key(void *elem, void *key);
 
 /* remove elem from rb tree */
 void rxe_drop_key(void *elem);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index cac0f3f0c7c1..29191cacfc56 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -911,9 +911,20 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 				     int access, struct ib_udata *udata)
 {
 	int err;
+	struct rxe_mr *mr;
 	struct rxe_dev *rxe = to_rdev(ibpd->device);
 	struct rxe_pd *pd = to_rpd(ibpd);
-	struct rxe_mr *mr;
+	struct rxe_reg_mr_resp __user *uresp = NULL;
+
+	if (udata) {
+		if (udata->outlen < sizeof(*uresp)) {
+			err = -EINVAL;
+			goto err2;
+		}
+		uresp = udata->outbuf;
+	}
+
+	rxe_add_ref(pd);
 
 	mr = rxe_alloc(&rxe->mr_pool);
 	if (!mr) {
@@ -923,19 +934,28 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 
 	rxe_add_index(mr);
 
-	rxe_add_ref(pd);
-
 	err = rxe_mr_init_user(pd, start, length, iova,
 				access, udata, mr);
 	if (err)
 		goto err3;
 
+	pr_info("rxe_reg_user_mr: index = 0x%08x, rkey = 0x%08x\n",
+			mr->pelem.index, mr->ibmr.rkey);
+
+	if (uresp) {
+		if (copy_to_user(&uresp->index, &mr->pelem.index,
+				 sizeof(uresp->index))) {
+			err = -EFAULT;
+			goto err3;
+		}
+	}
+
 	return &mr->ibmr;
 
 err3:
-	rxe_drop_ref(pd);
 	rxe_drop_index(mr);
 	rxe_drop_ref(mr);
+	rxe_drop_ref(pd);
 err2:
 	return ERR_PTR(err);
 }
diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h
index f88867d85c3f..c1e84cd69c37 100644
--- a/include/uapi/rdma/rdma_user_rxe.h
+++ b/include/uapi/rdma/rdma_user_rxe.h
@@ -96,12 +96,28 @@ struct rxe_send_wr {
 		struct {
 			__aligned_u64	addr;
 			__aligned_u64	length;
-			__u32	mr_rkey;
-			__u32	mw_rkey;
+			__u32	mr_index;
+			__u32   pad1;
+			__u32	mw_index;
+			__u32   pad2;
 			__u32	rkey;
 			__u32	access;
-		} bind_mw;
-		/* reg is only used by the kernel and is not part of the uapi */
+		} umw;
+		/* below are only used by the kernel */
+		struct {
+			__aligned_u64	addr;
+			__aligned_u64	length;
+			union {
+				struct ib_mr	*ibmr;
+				__aligned_u64   reserved1;
+			};
+			union {
+				struct ib_mw	*ibmw;
+				__aligned_u64   reserved2;
+			};
+			__u32	rkey;
+			__u32	access;
+		} kmw;
 		struct {
 			union {
 				struct ib_mr *mr;
@@ -183,4 +199,14 @@ struct rxe_modify_srq_cmd {
 	__aligned_u64 mmap_info_addr;
 };
 
+struct rxe_reg_mr_resp {
+	__u32 index;
+	__u32 reserved;
+};
+
+struct rxe_alloc_mw_resp {
+	__u32 index;
+	__u32 reserved;
+};
+
 #endif /* RDMA_USER_RXE_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 12/20] Cleanup after git pull
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (10 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 11/20] Gave MRs and MWs " Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 13/20] add debug print statements Bob Pearson
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

An experiment with rebase made a tiny mess, which I cleaned up. This
one slipped through.

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 87d323b1ba07..03eb74947d62 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -105,7 +105,7 @@ enum copy_direction {
 	from_mr_obj,
 };
 
-int rxe_mr_init_dma(struct rxe_pd *pd,
+void rxe_mr_init_dma(struct rxe_pd *pd,
 		     int access, struct rxe_mr *mr);
 
 int rxe_mr_init_user(struct rxe_pd *pd, u64 start,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 13/20] add debug print statements
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (11 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 12/20] Cleanup after git pull Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 14/20] Addresses an issue with hardened user copy Bob Pearson
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Added some debug prints to help out. They will go away later.

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_verbs.c | 25 ++++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 29191cacfc56..b91364ba2c68 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -172,6 +172,8 @@ static int rxe_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
 	struct rxe_dev *rxe = to_rdev(ibpd->device);
 	struct rxe_pd *pd = to_rpd(ibpd);
 
+	pr_info("rxe_alloc_pd: called\n");
+
 	return rxe_add_to_pool(&rxe->pd_pool, &pd->pelem);
 }
 
@@ -179,6 +181,8 @@ static void rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
 {
 	struct rxe_pd *pd = to_rpd(ibpd);
 
+	pr_info("rxe_dealloc_pd: called\n");
+
 	rxe_drop_ref(pd);
 }
 
@@ -410,6 +414,8 @@ static struct ib_qp *rxe_create_qp(struct ib_pd *ibpd,
 	struct rxe_qp *qp;
 	struct rxe_create_qp_resp __user *uresp = NULL;
 
+	pr_info("rxe_create_qp: called\n");
+
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp))
 			return ERR_PTR(-EINVAL);
@@ -457,6 +463,8 @@ static int rxe_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 	struct rxe_dev *rxe = to_rdev(ibqp->device);
 	struct rxe_qp *qp = to_rqp(ibqp);
 
+	pr_info("rxe_modify_qp: called\n");
+
 	err = rxe_qp_chk_attr(rxe, qp, attr, mask);
 	if (err)
 		goto err1;
@@ -476,6 +484,8 @@ static int rxe_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 {
 	struct rxe_qp *qp = to_rqp(ibqp);
 
+	pr_info("rxe_query_qp: called\n");
+
 	rxe_qp_to_init(qp, init);
 	rxe_qp_to_attr(qp, attr, mask);
 
@@ -486,6 +496,8 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 {
 	struct rxe_qp *qp = to_rqp(ibqp);
 
+	pr_info("rxe_destroy_qp: called\n");
+
 	rxe_qp_destroy(qp);
 	rxe_drop_index(qp);
 	rxe_drop_ref(qp);
@@ -782,6 +794,8 @@ static int rxe_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
 	struct rxe_cq *cq = to_rcq(ibcq);
 	struct rxe_create_cq_resp __user *uresp = NULL;
 
+	pr_info("rxe_create_cq: called\n");
+
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp))
 			return -EINVAL;
@@ -807,6 +821,8 @@ static void rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 {
 	struct rxe_cq *cq = to_rcq(ibcq);
 
+	pr_info("rxe_destroy_cq: called\n");
+
 	rxe_cq_disable(cq);
 
 	rxe_drop_ref(cq);
@@ -846,6 +862,8 @@ static int rxe_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 	struct rxe_cqe *cqe;
 	unsigned long flags;
 
+	pr_info("rxe_poll_cq: called\n");
+
 	spin_lock_irqsave(&cq->cq_lock, flags);
 	for (i = 0; i < num_entries; i++) {
 		cqe = queue_head(cq->queue);
@@ -916,6 +934,8 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	struct rxe_pd *pd = to_rpd(ibpd);
 	struct rxe_reg_mr_resp __user *uresp = NULL;
 
+	pr_info("rxe_reg_user_mr: called\n");
+
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp)) {
 			err = -EINVAL;
@@ -939,9 +959,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	if (err)
 		goto err3;
 
-	pr_info("rxe_reg_user_mr: index = 0x%08x, rkey = 0x%08x\n",
-			mr->pelem.index, mr->ibmr.rkey);
-
 	if (uresp) {
 		if (copy_to_user(&uresp->index, &mr->pelem.index,
 				 sizeof(uresp->index))) {
@@ -964,6 +981,8 @@ static int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
 
+	pr_info("rxe_dereg_user_mr: called\n");
+
 	mr->state = RXE_MEM_STATE_ZOMBIE;
 	rxe_drop_ref(mr->pd);
 	rxe_drop_index(mr);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 14/20] Addresses an issue with hardened user copy
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (12 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 13/20] add debug print statements Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 15/20] Fixed a dumb bug Bob Pearson
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Copying to user space from the stack instead of from the slab cache
cured a kernel oops that was troubling me.
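
For context: with CONFIG_HARDENED_USERCOPY the kernel checks any
copy_to_user() whose source lies inside a slab object against that
cache's usercopy whitelist, and the qp here presumably comes from a
driver-private cache with no whitelisted region covering qp_num.
Bouncing the value through a stack variable sidesteps the check. The
general shape of the fix, with obj and field as placeholders:

    /* before: source points into a slab object; hardened usercopy
     * rejects it if the cache has no whitelisted region there
     */
    if (copy_to_user(ubuf, &obj->field, sizeof(obj->field)))
        return -EFAULT;

    /* after: a stack local is not subject to the slab whitelist */
    u32 field = obj->field;

    if (copy_to_user(ubuf, &field, sizeof(field)))
        return -EFAULT;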

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/core/uverbs_std_types_qp.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_std_types_qp.c b/drivers/infiniband/core/uverbs_std_types_qp.c
index 3bf8dcdfe7eb..2f8b14003b95 100644
--- a/drivers/infiniband/core/uverbs_std_types_qp.c
+++ b/drivers/infiniband/core/uverbs_std_types_qp.c
@@ -98,6 +98,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)(
 	struct ib_device *device;
 	u64 user_handle;
 	int ret;
+	int qp_num;
 
 	ret = uverbs_copy_from_or_zero(&cap, attrs,
 			       UVERBS_ATTR_CREATE_QP_CAP);
@@ -293,9 +294,10 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)(
 	if (ret)
 		return ret;
 
+	/* copy from stack to avoid whitelisting issues */
+	qp_num = qp->qp_num;
 	ret = uverbs_copy_to(attrs, UVERBS_ATTR_CREATE_QP_RESP_QP_NUM,
-			     &qp->qp_num,
-			     sizeof(qp->qp_num));
+			     &qp_num, sizeof(qp_num));
 
 	return ret;
 err_put:
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 15/20] Fixed a dumb bug
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (13 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 14/20] Addresses an issue with hardened user copy Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 16/20] Implemented stubbed invalidate APIs Bob Pearson
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Added code to prevent infinite loops when choosing random l/rkeys.
Added a drop_key in MR dereg (fixing the root cause of the loops).
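
A possible follow-on, sketched here only and not part of this patch,
would be to report the give-up case to the caller so the verb can fail
cleanly instead of leaving the object without a key:

    /* sketch: same bounded retry, but the caller sees the failure */
    static int rxe_set_mr_lkey(struct rxe_mr *mr)
    {
        u32 lkey;
        int tries = 0;

        do {
            get_random_bytes(&lkey, sizeof(lkey));
            lkey &= 0x7fffffff;     /* high bit clear marks an MR */
            if (likely(lkey && rxe_add_key(mr, &lkey) == 0))
                return 0;
        } while (tries++ < 10);

        return -ENOKEY;
    }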

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_mr.c    | 22 +++++++++++-----------
 drivers/infiniband/sw/rxe/rxe_mw.c    | 24 +++++++++++++-----------
 drivers/infiniband/sw/rxe/rxe_verbs.c |  1 +
 3 files changed, 25 insertions(+), 22 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index ba4e33227633..533b02fc2d0e 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -34,20 +34,20 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 
-/* choose a unique non zero random number for lkey */
+/* choose a unique non zero random number for lkey
+ * use high order bit to indicate MR vs MW */
 void rxe_set_mr_lkey(struct rxe_mr *mr)
 {
-	int ret;
 	u32 lkey;
-
-next_lkey:
-	get_random_bytes(&lkey, sizeof(lkey));
-	lkey &= 0x7fffffff;
-	if (unlikely(lkey == 0))
-		goto next_lkey;
-	ret = rxe_add_key(mr, &lkey);
-	if (unlikely(ret == -EAGAIN))
-		goto next_lkey;
+	int tries = 0;
+
+	do {
+		get_random_bytes(&lkey, sizeof(lkey));
+		lkey &= 0x7fffffff;
+		if (likely(lkey && (rxe_add_key(mr, &lkey) == 0)))
+			return;
+	} while (tries++ < 10);
+	pr_err("rxe_set_mr_lkey: unable to get random lkey\n");
 }
 
 #if 0
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index b45a04efa4a0..a0ff2543d0cd 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -35,22 +35,24 @@
 #include "rxe.h"
 #include "rxe_loc.h"
 
-/* choose a unique non zero random number for rkey */
+/* choose a unique non zero random number for rkey
+ * use high order bit to indicate MR vs MW */
 void rxe_set_mw_rkey(struct rxe_mw *mw)
 {
-	int ret;
 	u32 rkey;
-
-next_rkey:
-	get_random_bytes(&rkey, sizeof(rkey));
-	if (unlikely(rkey == 0))
-		goto next_rkey;
-	rkey |= 0x80000000;
-	ret = rxe_add_key(mw, &rkey);
-	if (unlikely(ret == -EAGAIN))
-		goto next_rkey;
+	int tries = 0;
+
+	do {
+		get_random_bytes(&rkey, sizeof(rkey));
+		rkey |= 0x80000000;
+		if (likely((rkey & 0x7fffffff) &&
+			   (rxe_add_key(mw, &rkey) == 0)))
+			return;
+	} while (tries++ < 10);
+	pr_err("rxe_set_mw_rkey: unable to get random rkey\n");
 }
 
+
 /* place holder alloc and dealloc routines
  * TODO add cross references between qp and mr with mw
  * and cleanup when one side is deleted. Enough to make
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index b91364ba2c68..476d90e3f91f 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -986,6 +986,7 @@ static int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 	mr->state = RXE_MEM_STATE_ZOMBIE;
 	rxe_drop_ref(mr->pd);
 	rxe_drop_index(mr);
+	rxe_drop_key(mr);
 	rxe_drop_ref(mr);
 	return 0;
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 16/20] Implemented stubbed invalidate APIs
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (14 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 15/20] Fixed a dumb bug Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 17/20] Implemented functional " Bob Pearson
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Fixed a bug in rxe_req.c that killed the last WC in error. Added a
teardown/cleanup routine for MWs. The invalidate routines are still
stubs. Some error path code for both MR and MW still needs cleanup.
Added atomics to detect reentrancy in the tasklets for now; they will
go away later.
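
The teardown routine is the pool cleanup callback, which the pool code
invokes once, when the last reference to the object is dropped; moving
rxe_drop_index()/rxe_drop_key() there means every unwind path releases
them exactly once. The MW side, as added below:

    void rxe_mw_cleanup(struct rxe_pool_entry *arg)
    {
        struct rxe_mw *mw = container_of(arg, typeof(*mw), pelem);

        rxe_drop_index(mw);
        rxe_drop_key(mw);
    }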

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_comp.c  | 18 ++++--
 drivers/infiniband/sw/rxe/rxe_loc.h   |  3 +
 drivers/infiniband/sw/rxe/rxe_mr.c    | 17 ++++-
 drivers/infiniband/sw/rxe/rxe_mw.c    | 33 +++++++---
 drivers/infiniband/sw/rxe/rxe_pool.c  | 11 ++++
 drivers/infiniband/sw/rxe/rxe_req.c   | 91 +++++++++++++++------------
 drivers/infiniband/sw/rxe/rxe_resp.c  | 57 +++++++++++++----
 drivers/infiniband/sw/rxe/rxe_verbs.c |  4 --
 8 files changed, 162 insertions(+), 72 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index caa8ad990337..d2a094621486 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -563,12 +563,20 @@ int rxe_completer(void *arg)
 	struct sk_buff *skb = NULL;
 	struct rxe_pkt_info *pkt = NULL;
 	enum comp_state state;
+	int entered;
 
 	rxe_add_ref(qp);
 
+	// this code is 'guaranteed' to never be entered more
+	// than once. Check to make sure that this is the case
+	entered = atomic_inc_return(&qp->comp.task.entered);
+	if (entered > 1) {
+		pr_err("rxe_completer: entered %d times\n", entered);
+	}
+
 	if (!qp->valid || qp->req.state == QP_STATE_ERROR ||
 	    qp->req.state == QP_STATE_RESET) {
-		rxe_drain_resp_pkts(qp, qp->valid &&
+			rxe_drain_resp_pkts(qp, qp->valid &&
 				    qp->req.state == QP_STATE_ERROR);
 		goto exit;
 	}
@@ -782,14 +790,14 @@ int rxe_completer(void *arg)
 		}
 	}
 
+	/* these are the same. need to merge them TODO */
 exit:
 	/* we come here if we are done with processing and want the task to
-	 * exit from the loop calling us
+	 * exit from the loop calling us -- to call us again later
 	 */
 	WARN_ON_ONCE(skb);
+	atomic_dec(&qp->comp.task.entered);
 	rxe_drop_ref(qp);
-	// TODO this seems plain backwards
-	// EAGAIN normally means call me again
 	return -EAGAIN;
 
 done:
@@ -797,6 +805,8 @@ int rxe_completer(void *arg)
 	 * us again to see if there is anything else to do
 	 */
 	WARN_ON_ONCE(skb);
+	atomic_dec(&qp->comp.task.entered);
 	rxe_drop_ref(qp);
 	return 0;
+
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 03eb74947d62..1ddd1b5721d8 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -137,6 +137,7 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
 void rxe_mr_cleanup(struct rxe_pool_entry *arg);
 
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
+int rxe_invalidate_mr(struct rxe_mr *mr, int remote);
 
 /* rxe_mw.c */
 void rxe_set_mw_rkey(struct rxe_mw *mw);
@@ -144,6 +145,8 @@ struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 			   struct ib_udata *udata);
 int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
+void rxe_mw_cleanup(struct rxe_pool_entry *arg);
+int rxe_invalidate_mw(struct rxe_mw *mw, int remote);
 
 /* rxe_net.c */
 void rxe_loopback(struct sk_buff *skb);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 533b02fc2d0e..bebcce06e804 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -47,7 +47,7 @@ void rxe_set_mr_lkey(struct rxe_mr *mr)
 		if (likely(lkey && (rxe_add_key(mr, &lkey) == 0)))
 			return;
 	} while (tries++ < 10);
-	pr_err("rxe_set_mr_lkey: unable to get random lkey\n");
+	pr_err("unable to get random key for mr\n");
 }
 
 #if 0
@@ -114,7 +114,8 @@ void rxe_mr_cleanup(struct rxe_pool_entry *arg)
 	struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
 	int i;
 
-	ib_umem_release(mr->umem);
+	if (mr->umem)
+		ib_umem_release(mr->umem);
 
 	if (mr->map) {
 		for (i = 0; i < mr->num_map; i++)
@@ -122,6 +123,9 @@ void rxe_mr_cleanup(struct rxe_pool_entry *arg)
 
 		kfree(mr->map);
 	}
+
+	rxe_drop_index(mr);
+	rxe_drop_key(mr);
 }
 
 static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
@@ -602,3 +606,12 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 
 	return mr;
 }
+
+int rxe_invalidate_mr(struct rxe_mr *mr, int remote)
+{
+	// more TODO here, can fail
+
+	mr->state = RXE_MEM_STATE_FREE;
+
+	return 0;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index a0ff2543d0cd..7092045a2691 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -49,7 +49,7 @@ void rxe_set_mw_rkey(struct rxe_mw *mw)
 			   (rxe_add_key(mw, &rkey) == 0)))
 			return;
 	} while (tries++ < 10);
-	pr_err("rxe_set_mw_rkey: unable to get random rkey\n");
+	pr_err("unable to get random rkey for mw\n");
 }
 
 
@@ -91,9 +91,6 @@ struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 	rxe_add_index(mw);
 	rxe_set_mw_rkey(mw);
 
-	pr_info("rxe_alloc_mw: index = 0x%08x, rkey = 0x%08x\n",
-			mw->pelem.index, mw->ibmw.rkey);
-
 	spin_lock_init(&mw->lock);
 
 	if (type == IB_MW_TYPE_2) {
@@ -120,8 +117,6 @@ struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 
 	return &mw->ibmw;
 err2:
-	rxe_drop_key(mw);
-	rxe_drop_index(mw);
 	rxe_drop_ref(mw);
 	rxe_drop_ref(pd);
 err1:
@@ -138,8 +133,6 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 	mw->state = RXE_MW_STATE_INVALID;
 	spin_unlock_irqrestore(&mw->lock, flags);
 
-	rxe_drop_key(mw);
-	rxe_drop_index(mw);
 	rxe_drop_ref(pd);
 	rxe_drop_ref(mw);
 
@@ -151,8 +144,6 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 	struct rxe_mw *mw;
 	struct rxe_mr *mr;
 
-	pr_info("rxe_bind_mw: called\n");
-
 	if (qp->is_user) {
 	} else {
 		mw = to_rmw(wqe->wr.wr.kmw.ibmw);
@@ -186,3 +177,25 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 #endif
 	return 0;
 }
+
+void rxe_mw_cleanup(struct rxe_pool_entry *arg)
+{
+	struct rxe_mw *mw = container_of(arg, typeof(*mw), pelem);
+
+	rxe_drop_index(mw);
+	rxe_drop_key(mw);
+}
+
+int rxe_invalidate_mw(struct rxe_mw *mw, int remote)
+{
+	/* type 1 MWs don't support invalidate */
+	if (mw->ibmw.type == IB_MW_TYPE_1) {
+		pr_err("attempt to %s-invalidate a type 1 mw\n",
+			remote ? "send" : "local");
+		return -EINVAL;
+	}
+
+	mw->state = RXE_MEM_STATE_FREE;
+
+	return 0;
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index 35e9646e104c..df3e2a514ce3 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -85,6 +85,7 @@ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_MW] = {
 		.name		= "rxe-mw",
 		.size		= sizeof(struct rxe_mw),
+		.cleanup	= rxe_mw_cleanup,
 		.flags		= RXE_POOL_INDEX
 				| RXE_POOL_KEY,
 		.max_index	= RXE_MAX_MW_INDEX,
@@ -365,6 +366,11 @@ void rxe_drop_key(void *arg)
 	struct rxe_pool *pool = elem->pool;
 	unsigned long flags;
 
+	if (elem == NULL) {
+		pr_warn("rxe_drop_key: called with null pointer\n");
+		return;
+	}
+
 	write_lock_irqsave(&pool->pool_lock, flags);
 	rb_erase(&elem->key_node, &pool->key.tree);
 	write_unlock_irqrestore(&pool->pool_lock, flags);
@@ -388,6 +394,11 @@ void rxe_drop_index(void *arg)
 	struct rxe_pool *pool = elem->pool;
 	unsigned long flags;
 
+	if (elem == NULL) {
+		pr_warn("rxe_drop_index: called with null pointer\n");
+		return;
+	}
+
 	write_lock_irqsave(&pool->pool_lock, flags);
 	clear_bit(elem->index - pool->index.min_index, pool->index.table);
 	rb_erase(&elem->index_node, &pool->index.tree);
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index e0564d7b0ff7..2a38b7cdf4a8 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -583,11 +583,32 @@ static void update_state(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 			  jiffies + qp->qp_timeout_jiffies);
 }
 
+static int local_invalidate(struct rxe_dev *rxe, struct rxe_send_wqe *wqe)
+{
+	int ret;
+	struct rxe_mr *mr;
+	struct rxe_mw *mw;
+	u32 key = wqe->wr.ex.invalidate_rkey;
+
+	if ((mr = rxe_pool_get_key(&rxe->mr_pool, &key))) {
+		ret = rxe_invalidate_mr(mr, 0);
+		rxe_drop_ref(mr);
+	} else if ((mw = rxe_pool_get_key(&rxe->mw_pool, &key))) {
+		ret = rxe_invalidate_mw(mw, 0);
+		rxe_drop_ref(mw);
+	} else {
+		ret = -EINVAL;
+		pr_err("No mr/mw for rkey %#x\n", key);
+	}
+
+	return ret;
+}
+
 int rxe_requester(void *arg)
 {
 	struct rxe_qp *qp = (struct rxe_qp *)arg;
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
-	struct rxe_mr *rmr;
+	struct rxe_mr *mr;
 	struct rxe_pkt_info pkt;
 	struct sk_buff *skb;
 	struct rxe_send_wqe *wqe;
@@ -600,6 +621,8 @@ int rxe_requester(void *arg)
 	u32 rollback_psn;
 	int entered;
 
+	pr_info("rxe_requester: called\n");
+
 	rxe_add_ref(qp);
 
 	// this code is 'guaranteed' to never be entered more
@@ -631,57 +654,47 @@ int rxe_requester(void *arg)
 	if (unlikely(!wqe))
 		goto exit;
 
+	/* process local operations */
 	if (wqe->mask & WR_LOCAL_MASK) {
+		wqe->state = wqe_state_done;
+		wqe->status = IB_WC_SUCCESS;
+
 		switch (wqe->wr.opcode) {
 		case IB_WR_LOCAL_INV:
-			rmr = rxe_pool_get_index(&rxe->mr_pool,
-				 wqe->wr.ex.invalidate_rkey >> 8);
-			if (!rmr) {
-				pr_err("No mr for key %#x\n",
-				       wqe->wr.ex.invalidate_rkey);
-				wqe->state = wqe_state_error;
-				wqe->status = IB_WC_MW_BIND_ERR;
-				goto exit;
-			}
-			// TODO this can race with external access
-			// to the MR in rxe_resp unless you can know
-			// that all accesses are done
-			rmr->state = RXE_MEM_STATE_FREE;
-			rxe_drop_ref(rmr);
-			wqe->state = wqe_state_done;
-			wqe->status = IB_WC_SUCCESS;
+			if ((ret = local_invalidate(rxe, wqe)))
+				wqe->status = IB_WC_LOC_QP_OP_ERR;
 			break;
 		case IB_WR_REG_MR:
-			rmr = to_rmr(wqe->wr.wr.reg.mr);
-			rmr->state = RXE_MEM_STATE_VALID;
-			rmr->access = wqe->wr.wr.reg.access;
-			rmr->lkey = wqe->wr.wr.reg.key;
-			rmr->rkey = wqe->wr.wr.reg.key;
-			rmr->iova = wqe->wr.wr.reg.mr->iova;
-			wqe->state = wqe_state_done;
-			wqe->status = IB_WC_SUCCESS;
+			mr = to_rmr(wqe->wr.wr.reg.mr);
+			mr->state = RXE_MEM_STATE_VALID;
+			mr->access = wqe->wr.wr.reg.access;
+			mr->lkey = wqe->wr.wr.reg.key;
+			mr->rkey = wqe->wr.wr.reg.key;
+			mr->iova = wqe->wr.wr.reg.mr->iova;
 			break;
 		case IB_WR_BIND_MW:
-			ret = rxe_bind_mw(qp, wqe);
-			if (ret) {
-				wqe->state = wqe_state_done;
+			if ((ret = rxe_bind_mw(qp, wqe)))
 				wqe->status = IB_WC_MW_BIND_ERR;
-				// TODO err: will change status
-				// probably should not
-				goto err;
-			}
-			wqe->state = wqe_state_done;
-			wqe->status = IB_WC_SUCCESS;
 			break;
 		default:
-			pr_err("rxe_requester: unexpected LOCAL WR opcode = %d\n", wqe->wr.opcode);
-			goto exit;
+			pr_err("rxe_requester: unexpected local WR opcode = %d\n",
+				wqe->wr.opcode);
+			wqe->status = IB_WC_LOC_QP_OP_ERR;
 		}
+
+		/* we're done processing the wqe so move index */
+		qp->req.wqe_index = next_index(qp->sq.queue, qp->req.wqe_index);
+
+		/* if an error occurred do a completion pass now
+		 * (below) and then quit processing more wqes */
+		if (wqe->status != IB_WC_SUCCESS)
+			goto err;
+
+		/* if the wqe is signalled schedule a completion pass */
 		if ((wqe->wr.send_flags & IB_SEND_SIGNALED) ||
-		    qp->sq_sig_type == IB_SIGNAL_ALL_WR)
+		    (qp->sq_sig_type == IB_SIGNAL_ALL_WR))
 			rxe_run_task(&qp->comp.task, 1);
-		qp->req.wqe_index = next_index(qp->sq.queue,
-						qp->req.wqe_index);
+
 		goto next_wqe;
 	}
 
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index d54b5e7dad39..aac50c0a43c7 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -834,14 +834,38 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 		return RESPST_CLEANUP;
 }
 
+static int send_invalidate(struct rxe_dev *rxe, u32 rkey)
+{
+	int ret;
+	struct rxe_mr *mr;
+	struct rxe_mw *mw;
+
+	pr_info("send_invalidate: called\n");
+
+	if ((mr = rxe_pool_get_key(&rxe->mr_pool, &rkey))) {
+		ret = rxe_invalidate_mr(mr, 1);
+		rxe_drop_ref(mr);
+	} else if ((mw = rxe_pool_get_key(&rxe->mw_pool, &rkey))) {
+		ret = rxe_invalidate_mw(mw, 1);
+		rxe_drop_ref(mw);
+	} else {
+		pr_err("send invalidate failed for rkey = 0x%x\n", rkey);
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
 static enum resp_states do_complete(struct rxe_qp *qp,
 				    struct rxe_pkt_info *pkt)
 {
+	int ret;
 	struct rxe_cqe cqe;
 	struct ib_wc *wc = &cqe.ibwc;
 	struct ib_uverbs_wc *uwc = &cqe.uibwc;
 	struct rxe_recv_wqe *wqe = qp->resp.wqe;
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	u32 rkey = ieth_rkey(pkt);
 
 	if (unlikely(!wqe))
 		return RESPST_CLEANUP;
@@ -858,6 +882,14 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 		wc->wr_id               = wqe->wr_id;
 	}
 
+	if (pkt->mask & RXE_IETH_MASK) {
+		ret = send_invalidate(rxe, rkey);
+		if (ret) {
+			pr_err("do_complete: send invalidate failed\n");
+			// TODO
+		}
+	}
+
 	if (wc->status == IB_WC_SUCCESS) {
 		rxe_counter_inc(rxe, RXE_CNT_RDMA_RECV);
 		wc->opcode = (pkt->mask & RXE_IMMDT_MASK &&
@@ -881,7 +913,7 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 
 			if (pkt->mask & RXE_IETH_MASK) {
 				uwc->wc_flags |= IB_WC_WITH_INVALIDATE;
-				uwc->ex.invalidate_rkey = ieth_rkey(pkt);
+				uwc->ex.invalidate_rkey = rkey;
 			}
 
 			uwc->qp_num		= qp->ibqp.qp_num;
@@ -910,20 +942,8 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 			}
 
 			if (pkt->mask & RXE_IETH_MASK) {
-				struct rxe_mr *rmr;
-
 				wc->wc_flags |= IB_WC_WITH_INVALIDATE;
 				wc->ex.invalidate_rkey = ieth_rkey(pkt);
-
-				rmr = rxe_pool_get_index(&rxe->mr_pool,
-							 wc->ex.invalidate_rkey >> 8);
-				if (unlikely(!rmr)) {
-					pr_err("Bad rkey %#x invalidation\n",
-					       wc->ex.invalidate_rkey);
-					return RESPST_ERROR;
-				}
-				rmr->state = RXE_MEM_STATE_FREE;
-				rxe_drop_ref(rmr);
 			}
 
 			wc->qp			= &qp->ibqp;
@@ -933,6 +953,8 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 
 			wc->port_num		= qp->attr.port_num;
 		}
+	} else {
+		// TODO what???
 	}
 
 	/* have copy for srq and reference for !srq */
@@ -1224,9 +1246,17 @@ int rxe_responder(void *arg)
 	enum resp_states state;
 	struct rxe_pkt_info *pkt = NULL;
 	int ret = 0;
+	int entered;
 
 	rxe_add_ref(qp);
 
+	// this code is 'guaranteed' to never be entered more
+	// than once. Check to make sure that this is the case
+	entered = atomic_inc_return(&qp->resp.task.entered);
+	if (entered > 1) {
+		pr_err("rxe_responder: entered %d times\n", entered);
+	}
+
 	qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED;
 
 	if (!qp->valid) {
@@ -1405,6 +1435,7 @@ int rxe_responder(void *arg)
 exit:
 	ret = -EAGAIN;
 done:
+	atomic_dec(&qp->resp.task.entered);
 	rxe_drop_ref(qp);
 	return ret;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 476d90e3f91f..b11ab3ae87a3 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -970,7 +970,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	return &mr->ibmr;
 
 err3:
-	rxe_drop_index(mr);
 	rxe_drop_ref(mr);
 	rxe_drop_ref(pd);
 err2:
@@ -985,8 +984,6 @@ static int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 
 	mr->state = RXE_MEM_STATE_ZOMBIE;
 	rxe_drop_ref(mr->pd);
-	rxe_drop_index(mr);
-	rxe_drop_key(mr);
 	rxe_drop_ref(mr);
 	return 0;
 }
@@ -1020,7 +1017,6 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 
 err2:
 	rxe_drop_ref(pd);
-	rxe_drop_index(mr);
 	rxe_drop_ref(mr);
 err1:
 	return ERR_PTR(err);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 17/20] Implemented functional invalidate APIs
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (15 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 16/20] Implemented stubbed invalidate APIs Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 18/20] cleanup Bob Pearson
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Made progress on the details of MW alloc, bind, invalidate and
dealloc. Added a private flags field in rxe_send_wqe so that
ibv_bind_mw can safely tell the kernel that a bind came in through the
verbs call. The IBA requires that type 1 MWs be bound only by the
ibv_bind_mw API and type 2 MWs only by send WRs. But we implement the
bind_mw verbs call with WRs too, since rdma-core does not implement
it, so the two cases need to be cleanly separated.

After implementing the checking code for MWs it is clear that MRs
require more effort to comply with the standard.
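
The kernel side of the flag check appears in check_bind_mw() below; the
user space half is not part of this series. Hypothetically, the
provider's ibv_bind_mw entry point would mark the WR before posting it
internally, roughly as follows (only RXE_BIND_MW and the umw field
names come from this series, the rest is assumed):

    /* hypothetical provider code: set only on the ibv_bind_mw path */
    wr.wr.umw.flags = RXE_BIND_MW;
    /* ... remaining umw fields filled as for any bind ... */

    /* kernel side, distilled from check_bind_mw(): type 1 binds must
     * carry the flag, type 2 binds must not
     */
    if (mw->ibmw.type == IB_MW_TYPE_1 &&
        !(wqe->wr.wr.umw.flags & RXE_BIND_MW))
        return -EINVAL;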

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_loc.h   |   4 +-
 drivers/infiniband/sw/rxe/rxe_mr.c    |  44 +--
 drivers/infiniband/sw/rxe/rxe_mw.c    | 383 ++++++++++++++++++++------
 drivers/infiniband/sw/rxe/rxe_param.h |   2 +
 drivers/infiniband/sw/rxe/rxe_req.c   |  23 +-
 drivers/infiniband/sw/rxe/rxe_resp.c  |  12 +-
 drivers/infiniband/sw/rxe/rxe_verbs.c |  25 +-
 drivers/infiniband/sw/rxe/rxe_verbs.h |   9 +-
 include/uapi/rdma/rdma_user_rxe.h     |   2 +
 9 files changed, 366 insertions(+), 138 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 1ddd1b5721d8..652e0d67fe5c 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -137,7 +137,7 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
 void rxe_mr_cleanup(struct rxe_pool_entry *arg);
 
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
-int rxe_invalidate_mr(struct rxe_mr *mr, int remote);
+int rxe_invalidate_mr(struct rxe_qp *qp, struct rxe_mr *mr);
 
 /* rxe_mw.c */
 void rxe_set_mw_rkey(struct rxe_mw *mw);
@@ -146,7 +146,7 @@ struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 void rxe_mw_cleanup(struct rxe_pool_entry *arg);
-int rxe_invalidate_mw(struct rxe_mw *mw, int remote);
+int rxe_invalidate_mw(struct rxe_qp *qp, struct rxe_mw *mw);
 
 /* rxe_net.c */
 void rxe_loopback(struct sk_buff *skb);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index bebcce06e804..a983a838bf4c 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -109,25 +109,6 @@ static void rxe_mr_init(int access, struct rxe_mr *mr)
 	mr->map_shift		= ilog2(RXE_BUF_PER_MAP);
 }
 
-void rxe_mr_cleanup(struct rxe_pool_entry *arg)
-{
-	struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
-	int i;
-
-	if (mr->umem)
-		ib_umem_release(mr->umem);
-
-	if (mr->map) {
-		for (i = 0; i < mr->num_map; i++)
-			kfree(mr->map[i]);
-
-		kfree(mr->map);
-	}
-
-	rxe_drop_index(mr);
-	rxe_drop_key(mr);
-}
-
 static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
 {
 	int i;
@@ -607,11 +588,32 @@ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
 	return mr;
 }
 
-int rxe_invalidate_mr(struct rxe_mr *mr, int remote)
+int rxe_invalidate_mr(struct rxe_qp *qp, struct rxe_mr *mr)
 {
-	// more TODO here, can fail
+	// much more TODO here, can fail
+	// mw is closer to what is needed
+	// but for another day
 
 	mr->state = RXE_MEM_STATE_FREE;
 
 	return 0;
 }
+
+void rxe_mr_cleanup(struct rxe_pool_entry *arg)
+{
+	struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
+	int i;
+
+	if (mr->umem)
+		ib_umem_release(mr->umem);
+
+	if (mr->map) {
+		for (i = 0; i < mr->num_map; i++)
+			kfree(mr->map[i]);
+
+		kfree(mr->map);
+	}
+
+	rxe_drop_index(mr);
+	rxe_drop_key(mr);
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 7092045a2691..0c774aadf6c7 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -52,29 +52,17 @@ void rxe_set_mw_rkey(struct rxe_mw *mw)
 	pr_err("unable to get random rkey for mw\n");
 }
 
-
-/* place holder alloc and dealloc routines
- * TODO add cross references between qp and mr with mw
- * and cleanup when one side is deleted. Enough to make
- * verbs function correctly for now */
 struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 			   struct ib_udata *udata)
 {
 	int ret;
+	int index;
 	struct rxe_mw *mw;
 	struct rxe_pd *pd = to_rpd(ibpd);
 	struct rxe_dev *rxe = to_rdev(ibpd->device);
 	struct rxe_alloc_mw_resp __user *uresp;
 
-	if (udata) {
-		if (udata->outlen < sizeof(*uresp)) {
-			ret = -EINVAL;
-			goto err1;
-		}
-	}
-
-	if (unlikely((type != IB_MW_TYPE_1) &&
-		     (type != IB_MW_TYPE_2))) {
+	if (unlikely(udata && (udata->outlen < sizeof(*uresp)))) {
 		ret = -EINVAL;
 		goto err1;
 	}
@@ -82,22 +70,29 @@ struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 	rxe_add_ref(pd);
 
 	mw = rxe_alloc(&rxe->mw_pool);
-	if (!mw) {
-		rxe_drop_ref(pd);
+	if (unlikely(!mw)) {
 		ret = -ENOMEM;
-		goto err1;
+		goto err2;
 	}
 
-	rxe_add_index(mw);
-	rxe_set_mw_rkey(mw);
+	switch (type) {
+	case IB_MW_TYPE_1:
+		mw->state	= RXE_MW_STATE_VALID;
+		break;
+	case IB_MW_TYPE_2:
+		mw->state	= RXE_MW_STATE_FREE;
+		break;
+	default:
+		pr_err("attempt to allocate MW with unknown type\n");
+		ret = -EINVAL;
+		goto err3;
+	}
 
-	spin_lock_init(&mw->lock);
+	rxe_add_index(mw);
+	index = mw->pelem.index;
 
-	if (type == IB_MW_TYPE_2) {
-		mw->state		= RXE_MW_STATE_FREE;
-	} else {
-		mw->state		= RXE_MW_STATE_VALID;
-	}
+	/* o10-37.2.32: */
+	rxe_set_mw_rkey(mw);
 
 	mw->qp			= NULL;
 	mw->mr			= NULL;
@@ -106,96 +101,330 @@ struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
         mw->ibmw.pd		= ibpd;
         mw->ibmw.type		= type;
 
+	spin_lock_init(&mw->lock);
+
 	if (udata) {
 		uresp = udata->outbuf;
-		if (copy_to_user(&uresp->index, &mw->pelem.index,
-				 sizeof(u32))) {
+		if (copy_to_user(&uresp->index, &index, sizeof(index))) {
 			ret = -EFAULT;
-			goto err2;
+			goto err3;
 		}
 	}
 
 	return &mw->ibmw;
-err2:
+err3:
 	rxe_drop_ref(mw);
+err2:
 	rxe_drop_ref(pd);
 err1:
 	return ERR_PTR(ret);
 }
 
-int rxe_dealloc_mw(struct ib_mw *ibmw)
+/* Check the rules for a bind MW operation. */
+static int check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+			 struct rxe_mw *mw, struct rxe_mr *mr)
 {
-	struct rxe_mw *mw = to_rmw(ibmw);
-	struct rxe_pd *pd = to_rpd(ibmw->pd);
-	unsigned long flags;
+	/* check to see if bind operation came through
+	 * ibv_bind_mw verbs API. */
+	switch (mw->ibmw.type) {
+	case IB_MW_TYPE_1:
+		/* o10-37.2.34: */
+		if (unlikely(!(wqe->wr.wr.umw.flags & RXE_BIND_MW))) {
+			pr_err("attempt to bind type 1 MW with send WR\n");
+			return -EINVAL;
+		}
+		break;
+	case IB_MW_TYPE_2:
+		/* o10-37.2.35: */
+		if (unlikely(wqe->wr.wr.umw.flags & RXE_BIND_MW)) {
+			pr_err("attempt to bind type 2 MW with verbs API\n");
+			return -EINVAL;
+		}
 
-	spin_lock_irqsave(&mw->lock, flags);
-	mw->state = RXE_MW_STATE_INVALID;
-	spin_unlock_irqrestore(&mw->lock, flags);
+		/* C10-72: */
+		if (unlikely(qp->pd != to_rpd(mw->ibmw.pd))) {
+			pr_err("attempt to bind type 2 MW with qp"
+				" with different PD\n");
+			return -EINVAL;
+		}
 
-	rxe_drop_ref(pd);
-	rxe_drop_ref(mw);
+		/* o10-37.2.40: */
+		if (unlikely(wqe->wr.wr.umw.length == 0)) {
+			pr_err("attempt to invalidate type 2 MW by"
+				" binding with zero length\n");
+			return -EINVAL;
+		}
+
+		if (unlikely(!mr)) {
+			pr_err("attempt to invalidate type 2 MW by"
+				" binding to NULL mr\n");
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (unlikely((mw->ibmw.type == IB_MW_TYPE_1) &&
+			(mw->state != RXE_MW_STATE_VALID))) {
+		pr_err("attempt to bind a type 1 MW not in the"
+			" valid state\n");
+		return -EINVAL;
+	}
+
+	/* o10-36.2.2: */
+	if (unlikely((mw->access & IB_ZERO_BASED) &&
+			(mw->ibmw.type == IB_MW_TYPE_1))) {
+		pr_err("attempt to bind a zero based type 1 MW\n");
+		return -EINVAL;
+	}
+
+	if ((wqe->wr.wr.umw.rkey & 0xff) == (mw->ibmw.rkey & 0xff)) {
+		pr_err("attempt to bind MW with same key\n");
+		return -EINVAL;
+	}
+
+	/* remaining checks only apply to a non-NULL MR */
+	if (!mr)
+		return 0;
+
+	if (unlikely(mr->access & IB_ZERO_BASED)) {
+		pr_err("attempt to bind MW to zero based MR\n");
+		return -EINVAL;
+	}
+
+	/* o10-37.2.30: */
+	if (unlikely((mw->ibmw.type == IB_MW_TYPE_2) &&
+			(mw->state != RXE_MW_STATE_FREE))) {
+		pr_err("attempt to bind a type 2 MW not in the"
+			" free state\n");
+		return -EINVAL;
+	}
+
+	/* C10-73: */
+	if (unlikely(!(mr->access & IB_ACCESS_MW_BIND))) {
+		pr_err("attempt to bind an MW to an MR without"
+			" bind access\n");
+		return -EINVAL;
+	}
+
+	/* C10-74: */
+	if (unlikely((mw->access & (IB_ACCESS_REMOTE_WRITE |
+				    IB_ACCESS_REMOTE_ATOMIC)) &&
+	    !(mr->access & IB_ACCESS_LOCAL_WRITE))) {
+		pr_err("attempt to bind an MW with write/atomic"
+			" access to an MR without local write access\n");
+		return -EINVAL;
+	}
+
+	/* MR duplicates address and length in the private and ib
+	 * parts of the rxe_mr struct. TODO should only keep one. */
+
+	/* C10-75: */
+	if (mw->access & IB_ZERO_BASED) {
+		if (unlikely(wqe->wr.wr.umw.length > mr->length)) {
+			pr_err("attempt to bind a ZB MW outside"
+				" of the MR\n");
+			return -EINVAL;
+		}
+	} else {
+		if (unlikely((wqe->wr.wr.umw.addr < mr->iova) ||
+		    ((wqe->wr.wr.umw.addr + wqe->wr.wr.umw.length) >
+		     (mr->iova + mr->length)))) {
+			pr_err("attempt to bind a VA MW outside"
+				" of the MR\n");
+			return -EINVAL;
+		}
+	}
 
 	return 0;
 }
 
+static void do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+			struct rxe_mw *mw, struct rxe_mr *mr)
+{
+	u32 rkey;
+
+	mw->access = wqe->wr.wr.umw.access;
+	mw->state = RXE_MW_STATE_VALID;
+	mw->addr = wqe->wr.wr.umw.addr;
+	mw->length = wqe->wr.wr.umw.length;
+
+	/* get rid of existing MR if any, type 1 only */
+	if (mw->mr) {
+		atomic_dec(&mw->mr->num_mw);
+		rxe_drop_ref(mw->mr);
+		mw->mr = NULL;
+	}
+
+	/* if length != 0 bind to new MR */
+	if (mw->length) {
+		mw->mr = mr;
+		atomic_inc(&mr->num_mw);
+		rxe_add_ref(mr);
+	}
+
+	/* remember the qp if type 2, cleared by invalidate.
+	 * this is a weak reference since the qp can go
+	 * away legally; only used to compare against the
+	 * qp used to perform memory ops */
+	if (mw->ibmw.type == IB_MW_TYPE_2) {
+		mw->qp = qp;
+	}
+
+	/* key part of new rkey is provided by user for type 2
+	 * and ibv_bind_mw() for type 1 MWs */
+	rkey = mw->ibmw.rkey;
+	rxe_drop_key(mw);
+	rkey = (rkey & 0xffffff00) | (wqe->wr.wr.umw.rkey & 0x000000ff);
+	rxe_add_key(mw, &rkey);
+
+	return;
+}
+
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 {
+	int ret;
 	struct rxe_mw *mw;
 	struct rxe_mr *mr;
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	unsigned long flags;
 
 	if (qp->is_user) {
+		mw = rxe_pool_get_index(&rxe->mw_pool,
+					wqe->wr.wr.umw.mw_index);
+		if (!mw) {
+			pr_err("mw with index = %d not found\n",
+				wqe->wr.wr.umw.mw_index);
+			ret = -EINVAL;
+			goto err1;
+		}
+		mr = rxe_pool_get_index(&rxe->mr_pool,
+					wqe->wr.wr.umw.mr_index);
+		if (!mr && wqe->wr.wr.umw.length) {
+			pr_err("mr with index = %d not found\n",
+				wqe->wr.wr.umw.mr_index);
+			ret = -EINVAL;
+			goto err2;
+		}
 	} else {
 		mw = to_rmw(wqe->wr.wr.kmw.ibmw);
-		mr = to_rmr(wqe->wr.wr.kmw.ibmr);
-	}
-
-#if 0
-	wqe->wr.wr.bind_mw
-	__aligned_u64	addr;
-	__aligned_u64	length;
-	__u32	mr_rkey;
-	__u32	mw_rkey;
-	__u32	rkey;
-	__u32	access;
-
-	mw
-	struct rxe_pool_entry	pelem;			// alloc
-	struct ib_mw		ibmw;			// alloc
-		struct ib_device	*device;	// alloc
-		struct ib_pd		*pd;		// alloc
-		struct ib_uobject	*uobject;	// alloc
-		u32			rkey;
-		enum ib_mw_type         type;		// alloc
-	struct rxe_qp		*qp;			// bind
-	struct rxe_mem		*mr;			// bind
-	spinlock_t		lock;			// alloc
-	enum rxe_mw_state	state;			// all
-	u32			access;			// bind
-	u64			addr;			// bind
-	u64			length;			// bind
-#endif
+		rxe_add_ref(mw);
+		if (wqe->wr.wr.kmw.ibmr) {
+			mr = to_rmr(wqe->wr.wr.kmw.ibmr);
+			rxe_add_ref(mr);
+		} else {
+			mr = NULL;
+		}
+	}
+
+	spin_lock_irqsave(&mw->lock, flags);
+
+	/* check the rules */
+	ret = check_bind_mw(qp, wqe, mw, mr);
+	if (ret)
+		goto err3;
+
+	/* implement the change */
+	do_bind_mw(qp, wqe, mw, mr);
+err3:
+	spin_unlock_irqrestore(&mw->lock, flags);
+
+	if (mr)
+		rxe_drop_ref(mr);
+err2:
+	rxe_drop_ref(mw);
+err1:
+	return ret;
+}
+
+static int check_invalidate_mw(struct rxe_qp *qp, struct rxe_mw *mw)
+{
+	/* o10-37.2.26: */
+	if (unlikely(mw->ibmw.type == IB_MW_TYPE_1)) {
+		pr_err("attempt to invalidate a type 1 MW\n");
+		return -EINVAL;
+	}
+
+	if (unlikely(mw->state != RXE_MW_STATE_VALID)) {
+		pr_warn("attempt to invalidate a MW that"
+			" is not valid\n");
+		return -EINVAL;
+	}
+
 	return 0;
 }
 
-void rxe_mw_cleanup(struct rxe_pool_entry *arg)
+static void do_invalidate_mw(struct rxe_mw *mw)
 {
-	struct rxe_mw *mw = container_of(arg, typeof(*mw), pelem);
+	mw->qp = NULL;
 
-	rxe_drop_index(mw);
-	rxe_drop_key(mw);
+	atomic_dec(&mw->mr->num_mw);
+	rxe_drop_ref(mw->mr);
+	mw->mr = NULL;
+
+	mw->access = 0;
+	mw->addr = 0; 
+	mw->length = 0;
+	mw->state = RXE_MW_STATE_FREE;
 }
 
-int rxe_invalidate_mw(struct rxe_mw *mw, int remote)
+int rxe_invalidate_mw(struct rxe_qp *qp, struct rxe_mw *mw)
 {
-	/* type 1 MWs don't support invalidate */
-	if (mw->ibmw.type == IB_MW_TYPE_1) {
-		pr_err("attempt to %s-invalidate a type 1 mw\n",
-			remote ? "send" : "local");
-		return -EINVAL;
+	int ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&mw->lock, flags);
+
+	ret = check_invalidate_mw(qp, mw);
+	if (ret)
+		goto err;
+
+	do_invalidate_mw(mw);
+err:
+	spin_unlock_irqrestore(&mw->lock, flags);
+
+	return ret;
+}
+
+static void do_deallocate_mw(struct rxe_mw *mw)
+{
+	mw->qp = NULL;
+
+	if (mw->mr) {
+		atomic_dec(&mw->mr->num_mw);
+		rxe_drop_ref(mw->mr);
+		mw->mr = NULL;
 	}
 
-	mw->state = RXE_MEM_STATE_FREE;
+	mw->access = 0;
+	mw->addr = 0; 
+	mw->length = 0;
+	mw->state = RXE_MW_STATE_INVALID;
+}
+
+int rxe_dealloc_mw(struct ib_mw *ibmw)
+{
+	struct rxe_mw *mw = to_rmw(ibmw);
+	struct rxe_pd *pd = to_rpd(ibmw->pd);
+	unsigned long flags;
+
+	spin_lock_irqsave(&mw->lock, flags);
+
+	do_deallocate_mw(mw);
+
+	spin_unlock_irqrestore(&mw->lock, flags);
+
+	rxe_drop_ref(pd);
+	rxe_drop_ref(mw);
 
 	return 0;
 }
+
+void rxe_mw_cleanup(struct rxe_pool_entry *arg)
+{
+	struct rxe_mw *mw = container_of(arg, typeof(*mw), pelem);
+
+	rxe_drop_index(mw);
+	rxe_drop_key(mw);
+}
diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
index 7f914dde98a7..41e7b74efcbc 100644
--- a/drivers/infiniband/sw/rxe/rxe_param.h
+++ b/drivers/infiniband/sw/rxe/rxe_param.h
@@ -75,7 +75,9 @@ enum rxe_device_param {
 					| IB_DEVICE_SYS_IMAGE_GUID
 					| IB_DEVICE_RC_RNR_NAK_GEN
 					| IB_DEVICE_SRQ_RESIZE
+					| IB_DEVICE_MEM_WINDOW
 					| IB_DEVICE_MEM_MGT_EXTENSIONS
+					| IB_DEVICE_MEM_WINDOW_TYPE_2B
 					| IB_DEVICE_ALLOW_USER_UNREG,
 	RXE_MAX_SGE			= 32,
 	RXE_MAX_WQE_SIZE		= sizeof(struct rxe_send_wqe) +
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 2a38b7cdf4a8..ad747f230318 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -583,18 +583,19 @@ static void update_state(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 			  jiffies + qp->qp_timeout_jiffies);
 }
 
-static int local_invalidate(struct rxe_dev *rxe, struct rxe_send_wqe *wqe)
+static int local_invalidate(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 {
 	int ret;
 	struct rxe_mr *mr;
 	struct rxe_mw *mw;
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	u32 key = wqe->wr.ex.invalidate_rkey;
 
 	if ((mr = rxe_pool_get_key(&rxe->mr_pool, &key))) {
-		ret = rxe_invalidate_mr(mr, 0);
+		ret = rxe_invalidate_mr(qp, mr);
 		rxe_drop_ref(mr);
 	} else if ((mw = rxe_pool_get_key(&rxe->mw_pool, &key))) {
-		ret = rxe_invalidate_mw(mw, 0);
+		ret = rxe_invalidate_mw(qp, mw);
 		rxe_drop_ref(mw);
 	} else {
 		ret = -EINVAL;
@@ -607,7 +608,6 @@ static int local_invalidate(struct rxe_dev *rxe, struct rxe_send_wqe *wqe)
 int rxe_requester(void *arg)
 {
 	struct rxe_qp *qp = (struct rxe_qp *)arg;
-	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	struct rxe_mr *mr;
 	struct rxe_pkt_info pkt;
 	struct sk_buff *skb;
@@ -621,8 +621,6 @@ int rxe_requester(void *arg)
 	u32 rollback_psn;
 	int entered;
 
-	pr_info("rxe_requester: called\n");
-
 	rxe_add_ref(qp);
 
 	// this code is 'guaranteed' to never be entered more
@@ -655,13 +653,18 @@ int rxe_requester(void *arg)
 		goto exit;
 
 	/* process local operations */
+	/* the current behavior if an error occurs
+	 * for any of these local operations is to
+	 * generate an error work completion, then
+	 * move the QP to the error state and flush
+	 * any remaining WRs */
 	if (wqe->mask & WR_LOCAL_MASK) {
 		wqe->state = wqe_state_done;
 		wqe->status = IB_WC_SUCCESS;
 
 		switch (wqe->wr.opcode) {
 		case IB_WR_LOCAL_INV:
-			if ((ret = local_invalidate(rxe, wqe)))
+			if ((ret = local_invalidate(qp, wqe)))
 				wqe->status = IB_WC_LOC_QP_OP_ERR;
 			break;
 		case IB_WR_REG_MR:
@@ -677,8 +680,10 @@ int rxe_requester(void *arg)
 				wqe->status = IB_WC_MW_BIND_ERR;
 			break;
 		default:
-			pr_err("rxe_requester: unexpected local WR opcode = %d\n",
-				wqe->wr.opcode);
+			pr_err("rxe_requester: unexpected local"
+				" WR opcode = %d\n", wqe->wr.opcode);
+			/* these should be memory operation errors
+			 * but there isn't one available */
 			wqe->status = IB_WC_LOC_QP_OP_ERR;
 		}
 
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index aac50c0a43c7..49cd77cd6264 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -834,19 +834,17 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
 		return RESPST_CLEANUP;
 }
 
-static int send_invalidate(struct rxe_dev *rxe, u32 rkey)
+static int send_invalidate(struct rxe_qp *qp, struct rxe_dev *rxe, u32 rkey)
 {
 	int ret;
 	struct rxe_mr *mr;
 	struct rxe_mw *mw;
 
-	pr_info("send_invalidate: called\n");
-
 	if ((mr = rxe_pool_get_key(&rxe->mr_pool, &rkey))) {
-		ret = rxe_invalidate_mr(mr, 1);
+		ret = rxe_invalidate_mr(qp, mr);
 		rxe_drop_ref(mr);
 	} else if ((mw = rxe_pool_get_key(&rxe->mw_pool, &rkey))) {
-		ret = rxe_invalidate_mw(mw, 1);
+		ret = rxe_invalidate_mw(qp, mw);
 		rxe_drop_ref(mw);
 	} else {
 		pr_err("send invalidate failed for rkey = 0x%x\n", rkey);
@@ -883,9 +881,9 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 	}
 
 	if (pkt->mask & RXE_IETH_MASK) {
-		ret = send_invalidate(rxe, rkey);
+		ret = send_invalidate(qp, rxe, rkey);
 		if (ret) {
-			pr_err("do_complete: send invalidate failed\n");
+			pr_err("send with invalidate failed\n");
 			// TODO
 		}
 	}
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index b11ab3ae87a3..caaacfabadbc 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -172,8 +172,6 @@ static int rxe_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
 	struct rxe_dev *rxe = to_rdev(ibpd->device);
 	struct rxe_pd *pd = to_rpd(ibpd);
 
-	pr_info("rxe_alloc_pd: called\n");
-
 	return rxe_add_to_pool(&rxe->pd_pool, &pd->pelem);
 }
 
@@ -181,8 +179,6 @@ static void rxe_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
 {
 	struct rxe_pd *pd = to_rpd(ibpd);
 
-	pr_info("rxe_dealloc_pd: called\n");
-
 	rxe_drop_ref(pd);
 }
 
@@ -414,8 +410,6 @@ static struct ib_qp *rxe_create_qp(struct ib_pd *ibpd,
 	struct rxe_qp *qp;
 	struct rxe_create_qp_resp __user *uresp = NULL;
 
-	pr_info("rxe_create_qp: called\n");
-
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp))
 			return ERR_PTR(-EINVAL);
@@ -463,8 +457,6 @@ static int rxe_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 	struct rxe_dev *rxe = to_rdev(ibqp->device);
 	struct rxe_qp *qp = to_rqp(ibqp);
 
-	pr_info("rxe_modify_qp: called\n");
-
 	err = rxe_qp_chk_attr(rxe, qp, attr, mask);
 	if (err)
 		goto err1;
@@ -484,8 +476,6 @@ static int rxe_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
 {
 	struct rxe_qp *qp = to_rqp(ibqp);
 
-	pr_info("rxe_query_qp: called\n");
-
 	rxe_qp_to_init(qp, init);
 	rxe_qp_to_attr(qp, attr, mask);
 
@@ -496,8 +486,6 @@ static int rxe_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
 {
 	struct rxe_qp *qp = to_rqp(ibqp);
 
-	pr_info("rxe_destroy_qp: called\n");
-
 	rxe_qp_destroy(qp);
 	rxe_drop_index(qp);
 	rxe_drop_ref(qp);
@@ -794,8 +782,6 @@ static int rxe_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
 	struct rxe_cq *cq = to_rcq(ibcq);
 	struct rxe_create_cq_resp __user *uresp = NULL;
 
-	pr_info("rxe_create_cq: called\n");
-
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp))
 			return -EINVAL;
@@ -821,8 +807,6 @@ static void rxe_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 {
 	struct rxe_cq *cq = to_rcq(ibcq);
 
-	pr_info("rxe_destroy_cq: called\n");
-
 	rxe_cq_disable(cq);
 
 	rxe_drop_ref(cq);
@@ -862,8 +846,6 @@ static int rxe_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 	struct rxe_cqe *cqe;
 	unsigned long flags;
 
-	pr_info("rxe_poll_cq: called\n");
-
 	spin_lock_irqsave(&cq->cq_lock, flags);
 	for (i = 0; i < num_entries; i++) {
 		cqe = queue_head(cq->queue);
@@ -934,8 +916,6 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	struct rxe_pd *pd = to_rpd(ibpd);
 	struct rxe_reg_mr_resp __user *uresp = NULL;
 
-	pr_info("rxe_reg_user_mr: called\n");
-
 	if (udata) {
 		if (udata->outlen < sizeof(*uresp)) {
 			err = -EINVAL;
@@ -980,7 +960,10 @@ static int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
 {
 	struct rxe_mr *mr = to_rmr(ibmr);
 
-	pr_info("rxe_dereg_user_mr: called\n");
+	if (atomic_read(&mr->num_mw)) {
+		pr_err("attempt to dereg MR with bound MWs\n");
+		return -EBUSY;
+	}
 
 	mr->state = RXE_MEM_STATE_ZOMBIE;
 	rxe_drop_ref(mr->pd);
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index ebe4157fbcdd..2fe8433d0801 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -344,6 +344,8 @@ struct rxe_mr {
 	u32			max_buf;
 	u32			num_map;
 
+	atomic_t		num_mw;
+
 	struct rxe_map		**map;
 };
 
@@ -353,11 +355,16 @@ enum rxe_mw_state {
 	RXE_MW_STATE_VALID,
 };
 
+enum rxe_send_flags {
+	/* flag indicates bind call came through verbs API */
+	RXE_BIND_MW		= (1 << 0),
+};
+
 struct rxe_mw {
 	struct rxe_pool_entry	pelem;
 	struct ib_mw		ibmw;
+	struct rxe_mr		*mr;
 	struct rxe_qp		*qp;	/* type 2B only */
-	struct rxe_mem		*mr;
 	spinlock_t		lock;
 	enum rxe_mw_state	state;
 	u32			access;
diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h
index c1e84cd69c37..05fe8bef947d 100644
--- a/include/uapi/rdma/rdma_user_rxe.h
+++ b/include/uapi/rdma/rdma_user_rxe.h
@@ -102,6 +102,7 @@ struct rxe_send_wr {
 			__u32   pad2;
 			__u32	rkey;
 			__u32	access;
+			__u32	flags;
 		} umw;
 		/* below are only used by the kernel */
 		struct {
@@ -117,6 +118,7 @@ struct rxe_send_wr {
 			};
 			__u32	rkey;
 			__u32	access;
+			__u32	flags;
 		} kmw;
 		struct {
 			union {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread
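
For context, the bind and invalidate paths in the patch above are normally
driven from user space through the standard libibverbs memory window calls.
Below is a minimal consumer-side sketch, assuming an existing pd, qp,
registered mr and a buffer buf/len, and assuming rdma-core's rxe provider
plumbs these verbs through to the new kernel paths; the helper names are
illustrative and the rxe-specific uresp/index handling, error handling and
completion polling are trimmed.

	#include <stdint.h>
	#include <stddef.h>
	#include <infiniband/verbs.h>

	/* sketch only: bind a type 2 MW over [buf, buf+len) of mr */
	static uint32_t bind_type2_window(struct ibv_pd *pd, struct ibv_qp *qp,
					  struct ibv_mr *mr, void *buf, size_t len)
	{
		struct ibv_mw *mw = ibv_alloc_mw(pd, IBV_MW_TYPE_2);
		struct ibv_send_wr wr = {0}, *bad;

		wr.opcode = IBV_WR_BIND_MW;
		wr.send_flags = IBV_SEND_SIGNALED;
		wr.bind_mw.mw = mw;
		/* consumer supplies the low 8 bits of the new rkey */
		wr.bind_mw.rkey = ibv_inc_rkey(mw->rkey);
		wr.bind_mw.bind_info.mr = mr;
		wr.bind_mw.bind_info.addr = (uintptr_t)buf;
		wr.bind_mw.bind_info.length = len;
		wr.bind_mw.bind_info.mw_access_flags = IBV_ACCESS_REMOTE_WRITE;
		ibv_post_send(qp, &wr, &bad);

		mw->rkey = wr.bind_mw.rkey;
		return mw->rkey;	/* advertised to the remote peer */
	}

	/* sketch only: take the window back out of service locally */
	static void invalidate_window(struct ibv_qp *qp, uint32_t rkey)
	{
		struct ibv_send_wr wr = {0}, *bad;

		wr.opcode = IBV_WR_LOCAL_INV;
		wr.send_flags = IBV_SEND_SIGNALED;
		wr.invalidate_rkey = rkey;
		ibv_post_send(qp, &wr, &bad);
	}

Note that the underlying MR must have been registered with
IBV_ACCESS_MW_BIND (and IBV_ACCESS_LOCAL_WRITE for a write or atomic
enabled window), otherwise check_bind_mw() above rejects the bind per
C10-73/C10-74. A remote SEND with Invalidate carrying the rkey reaches the
same rxe_invalidate_mw() path as the local invalidate shown here.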

* [PATCH 18/20] cleanup
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (16 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 17/20] Implemented functional " Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 19/20] fixed white space issues Bob Pearson
  2020-08-15  4:58 ` [PATCH 20/20] fixed checkpatch issues for all files in rxe Bob Pearson
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

This patch culls some leftover comments and makes things a little
neater.

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_comp.c  |  10 +--
 drivers/infiniband/sw/rxe/rxe_loc.h   |  37 +++------
 drivers/infiniband/sw/rxe/rxe_mr.c    | 106 ++++++++++++------------
 drivers/infiniband/sw/rxe/rxe_mw.c    | 115 +++++++++++++++++++-------
 drivers/infiniband/sw/rxe/rxe_req.c   |  33 +++-----
 drivers/infiniband/sw/rxe/rxe_resp.c  |  57 ++++++++-----
 drivers/infiniband/sw/rxe/rxe_verbs.h |  17 ++--
 7 files changed, 208 insertions(+), 167 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index d2a094621486..ed9e27eeaadd 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -790,23 +790,19 @@ int rxe_completer(void *arg)
 		}
 	}
 
-	/* these are the same. need to merge them TODO */
 exit:
 	/* we come here if we are done with processing and want the task to
-	 * exit from the loop calling us -- to call us again later
-	 */
+	 * exit from the loop calling us */
 	WARN_ON_ONCE(skb);
 	atomic_dec(&qp->comp.task.entered);
 	rxe_drop_ref(qp);
 	return -EAGAIN;
 
 done:
-	/* we come here if we have processed a packet we want the task to call
-	 * us again to see if there is anything else to do
-	 */
+	/* we come here if we have processed a packet and we want
+	 * to be called again to see if there is anything else to do */
 	WARN_ON_ONCE(skb);
 	atomic_dec(&qp->comp.task.entered);
 	rxe_drop_ref(qp);
 	return 0;
-
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 652e0d67fe5c..2421ca311845 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -99,45 +99,26 @@ int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
 
 /* rxe_mr.c */
 void rxe_set_mr_lkey(struct rxe_mr *mr);
-
 enum copy_direction {
 	to_mr_obj,
 	from_mr_obj,
 };
-
-void rxe_mr_init_dma(struct rxe_pd *pd,
-		     int access, struct rxe_mr *mr);
-
-int rxe_mr_init_user(struct rxe_pd *pd, u64 start,
-		      u64 length, u64 iova, int access, struct ib_udata *udata,
-		      struct rxe_mr *mr);
-
-int rxe_mr_init_fast(struct rxe_pd *pd,
-		      int max_pages, struct rxe_mr *mr);
-
+void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr);
+int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length,
+		     u64 iova, int access, struct ib_udata *udata,
+		     struct rxe_mr *mr);
+int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr);
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
-		 int length, enum copy_direction dir, u32 *crcp);
-
+		int length, enum copy_direction dir, u32 *crcp);
 int copy_data(struct rxe_pd *pd, int access,
 	      struct rxe_dma_info *dma, void *addr, int length,
 	      enum copy_direction dir, u32 *crcp);
-
 void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
-
-enum lookup_type {
-	lookup_local,
-	lookup_remote,
-};
-
-struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
-			   enum lookup_type type);
-
-int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
-
 void rxe_mr_cleanup(struct rxe_pool_entry *arg);
-
 int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
 int rxe_invalidate_mr(struct rxe_qp *qp, struct rxe_mr *mr);
+int rxe_mr_check_access(struct rxe_qp *qp, struct rxe_mr *mr,
+			int access, u64 va, u32 resid);
 
 /* rxe_mw.c */
 void rxe_set_mw_rkey(struct rxe_mw *mw);
@@ -147,6 +128,8 @@ int rxe_dealloc_mw(struct ib_mw *ibmw);
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
 void rxe_mw_cleanup(struct rxe_pool_entry *arg);
 int rxe_invalidate_mw(struct rxe_qp *qp, struct rxe_mw *mw);
+int rxe_mw_check_access(struct rxe_qp *qp, struct rxe_mw *mw,
+			int access, u64 va, u32 resid);
 
 /* rxe_net.c */
 void rxe_loopback(struct sk_buff *skb);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index a983a838bf4c..ce64d4101888 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -43,33 +43,14 @@ void rxe_set_mr_lkey(struct rxe_mr *mr)
 
 	do {
 		get_random_bytes(&lkey, sizeof(lkey));
-		lkey &= 0x7fffffff;
+		lkey &= ~IS_MW;
 		if (likely(lkey && (rxe_add_key(mr, &lkey) == 0)))
 			return;
 	} while (tries++ < 10);
 	pr_err("unable to get random key for mr\n");
 }
 
-#if 0
-/*
- * lfsr (linear feedback shift register) with period 255
- */
-static u8 rxe_get_key(void)
-{
-	static u32 key = 1;
-
-	key = key << 1;
-
-	key |= (0 != (key & 0x100)) ^ (0 != (key & 0x10))
-		^ (0 != (key & 0x80)) ^ (0 != (key & 0x40));
-
-	key &= 0xff;
-
-	return key;
-}
-#endif
-
-int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
+static int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
 {
 	switch (mr->type) {
 	case RXE_MEM_TYPE_DMA:
@@ -430,6 +411,25 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 	return err;
 }
 
+static struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 lkey)
+{
+	struct rxe_mr *mr;
+	struct rxe_dev *rxe = to_rdev(pd->ibpd.device);
+
+	mr = rxe_pool_get_key(&rxe->mr_pool, &lkey);
+	if (!mr)
+		return NULL;
+
+	if (unlikely((mr->ibmr.lkey != lkey) || (mr->pd != pd) ||
+		     (access && !(access & mr->access)) ||
+		     (mr->state != RXE_MEM_STATE_VALID))) {
+		rxe_drop_ref(mr);
+		return NULL;
+	}
+
+	return mr;
+}
+
 /* copy data in or out of a wqe, i.e. sg list
  * under the control of a dma descriptor
  */
@@ -459,7 +459,7 @@ int copy_data(
 	}
 
 	if (sge->length && (offset < sge->length)) {
-		mr = lookup_mr(pd, access, sge->lkey, lookup_local);
+		mr = lookup_mr(pd, access, sge->lkey);
 		if (!mr) {
 			err = -EINVAL;
 			goto err1;
@@ -484,8 +484,7 @@ int copy_data(
 			}
 
 			if (sge->length) {
-				mr = lookup_mr(pd, access, sge->lkey,
-						 lookup_local);
+				mr = lookup_mr(pd, access, sge->lkey);
 				if (!mr) {
 					err = -EINVAL;
 					goto err1;
@@ -560,34 +559,6 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
 	return 0;
 }
 
-/* (1) find the mr (mr or mw) corresponding to lkey/rkey
- *     depending on lookup_type
- * (2) verify that the (qp) pd matches the mr pd
- * (3) verify that the mr can support the requested access
- * (4) verify that mr state is valid
- */
-struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
-			 enum lookup_type type)
-{
-	struct rxe_mr *mr;
-	struct rxe_dev *rxe = to_rdev(pd->ibpd.device);
-
-	mr = rxe_pool_get_key(&rxe->mr_pool, &key);
-	if (!mr)
-		return NULL;
-
-	if (unlikely((type == lookup_local && mr->ibmr.lkey != key) ||
-		     (type == lookup_remote && mr->ibmr.rkey != key) ||
-		     mr->pd != pd ||
-		     (access && !(access & mr->access)) ||
-		     mr->state != RXE_MEM_STATE_VALID)) {
-		rxe_drop_ref(mr);
-		mr = NULL;
-	}
-
-	return mr;
-}
-
 int rxe_invalidate_mr(struct rxe_qp *qp, struct rxe_mr *mr)
 {
 	// much more TODO here, can fail
@@ -599,6 +570,37 @@ int rxe_invalidate_mr(struct rxe_qp *qp, struct rxe_mr *mr)
 	return 0;
 }
 
+int rxe_mr_check_access(struct rxe_qp *qp, struct rxe_mr *mr,
+			int access, u64 va, u32 resid)
+{
+	int ret;
+	struct rxe_pd *pd = to_rpd(mr->ibmr.pd);
+
+	if (unlikely(mr->state != RXE_MEM_STATE_VALID)) {
+		pr_err("attempt to access a MR that is"
+			" not in the valid state\n");
+		return -EINVAL;
+	}
+
+	/* C10-56 */
+	if (unlikely(pd != qp->pd)) {
+		pr_err("attempt to access a MR with a"
+			" different PD than the QP\n");
+		return -EINVAL;
+	}
+
+	/* C10-57 */
+	if (unlikely(access && !(access & mr->access))) {
+		pr_err("attempt to access a MR that does"
+			" not have the required access rights\n");
+		return -EINVAL;
+	}
+
+	ret = mr_check_range(mr, va, resid);
+
+	return ret;
+}
+
 void rxe_mr_cleanup(struct rxe_pool_entry *arg)
 {
 	struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 0c774aadf6c7..6b998527b34b 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -44,8 +44,8 @@ void rxe_set_mw_rkey(struct rxe_mw *mw)
 
 	do {
 		get_random_bytes(&rkey, sizeof(rkey));
-		rkey |= 0x80000000;
-		if (likely((rkey & 0x7fffffff) &&
+		rkey |= IS_MW;
+		if (likely((rkey & ~IS_MW) &&
 			   (rxe_add_key(mw, &rkey) == 0)))
 			return;
 	} while (tries++ < 10);
@@ -77,10 +77,10 @@ struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 
 	switch (type) {
 	case IB_MW_TYPE_1:
-		mw->state	= RXE_MW_STATE_VALID;
+		mw->state	= RXE_MEM_STATE_VALID;
 		break;
 	case IB_MW_TYPE_2:
-		mw->state	= RXE_MW_STATE_FREE;
+		mw->state	= RXE_MEM_STATE_FREE;
 		break;
 	default:
 		pr_err("attempt to allocate MW with unknown type\n");
@@ -166,7 +166,7 @@ static int check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	}
 
 	if (unlikely((mw->ibmw.type == IB_MW_TYPE_1) &&
-			(mw->state != RXE_MW_STATE_VALID))) {
+			(mw->state != RXE_MEM_STATE_VALID))) {
 		pr_err("attempt to bind a type 1 MW not in the"
 			" valid state\n");
 		return -EINVAL;
@@ -195,7 +195,7 @@ static int check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 
 	/* o10-37.2.30: */
 	if (unlikely((mw->ibmw.type == IB_MW_TYPE_2) &&
-			(mw->state != RXE_MW_STATE_FREE))) {
+			(mw->state != RXE_MEM_STATE_FREE))) {
 		pr_err("attempt to bind a type 2 MW not in the"
 			" free state\n");
 		return -EINVAL;
@@ -217,9 +217,6 @@ static int check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 		return -EINVAL;
 	}
 
-	/* MR duplicates address and length in the private and ib
-	 * parts of the rxe_mr struct. TODO should only keep one. */
-
 	/* C10-75: */
 	if (mw->access & IB_ZERO_BASED) {
 		if (unlikely(wqe->wr.wr.umw.length > mr->length)) {
@@ -240,13 +237,29 @@ static int check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	return 0;
 }
 
-static void do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+static int do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 			struct rxe_mw *mw, struct rxe_mr *mr)
 {
+	int ret;
 	u32 rkey;
+	u32 new_rkey;
+
+	/* key part of new rkey is provided by user for type 2
+	 * and ibv_bind_mw() for type 1 MWs */
+	rkey = mw->ibmw.rkey;
+	rxe_drop_key(mw);
+	new_rkey = (rkey & 0xffffff00) | (wqe->wr.wr.umw.rkey & 0x000000ff);
+	ret = rxe_add_key(mw, &new_rkey);
+	if (ret) {
+		/* this should never happen */
+		pr_err("shouldn't happen unable to set new rkey\n");
+		/* try to put back the old one */
+		rxe_add_key(mw, &rkey);
+		return ret;
+	}
 
 	mw->access = wqe->wr.wr.umw.access;
-	mw->state = RXE_MW_STATE_VALID;
+	mw->state = RXE_MEM_STATE_VALID;
 	mw->addr = wqe->wr.wr.umw.addr;
 	mw->length = wqe->wr.wr.umw.length;
 
@@ -272,14 +285,7 @@ static void do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 		mw->qp = qp;
 	}
 
-	/* key part of new rkey is provided by user for type 2
-	 * and ibv_bind_mw() for type 1 MWs */
-	rkey = mw->ibmw.rkey;
-	rxe_drop_key(mw);
-	rkey = (rkey & 0xffffff00) | (wqe->wr.wr.umw.rkey & 0x000000ff);
-	rxe_add_key(mw, &rkey);
-
-	return;
+	return 0;
 }
 
 int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
@@ -326,7 +332,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 		goto err3;
 
 	/* implement the change */
-	do_bind_mw(qp, wqe, mw, mr);
+	ret = do_bind_mw(qp, wqe, mw, mr);
 err3:
 	spin_unlock_irqrestore(&mw->lock, flags);
 
@@ -340,15 +346,15 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 
 static int check_invalidate_mw(struct rxe_qp *qp, struct rxe_mw *mw)
 {
-	/* o10-37.2.26: */
-	if (unlikely(mw->ibmw.type == IB_MW_TYPE_1)) {
-		pr_err("attempt to invalidate a type 1 MW\n");
+	if (unlikely(mw->state != RXE_MEM_STATE_VALID)) {
+		pr_warn("attempt to invalidate a MW that"
+			" is not valid\n");
 		return -EINVAL;
 	}
 
-	if (unlikely(mw->state != RXE_MW_STATE_VALID)) {
-		pr_warn("attempt to invalidate a MW that"
-			" is not valid\n");
+	/* o10-37.2.26: */
+	if (unlikely(mw->ibmw.type == IB_MW_TYPE_1)) {
+		pr_err("attempt to invalidate a type 1 MW\n");
 		return -EINVAL;
 	}
 
@@ -366,7 +372,7 @@ static void do_invalidate_mw(struct rxe_mw *mw)
 	mw->access = 0;
 	mw->addr = 0; 
 	mw->length = 0;
-	mw->state = RXE_MW_STATE_FREE;
+	mw->state = RXE_MEM_STATE_FREE;
 }
 
 int rxe_invalidate_mw(struct rxe_qp *qp, struct rxe_mw *mw)
@@ -387,7 +393,7 @@ int rxe_invalidate_mw(struct rxe_qp *qp, struct rxe_mw *mw)
 	return ret;
 }
 
-static void do_deallocate_mw(struct rxe_mw *mw)
+static void do_dealloc_mw(struct rxe_mw *mw)
 {
 	mw->qp = NULL;
 
@@ -397,10 +403,11 @@ static void do_deallocate_mw(struct rxe_mw *mw)
 		mw->mr = NULL;
 	}
 
+	mw->ibmw.pd = NULL;
 	mw->access = 0;
 	mw->addr = 0; 
 	mw->length = 0;
-	mw->state = RXE_MW_STATE_INVALID;
+	mw->state = RXE_MEM_STATE_INVALID;
 }
 
 int rxe_dealloc_mw(struct ib_mw *ibmw)
@@ -411,7 +418,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 
 	spin_lock_irqsave(&mw->lock, flags);
 
-	do_deallocate_mw(mw);
+	do_dealloc_mw(mw);
 
 	spin_unlock_irqrestore(&mw->lock, flags);
 
@@ -421,6 +428,54 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
 	return 0;
 }
 
+int rxe_mw_check_access(struct rxe_qp *qp, struct rxe_mw *mw,
+			int access, u64 va, u32 resid)
+{
+	struct rxe_pd *pd = to_rpd(mw->ibmw.pd);
+
+	if (unlikely(mw->state != RXE_MEM_STATE_VALID)) {
+		pr_err("attempt to access a MW that is"
+			" not in the valid state\n");
+		return -EINVAL;
+	}
+
+	/* C10-76.2.1 */
+	if (unlikely((mw->ibmw.type == IB_MW_TYPE_1) && (pd != qp->pd))) {
+		pr_err("attempt to access a type 1 MW with a"
+			" different PD than the QP\n");
+		return -EINVAL;
+	}
+
+	/* o10-37.2.43 */
+	if (unlikely((mw->ibmw.type == IB_MW_TYPE_2) && (mw->qp != qp))) {
+		pr_err("attempt to access a type 2 MW that is"
+			" associated with a different QP\n");
+		return -EINVAL;
+	}
+
+	/* C10-77 */
+	if (unlikely(access && !(access & mw->access))) {
+		pr_err("attempt to access a MW that does"
+			" not have the required access rights\n");
+		return -EINVAL;
+	}
+
+	if (mw->access & IB_ZERO_BASED) {
+		if (unlikely((va + resid) > mw->length)) {
+			pr_err("attempt to access a MW out of bounds\n");
+			return -EINVAL;
+		}
+	} else {
+		if (unlikely((va < mw->addr) ||
+			((va + resid) > (mw->addr + mw->length)))) {
+			pr_err("attempt to access a MW out of bounds\n");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 void rxe_mw_cleanup(struct rxe_pool_entry *arg)
 {
 	struct rxe_mw *mw = container_of(arg, typeof(*mw), pelem);
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index ad747f230318..f0fa195fcc70 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -591,7 +591,7 @@ static int local_invalidate(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	u32 key = wqe->wr.ex.invalidate_rkey;
 
-	if ((mr = rxe_pool_get_key(&rxe->mr_pool, &key))) {
+	if (!(key & IS_MW) && (mr = rxe_pool_get_key(&rxe->mr_pool, &key))) {
 		ret = rxe_invalidate_mr(qp, mr);
 		rxe_drop_ref(mr);
 	} else if ((mw = rxe_pool_get_key(&rxe->mw_pool, &key))) {
@@ -732,12 +732,7 @@ int rxe_requester(void *arg)
 	payload = (mask & RXE_WRITE_OR_SEND) ? wqe->dma.resid : 0;
 	if (payload > mtu) {
 		if (qp_type(qp) == IB_QPT_UD) {
-			/* C10-93.1.1: If the total sum of all the buffer lengths specified for a
-			 * UD message exceeds the MTU of the port as returned by QueryHCA, the CI
-			 * shall not emit any packets for this message. Further, the CI shall not
-			 * generate an error due to this condition.
-			 */
-
+			/* C10-93.1.1 */
 			/* fake a successful UD send */
 			wqe->first_psn = qp->req.psn;
 			wqe->last_psn = qp->req.psn;
@@ -747,8 +742,13 @@ int rxe_requester(void *arg)
 						       qp->req.wqe_index);
 			wqe->state = wqe_state_done;
 			wqe->status = IB_WC_SUCCESS;
-			// TODO why?? why not just treat the same as a
-			// successful wqe and go to next wqe?
+
+			/* TODO why not just treat this the same as
+			 * a successful wqe and go to the next wqe?
+			 * __rxe_do_task probably shouldn't be used
+			 * here; it reenters the completion task,
+			 * which may already be running
+			 */
 			__rxe_do_task(&qp->comp.task);
 			goto again;
 		}
@@ -789,7 +789,7 @@ int rxe_requester(void *arg)
 			goto exit;
 		}
 
-		wqe->status = IB_WC_LOC_PROT_ERR;	// ?? FIXME
+		wqe->status = IB_WC_LOC_PROT_ERR;
 		goto err;
 	}
 
@@ -797,17 +797,12 @@ int rxe_requester(void *arg)
 
 	goto next_wqe;
 
-	// TODO this can be cleaned up
 err:
 	/* we come here if an error occured while processing
 	 * a send wqe. The completer will put the qp in error
 	 * state and no more wqes will be processed unless
-	 * the qp is cleaned up and restarted. We do not want
-	 * to be called again */
+	 * the qp is cleaned up and restarted. */
 	wqe->state = wqe_state_error;
-	// ?? we want to force the qp into error state before
-	// anyone else has a chance to process another wqe but
-	// this could collide with an already running completer
 	__rxe_do_task(&qp->comp.task);
 	ret = -EAGAIN;
 	goto done;
@@ -816,14 +811,12 @@ int rxe_requester(void *arg)
 	/* we come here if either there are no more wqes in the send
 	 * queue or we are blocked waiting for some resource or event.
 	 * The current wqe will be restarted or new wqe started when
-	 * there is work to do. */
+	 * there is something to do. */
 	ret = -EAGAIN;
 	goto done;
 
 again:
-	/* we come here if we are done with the current wqe but want to
-	 * get called again. Mostly we loop back to next wqe so should
-	 * be all one way or the other */
+	/* we come here if we need to exit and reenter the task */
 	ret = 0;
 	goto done;
 
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 49cd77cd6264..0bfea50505d1 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -417,7 +417,9 @@ static enum resp_states check_length(struct rxe_qp *qp,
 static enum resp_states check_rkey(struct rxe_qp *qp,
 				   struct rxe_pkt_info *pkt)
 {
-	struct rxe_mr *mr = NULL;
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+	struct rxe_mr *mr;
+	struct rxe_mw *mw;
 	u64 va;
 	u32 rkey;
 	u32 resid;
@@ -425,6 +427,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	int mtu = qp->mtu;
 	enum resp_states state;
 	int access;
+	unsigned long flags;
 
 	if (pkt->mask & (RXE_READ_MASK | RXE_WRITE_MASK)) {
 		if (pkt->mask & RXE_RETH_MASK) {
@@ -432,13 +435,16 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 			qp->resp.rkey = reth_rkey(pkt);
 			qp->resp.resid = reth_len(pkt);
 			qp->resp.length = reth_len(pkt);
+			qp->resp.offset = 0;
 		}
-		access = (pkt->mask & RXE_READ_MASK) ? IB_ACCESS_REMOTE_READ
-						     : IB_ACCESS_REMOTE_WRITE;
+		access = (pkt->mask & RXE_READ_MASK)
+				? IB_ACCESS_REMOTE_READ
+				: IB_ACCESS_REMOTE_WRITE;
 	} else if (pkt->mask & RXE_ATOMIC_MASK) {
 		qp->resp.va = atmeth_va(pkt);
 		qp->resp.rkey = atmeth_rkey(pkt);
 		qp->resp.resid = sizeof(u64);
+		qp->resp.offset = 0;
 		access = IB_ACCESS_REMOTE_ATOMIC;
 	} else {
 		return RESPST_EXECUTE;
@@ -456,18 +462,31 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	resid	= qp->resp.resid;
 	pktlen	= payload_size(pkt);
 
-	mr = lookup_mr(qp->pd, access, rkey, lookup_remote);
-	if (!mr) {
-		state = RESPST_ERR_RKEY_VIOLATION;
-		goto err;
-	}
+	if ((rkey & IS_MW) && (mw = rxe_pool_get_key(&rxe->mw_pool, &rkey))) {
+		spin_lock_irqsave(&mw->lock, flags);
+		if (rxe_mw_check_access(qp, mw, access, va, resid)) {
+			spin_unlock_irqrestore(&mw->lock, flags);
+			rxe_drop_ref(mw);
+			state = RESPST_ERR_RKEY_VIOLATION;
+			goto err;
+		}
 
-	if (unlikely(mr->state == RXE_MEM_STATE_FREE)) {
-		state = RESPST_ERR_RKEY_VIOLATION;
-		goto err;
-	}
+		mr = mw->mr;
+		rxe_add_ref(mr);
+
+		if (mw->access & IB_ZERO_BASED)
+			qp->resp.offset = mw->addr;
 
-	if (mr_check_range(mr, va, resid)) {
+		spin_unlock_irqrestore(&mw->lock, flags);
+		rxe_drop_ref(mw);
+	} else if ((mr = rxe_pool_get_key(&rxe->mr_pool, &rkey)) &&
+		   (mr->rkey == rkey)) {
+		if (rxe_mr_check_access(qp, mr, access, va, resid)) {
+			state = RESPST_ERR_RKEY_VIOLATION;
+			goto err;
+		}
+	} else {
+		pr_err("no MR/MW found with rkey = 0x%08x\n", rkey);
 		state = RESPST_ERR_RKEY_VIOLATION;
 		goto err;
 	}
@@ -525,8 +544,8 @@ static enum resp_states write_data_in(struct rxe_qp *qp,
 	int	err;
 	int data_len = payload_size(pkt);
 
-	err = rxe_mr_copy(qp->resp.mr, qp->resp.va, payload_addr(pkt),
-			   data_len, to_mr_obj, NULL);
+	err = rxe_mr_copy(qp->resp.mr, qp->resp.va + qp->resp.offset,
+			payload_addr(pkt), data_len, to_mr_obj, NULL);
 	if (err) {
 		rc = RESPST_ERR_RKEY_VIOLATION;
 		goto out;
@@ -545,17 +564,11 @@ static DEFINE_SPINLOCK(atomic_ops_lock);
 static enum resp_states process_atomic(struct rxe_qp *qp,
 				       struct rxe_pkt_info *pkt)
 {
-	u64 iova = atmeth_va(pkt);
 	u64 *vaddr;
 	enum resp_states ret;
 	struct rxe_mr *mr = qp->resp.mr;
 
-	if (mr->state != RXE_MEM_STATE_VALID) {
-		ret = RESPST_ERR_RKEY_VIOLATION;
-		goto out;
-	}
-
-	vaddr = iova_to_vaddr(mr, iova, sizeof(u64));
+	vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset, sizeof(u64));
 
 	/* check vaddr is 8 bytes aligned. */
 	if (!vaddr || (uintptr_t)vaddr & 7) {
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index 2fe8433d0801..b4855d3ea6f4 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -210,6 +210,7 @@ struct rxe_resp_info {
 
 	/* RDMA read / atomic only */
 	u64			va;
+	u64			offset;
 	struct rxe_mr		*mr;
 	u32			resid;
 	u32			rkey;
@@ -289,7 +290,8 @@ struct rxe_qp {
 	struct execute_work	cleanup_work;
 };
 
-enum rxe_mr_state {
+/* common state for rxe_mr and rxe_mw */
+enum rxe_mem_state {
 	RXE_MEM_STATE_ZOMBIE,
 	RXE_MEM_STATE_INVALID,
 	RXE_MEM_STATE_FREE,
@@ -325,7 +327,7 @@ struct rxe_mr {
 	u32			lkey;
 	u32			rkey;
 
-	enum rxe_mr_state	state;
+	enum rxe_mem_state	state;
 	enum rxe_mr_type	type;
 	u64			va;
 	u64			iova;
@@ -349,24 +351,21 @@ struct rxe_mr {
 	struct rxe_map		**map;
 };
 
-enum rxe_mw_state {
-	RXE_MW_STATE_INVALID,
-	RXE_MW_STATE_FREE,
-	RXE_MW_STATE_VALID,
-};
-
 enum rxe_send_flags {
 	/* flag indicates bind call came through verbs API */
 	RXE_BIND_MW		= (1 << 0),
 };
 
+/* use high order bit to separate MW and MR rkeys */
+#define IS_MW	(1 << 31)
+
 struct rxe_mw {
 	struct rxe_pool_entry	pelem;
 	struct ib_mw		ibmw;
 	struct rxe_mr		*mr;
 	struct rxe_qp		*qp;	/* type 2B only */
 	spinlock_t		lock;
-	enum rxe_mw_state	state;
+	enum rxe_mem_state	state;
 	u32			access;
 	u64			addr;
 	u64			length;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread
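
One detail of the check_rkey() changes above that is easy to miss: the
high bit of the rkey (IS_MW) is what routes the lookup to the MW pool
instead of the MR pool, and for a zero based window the VA carried in the
packet is an offset into the window, so the responder folds mw->addr into
qp->resp.offset before the data copy. A small sketch of that translation,
mirroring the driver structs; the helper name is illustrative and not part
of the patch.

	/* illustrative only: how check_rkey()/write_data_in() above
	 * combine the wire va with qp->resp.offset for an MW access */
	static u64 mw_va_to_mr_iova(const struct rxe_mw *mw, u64 wire_va)
	{
		if (mw->access & IB_ZERO_BASED)
			/* wire va is an offset from the start of the window */
			return mw->addr + wire_va;

		/* VA based windows carry the MR iova directly */
		return wire_va;
	}

For a plain MR access qp->resp.offset stays zero, so the same
qp->resp.va + qp->resp.offset arithmetic works for both cases.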

* [PATCH 19/20] fixed white space issues
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (17 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 18/20] cleanup Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-15  4:58 ` [PATCH 20/20] fixed checkpatch issues for all files in rxe Bob Pearson
  19 siblings, 0 replies; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Checkpatch reported some trailing whitespace. This patch removes it.

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe_mw.c    | 4 ++--
 drivers/infiniband/sw/rxe/rxe_verbs.h | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index 6b998527b34b..ae7f5710f7dd 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -370,7 +370,7 @@ static void do_invalidate_mw(struct rxe_mw *mw)
 	mw->mr = NULL;
 
 	mw->access = 0;
-	mw->addr = 0; 
+	mw->addr = 0;
 	mw->length = 0;
 	mw->state = RXE_MEM_STATE_FREE;
 }
@@ -405,7 +405,7 @@ static void do_dealloc_mw(struct rxe_mw *mw)
 
 	mw->ibmw.pd = NULL;
 	mw->access = 0;
-	mw->addr = 0; 
+	mw->addr = 0;
 	mw->length = 0;
 	mw->state = RXE_MEM_STATE_INVALID;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index b4855d3ea6f4..c990654e396d 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -66,7 +66,7 @@ struct rxe_ucontext {
 };
 
 struct rxe_pd {
-	struct ib_pd            ibpd;
+	struct ib_pd		ibpd;
 	struct rxe_pool_entry	pelem;
 };
 
@@ -309,8 +309,8 @@ enum rxe_mr_type {
 #define RXE_BUF_PER_MAP		(PAGE_SIZE / sizeof(struct rxe_phys_buf))
 
 struct rxe_phys_buf {
-	u64      addr;
-	u64      size;
+	u64	 addr;
+	u64	 size;
 };
 
 struct rxe_map {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 20/20] fixed checkpatch issues for all files in rxe
  2020-08-15  4:58 Memory windows support for rxe Bob Pearson
                   ` (18 preceding siblings ...)
  2020-08-15  4:58 ` [PATCH 19/20] fixed white space issues Bob Pearson
@ 2020-08-15  4:58 ` Bob Pearson
  2020-08-16  5:29     ` kernel test robot
  19 siblings, 1 reply; 23+ messages in thread
From: Bob Pearson @ 2020-08-15  4:58 UTC (permalink / raw)
  To: linux-rdma; +Cc: Bob Pearson

Went through all the files in the rxe directory and fixed all issues
reported by checkpatch. Removed remaining debugging code. Added SPDX
headers.

Signed-off-by: Bob Pearson <rpearson@hpe.com>
---
 drivers/infiniband/sw/rxe/rxe.c             |  31 +---
 drivers/infiniband/sw/rxe/rxe.h             |  31 +---
 drivers/infiniband/sw/rxe/rxe_av.c          |  31 +---
 drivers/infiniband/sw/rxe/rxe_comp.c        |  54 ++----
 drivers/infiniband/sw/rxe/rxe_cq.c          |  31 +---
 drivers/infiniband/sw/rxe/rxe_hdr.h         |  67 +++-----
 drivers/infiniband/sw/rxe/rxe_hw_counters.c |  61 ++-----
 drivers/infiniband/sw/rxe/rxe_hw_counters.h |  31 +---
 drivers/infiniband/sw/rxe/rxe_icrc.c        |  31 +---
 drivers/infiniband/sw/rxe/rxe_loc.h         |  38 +----
 drivers/infiniband/sw/rxe/rxe_mcast.c       |  31 +---
 drivers/infiniband/sw/rxe/rxe_mmap.c        |  31 +---
 drivers/infiniband/sw/rxe/rxe_mr.c          |  59 ++-----
 drivers/infiniband/sw/rxe/rxe_mw.c          | 176 ++++++++------------
 drivers/infiniband/sw/rxe/rxe_net.c         |  37 +---
 drivers/infiniband/sw/rxe/rxe_net.h         |  31 +---
 drivers/infiniband/sw/rxe/rxe_opcode.c      |  33 +---
 drivers/infiniband/sw/rxe/rxe_opcode.h      |  31 +---
 drivers/infiniband/sw/rxe/rxe_param.h       |  31 +---
 drivers/infiniband/sw/rxe/rxe_pool.c        |  43 +----
 drivers/infiniband/sw/rxe/rxe_pool.h        |  31 +---
 drivers/infiniband/sw/rxe/rxe_qp.c          |  53 ++----
 drivers/infiniband/sw/rxe/rxe_queue.c       |  31 +---
 drivers/infiniband/sw/rxe/rxe_queue.h       |  31 +---
 drivers/infiniband/sw/rxe/rxe_recv.c        |  31 +---
 drivers/infiniband/sw/rxe/rxe_req.c         | 122 ++++++--------
 drivers/infiniband/sw/rxe/rxe_resp.c        | 170 +++++++++----------
 drivers/infiniband/sw/rxe/rxe_srq.c         |  31 +---
 drivers/infiniband/sw/rxe/rxe_sysfs.c       |  34 +---
 drivers/infiniband/sw/rxe/rxe_task.c        |  31 +---
 drivers/infiniband/sw/rxe/rxe_task.h        |  33 +---
 drivers/infiniband/sw/rxe/rxe_verbs.c       |  51 ++----
 drivers/infiniband/sw/rxe/rxe_verbs.h       |  34 +---
 33 files changed, 384 insertions(+), 1208 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
index 25bd25371f8e..97ed495c840e 100644
--- a/drivers/infiniband/sw/rxe/rxe.c
+++ b/drivers/infiniband/sw/rxe/rxe.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <rdma/rdma_netlink.h>
diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h
index fb07eed9e402..87a75943ac27 100644
--- a/drivers/infiniband/sw/rxe/rxe.h
+++ b/drivers/infiniband/sw/rxe/rxe.h
@@ -1,34 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
+ * drivers/infiniband/sw/rxe/rxe.h
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #ifndef RXE_H
diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c
index 81ee756c19b8..9ab524ae4517 100644
--- a/drivers/infiniband/sw/rxe/rxe_av.c
+++ b/drivers/infiniband/sw/rxe/rxe_av.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_av.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *	   Redistribution and use in source and binary forms, with or
- *	   without modification, are permitted provided that the following
- *	   conditions are met:
- *
- *		- Redistributions of source code must retain the above
- *		  copyright notice, this list of conditions and the following
- *		  disclaimer.
- *
- *		- Redistributions in binary form must reproduce the above
- *		  copyright notice, this list of conditions and the following
- *		  disclaimer in the documentation and/or other materials
- *		  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include "rxe.h"
diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index ed9e27eeaadd..681e2b9811f2 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_comp.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/skbuff.h>
@@ -563,21 +538,13 @@ int rxe_completer(void *arg)
 	struct sk_buff *skb = NULL;
 	struct rxe_pkt_info *pkt = NULL;
 	enum comp_state state;
-	int entered;
 
 	rxe_add_ref(qp);
 
-	// this code is 'guaranteed' to never be entered more
-	// than once. Check to make sure that this is the case
-	entered = atomic_inc_return(&qp->comp.task.entered);
-	if (entered > 1) {
-		pr_err("rxe_completer: entered %d times\n", entered);
-	}
-
 	if (!qp->valid || qp->req.state == QP_STATE_ERROR ||
 	    qp->req.state == QP_STATE_RESET) {
-			rxe_drain_resp_pkts(qp, qp->valid &&
-				    qp->req.state == QP_STATE_ERROR);
+		rxe_drain_resp_pkts(qp, qp->valid &&
+			    qp->req.state == QP_STATE_ERROR);
 		goto exit;
 	}
 
@@ -699,9 +666,8 @@ int rxe_completer(void *arg)
 			 */
 
 			/* there is nothing to retry in this case */
-			if (!wqe || (wqe->state == wqe_state_posted)) {
+			if (!wqe || (wqe->state == wqe_state_posted))
 				goto exit;
-			}
 
 			/* if we've started a retry, don't start another
 			 * retry sequence, unless this is a timeout.
@@ -792,17 +758,17 @@ int rxe_completer(void *arg)
 
 exit:
 	/* we come here if we are done with processing and want the task to
-	 * exit from the loop calling us */
+	 * exit from the loop calling us
+	 */
 	WARN_ON_ONCE(skb);
-	atomic_dec(&qp->comp.task.entered);
 	rxe_drop_ref(qp);
 	return -EAGAIN;
 
 done:
 	/* we come here if we have processed a packet and we want
-	 * to be called again to see if there is anything else to do */
+	 * to be called again to see if there is anything else to do
+	 */
 	WARN_ON_ONCE(skb);
-	atomic_dec(&qp->comp.task.entered);
 	rxe_drop_ref(qp);
 	return 0;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index ad3090131126..20e4d8bfd2e7 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_cq.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *	   Redistribution and use in source and binary forms, with or
- *	   without modification, are permitted provided that the following
- *	   conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 #include <linux/vmalloc.h>
 #include "rxe.h"
diff --git a/drivers/infiniband/sw/rxe/rxe_hdr.h b/drivers/infiniband/sw/rxe/rxe_hdr.h
index ce003666b800..3edd49bb331c 100644
--- a/drivers/infiniband/sw/rxe/rxe_hdr.h
+++ b/drivers/infiniband/sw/rxe/rxe_hdr.h
@@ -1,34 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
+ * drivers/infiniband/sw/rxe/rxe_hdr.h
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #ifndef RXE_HDR_H
@@ -83,9 +58,9 @@ static inline struct sk_buff *PKT_TO_SKB(struct rxe_pkt_info *pkt)
 #define RXE_ICRC_SIZE		(4)
 #define RXE_MAX_HDR_LENGTH	(80)
 
-/******************************************************************************
+/*
  * Base Transport Header
- ******************************************************************************/
+ */
 struct rxe_bth {
 	u8			opcode;
 	u8			flags;
@@ -450,9 +425,9 @@ static inline void bth_init(struct rxe_pkt_info *pkt, u8 opcode, int se,
 	bth->apsn = cpu_to_be32(psn);
 }
 
-/******************************************************************************
+/*
  * Reliable Datagram Extended Transport Header
- ******************************************************************************/
+ */
 struct rxe_rdeth {
 	__be32			een;
 };
@@ -485,9 +460,9 @@ static inline void rdeth_set_een(struct rxe_pkt_info *pkt, u32 een)
 		+ rxe_opcode[pkt->opcode].offset[RXE_RDETH], een);
 }
 
-/******************************************************************************
+/*
  * Datagram Extended Transport Header
- ******************************************************************************/
+ */
 struct rxe_deth {
 	__be32			qkey;
 	__be32			sqp;
@@ -548,9 +523,9 @@ static inline void deth_set_sqp(struct rxe_pkt_info *pkt, u32 sqp)
 		+ rxe_opcode[pkt->opcode].offset[RXE_DETH], sqp);
 }
 
-/******************************************************************************
+/*
  * RDMA Extended Transport Header
- ******************************************************************************/
+ */
 struct rxe_reth {
 	__be64			va;
 	__be32			rkey;
@@ -635,9 +610,9 @@ static inline void reth_set_len(struct rxe_pkt_info *pkt, u32 len)
 		+ rxe_opcode[pkt->opcode].offset[RXE_RETH], len);
 }
 
-/******************************************************************************
+/*
  * Atomic Extended Transport Header
- ******************************************************************************/
+ */
 struct rxe_atmeth {
 	__be64			va;
 	__be32			rkey;
@@ -749,9 +724,9 @@ static inline void atmeth_set_comp(struct rxe_pkt_info *pkt, u64 comp)
 		+ rxe_opcode[pkt->opcode].offset[RXE_ATMETH], comp);
 }
 
-/******************************************************************************
+/*
  * Ack Extended Transport Header
- ******************************************************************************/
+ */
 struct rxe_aeth {
 	__be32			smsn;
 };
@@ -829,9 +804,9 @@ static inline void aeth_set_msn(struct rxe_pkt_info *pkt, u32 msn)
 		+ rxe_opcode[pkt->opcode].offset[RXE_AETH], msn);
 }
 
-/******************************************************************************
+/*
  * Atomic Ack Extended Transport Header
- ******************************************************************************/
+ */
 struct rxe_atmack {
 	__be64			orig;
 };
@@ -862,9 +837,9 @@ static inline void atmack_set_orig(struct rxe_pkt_info *pkt, u64 orig)
 		+ rxe_opcode[pkt->opcode].offset[RXE_ATMACK], orig);
 }
 
-/******************************************************************************
+/*
  * Immediate Extended Transport Header
- ******************************************************************************/
+ */
 struct rxe_immdt {
 	__be32			imm;
 };
@@ -895,9 +870,9 @@ static inline void immdt_set_imm(struct rxe_pkt_info *pkt, __be32 imm)
 		+ rxe_opcode[pkt->opcode].offset[RXE_IMMDT], imm);
 }
 
-/******************************************************************************
+/*
  * Invalidate Extended Transport Header
- ******************************************************************************/
+ */
 struct rxe_ieth {
 	__be32			rkey;
 };
diff --git a/drivers/infiniband/sw/rxe/rxe_hw_counters.c b/drivers/infiniband/sw/rxe/rxe_hw_counters.c
index 636edb5f4cf4..d61e484f034e 100644
--- a/drivers/infiniband/sw/rxe/rxe_hw_counters.c
+++ b/drivers/infiniband/sw/rxe/rxe_hw_counters.c
@@ -1,54 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
- * Copyright (c) 2017 Mellanox Technologies Ltd. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
+ * drivers/infiniband/sw/rxe/rxe_hw_counters.c
  *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
+ * Copyright (c) 2017 Mellanox Technologies Ltd. All rights reserved.
  */
 
 #include "rxe.h"
 #include "rxe_hw_counters.h"
 
 static const char * const rxe_counter_name[] = {
-	[RXE_CNT_SENT_PKTS]           =  "sent_pkts",
-	[RXE_CNT_RCVD_PKTS]           =  "rcvd_pkts",
-	[RXE_CNT_DUP_REQ]             =  "duplicate_request",
-	[RXE_CNT_OUT_OF_SEQ_REQ]      =  "out_of_seq_request",
-	[RXE_CNT_RCV_RNR]             =  "rcvd_rnr_err",
-	[RXE_CNT_SND_RNR]             =  "send_rnr_err",
-	[RXE_CNT_RCV_SEQ_ERR]         =  "rcvd_seq_err",
-	[RXE_CNT_COMPLETER_SCHED]     =  "ack_deferred",
-	[RXE_CNT_RETRY_EXCEEDED]      =  "retry_exceeded_err",
-	[RXE_CNT_RNR_RETRY_EXCEEDED]  =  "retry_rnr_exceeded_err",
-	[RXE_CNT_COMP_RETRY]          =  "completer_retry_err",
-	[RXE_CNT_SEND_ERR]            =  "send_err",
-	[RXE_CNT_LINK_DOWNED]         =  "link_downed",
-	[RXE_CNT_RDMA_SEND]           =  "rdma_sends",
-	[RXE_CNT_RDMA_RECV]           =  "rdma_recvs",
+	[RXE_CNT_SENT_PKTS]	      =	 "sent_pkts",
+	[RXE_CNT_RCVD_PKTS]	      =	 "rcvd_pkts",
+	[RXE_CNT_DUP_REQ]	      =	 "duplicate_request",
+	[RXE_CNT_OUT_OF_SEQ_REQ]      =	 "out_of_seq_request",
+	[RXE_CNT_RCV_RNR]	      =	 "rcvd_rnr_err",
+	[RXE_CNT_SND_RNR]	      =	 "send_rnr_err",
+	[RXE_CNT_RCV_SEQ_ERR]	      =	 "rcvd_seq_err",
+	[RXE_CNT_COMPLETER_SCHED]     =	 "ack_deferred",
+	[RXE_CNT_RETRY_EXCEEDED]      =	 "retry_exceeded_err",
+	[RXE_CNT_RNR_RETRY_EXCEEDED]  =	 "retry_rnr_exceeded_err",
+	[RXE_CNT_COMP_RETRY]	      =	 "completer_retry_err",
+	[RXE_CNT_SEND_ERR]	      =	 "send_err",
+	[RXE_CNT_LINK_DOWNED]	      =	 "link_downed",
+	[RXE_CNT_RDMA_SEND]	      =	 "rdma_sends",
+	[RXE_CNT_RDMA_RECV]	      =	 "rdma_recvs",
 };
 
 int rxe_ib_get_hw_stats(struct ib_device *ibdev,
diff --git a/drivers/infiniband/sw/rxe/rxe_hw_counters.h b/drivers/infiniband/sw/rxe/rxe_hw_counters.h
index 72c0d63c79e0..a3c26f66a76c 100644
--- a/drivers/infiniband/sw/rxe/rxe_hw_counters.h
+++ b/drivers/infiniband/sw/rxe/rxe_hw_counters.h
@@ -1,33 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
- * Copyright (c) 2017 Mellanox Technologies Ltd. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
+ * drivers/infiniband/sw/rxe/rxe_hw_counters.h
  *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
+ * Copyright (c) 2017 Mellanox Technologies Ltd. All rights reserved.
  */
 
 #ifndef RXE_HW_COUNTERS_H
diff --git a/drivers/infiniband/sw/rxe/rxe_icrc.c b/drivers/infiniband/sw/rxe/rxe_icrc.c
index 39e0be31aab1..d02eca260053 100644
--- a/drivers/infiniband/sw/rxe/rxe_icrc.c
+++ b/drivers/infiniband/sw/rxe/rxe_icrc.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_icrc.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include "rxe.h"
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 2421ca311845..e0f566b48c71 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -1,34 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
+ * drivers/infiniband/sw/rxe/rxe_loc.h
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #ifndef RXE_LOC_H
@@ -145,8 +120,8 @@ int rxe_mcast_delete(struct rxe_dev *rxe, union ib_gid *mgid);
 /* rxe_qp.c */
 int rxe_qp_chk_init(struct rxe_dev *rxe, struct ib_qp_init_attr *init);
 
-int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
-		     struct ib_qp_init_attr *init,
+int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp,
+		     struct rxe_pd *pd, struct ib_qp_init_attr *init,
 		     struct rxe_create_qp_resp __user *uresp,
 		     struct ib_pd *ibpd, struct ib_udata *udata);
 
@@ -219,7 +194,8 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
 
 int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
-		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata);
+		      struct rxe_modify_srq_cmd *ucmd,
+		      struct ib_udata *udata);
 
 void rxe_dealloc(struct ib_device *ib_dev);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
index 522a7942c56c..244e47759aa2 100644
--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
+++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_mcast.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *	   Redistribution and use in source and binary forms, with or
- *	   without modification, are permitted provided that the following
- *	   conditions are met:
- *
- *		- Redistributions of source code must retain the above
- *		  copyright notice, this list of conditions and the following
- *		  disclaimer.
- *
- *		- Redistributions in binary form must reproduce the above
- *		  copyright notice, this list of conditions and the following
- *		  disclaimer in the documentation and/or other materials
- *		  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include "rxe.h"
diff --git a/drivers/infiniband/sw/rxe/rxe_mmap.c b/drivers/infiniband/sw/rxe/rxe_mmap.c
index 7887f623f62c..0640578c1ae3 100644
--- a/drivers/infiniband/sw/rxe/rxe_mmap.c
+++ b/drivers/infiniband/sw/rxe/rxe_mmap.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_mmap.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/module.h>
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index ce64d4101888..e49251ed38a4 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -1,41 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_mr.c
+ *
+ * Copyright (c) 2020 Hewlett Packard Enterprise, Inc. All rights reserved.
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include "rxe.h"
 #include "rxe_loc.h"
 
-/* choose a unique non zero random number for lkey
- * use high order bit to indicate MR vs MW */
+/*
+ * choose a unique non zero random number for lkey
+ * use high order bit to indicate MR vs MW
+ */
 void rxe_set_mr_lkey(struct rxe_mr *mr)
 {
 	u32 lkey;
@@ -82,7 +60,6 @@ static void rxe_mr_init(int access, struct rxe_mr *mr)
 	else
 		mr->ibmr.rkey = 0;
 
-	// TODO we shouldn't carry two copies
 	mr->lkey		= mr->ibmr.lkey;
 	mr->rkey		= mr->ibmr.rkey;
 	mr->state		= RXE_MEM_STATE_INVALID;
@@ -319,7 +296,8 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
 	return addr;
 }
 
-/* copy data from a range (vaddr, vaddr+length-1) to or from
+/*
+ * copy data from a range (vaddr, vaddr+length-1) to or from
  * a mr object starting at iova. Compute incremental value of
  * crc32 if crcp is not zero. caller must hold a reference to mr
  */
@@ -430,7 +408,8 @@ static struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 lkey)
 	return mr;
 }
 
-/* copy data in or out of a wqe, i.e. sg list
+/*
+ * copy data in or out of a wqe, i.e. sg list
  * under the control of a dma descriptor
  */
 int copy_data(
@@ -559,12 +538,9 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
 	return 0;
 }
 
+/* this is a placeholder. there is lots more to do */
 int rxe_invalidate_mr(struct rxe_qp *qp, struct rxe_mr *mr)
 {
-	// much more TODO here, can fail
-	// mw is closer to what is needed
-	// but for another day
-
 	mr->state = RXE_MEM_STATE_FREE;
 
 	return 0;
@@ -577,22 +553,19 @@ int rxe_mr_check_access(struct rxe_qp *qp, struct rxe_mr *mr,
 	struct rxe_pd *pd = to_rpd(mr->ibmr.pd);
 
 	if (unlikely(mr->state != RXE_MEM_STATE_VALID)) {
-		pr_err("attempt to access a MR that is"
-			" not in the valid state\n");
+		pr_err("attempt to access a MR that is not in the valid state\n");
 		return -EINVAL;
 	}
 
 	/* C10-56 */
 	if (unlikely(pd != qp->pd)) {
-		pr_err("attempt to access a MR with a"
-			" different PD than the QP\n");
+		pr_err("attempt to access a MR with a different PD than the QP\n");
 		return -EINVAL;
 	}
 
 	/* C10-57 */
 	if (unlikely(access && !(access & mr->access))) {
-		pr_err("attempt to access a MR that does"
-			" not have the required access rights\n");
+		pr_err("attempt to access a MR that does not have the required access rights\n");
 		return -EINVAL;
 	}
 
diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
index ae7f5710f7dd..735b83a7eb49 100644
--- a/drivers/infiniband/sw/rxe/rxe_mw.c
+++ b/drivers/infiniband/sw/rxe/rxe_mw.c
@@ -1,52 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_mw.c
+ *
  * Copyright (c) 2020 Hewlett Packard Enterprise, Inc. All rights reserved.
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include "rxe.h"
 #include "rxe_loc.h"
 
-/* choose a unique non zero random number for rkey
- * use high order bit to indicate MR vs MW */
-void rxe_set_mw_rkey(struct rxe_mw *mw)
+static void set_mw_rkey(struct rxe_mw *mw)
 {
 	u32 rkey;
 	int tries = 0;
 
+	/*
+	 * there is a very rare chance that the RNG will produce all
+	 * zeros or a duplicate of an existing MW rkey; in that case
+	 * just try again
+	 */
 	do {
 		get_random_bytes(&rkey, sizeof(rkey));
 		rkey |= IS_MW;
 		if (likely((rkey & ~IS_MW) &&
-			   (rxe_add_key(mw, &rkey) == 0)))
+		    (rxe_add_key(mw, &rkey) == 0)))
 			return;
 	} while (tries++ < 10);
 	pr_err("unable to get random rkey for mw\n");
@@ -83,7 +61,8 @@ struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 		mw->state	= RXE_MEM_STATE_FREE;
 		break;
 	default:
-		pr_err("attempt to allocate MW with unknown type\n");
+		pr_err_once("attempt to allocate MW with bad type = %d\n",
+				type);
 		ret = -EINVAL;
 		goto err3;
 	}
@@ -91,15 +70,15 @@ struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 	rxe_add_index(mw);
 	index = mw->pelem.index;
 
-	/* o10-37.2.32: */
-	rxe_set_mw_rkey(mw);
+	/* o10-37.2.32 */
+	set_mw_rkey(mw);
 
 	mw->qp			= NULL;
 	mw->mr			= NULL;
 	mw->addr		= 0;
 	mw->length		= 0;
-        mw->ibmw.pd		= ibpd;
-        mw->ibmw.type		= type;
+	mw->ibmw.pd		= ibpd;
+	mw->ibmw.type		= type;
 
 	spin_lock_init(&mw->lock);
 
@@ -120,44 +99,39 @@ struct ib_mw *rxe_alloc_mw(struct ib_pd *ibpd, enum ib_mw_type type,
 	return ERR_PTR(ret);
 }
 
-/* Check the rules for bind MW oepration. */
 static int check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 			 struct rxe_mw *mw, struct rxe_mr *mr)
 {
-	/* check to see if bind operation came through
-	 * ibv_bind_mw verbs API. */
+	/* check to see if bind operation came through verbs API. */
 	switch (mw->ibmw.type) {
 	case IB_MW_TYPE_1:
-		/* o10-37.2.34: */
+		/* o10-37.2.34 */
 		if (unlikely(!(wqe->wr.wr.umw.flags & RXE_BIND_MW))) {
 			pr_err("attempt to bind type 1 MW with send WR\n");
 			return -EINVAL;
 		}
 		break;
 	case IB_MW_TYPE_2:
-		/* o10-37.2.35: */
+		/* o10-37.2.35 */
 		if (unlikely(wqe->wr.wr.umw.flags & RXE_BIND_MW)) {
-			pr_err("attempt to bind type 2 MW with verbs API\n");
+			pr_err_once("attempt to bind type 2 MW with verbs API\n");
 			return -EINVAL;
 		}
 
-		/* C10-72: */
+		/* C10-72 */
 		if (unlikely(qp->pd != to_rpd(mw->ibmw.pd))) {
-			pr_err("attempt to bind type 2 MW with qp"
-				" with different PD\n");
+			pr_err_once("attempt to bind type 2 MW on a qp with a different PD\n");
 			return -EINVAL;
 		}
 
-		/* o10-37.2.40: */
+		/* o10-37.2.40 */
 		if (unlikely(wqe->wr.wr.umw.length == 0)) {
-			pr_err("attempt to invalidate type 2 MW by"
-				" binding with zero length\n");
+			pr_err_once("attempt to invalidate type 2 MW by binding with zero length\n");
 			return -EINVAL;
 		}
 
 		if (unlikely(!mr)) {
-			pr_err("attempt to invalidate type 2 MW by"
-				" binding to NULL mr\n");
+			pr_err_once("attempt to bind type 2 MW to NULL MR\n");
 			return -EINVAL;
 		}
 		break;
@@ -167,20 +141,19 @@ static int check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 
 	if (unlikely((mw->ibmw.type == IB_MW_TYPE_1) &&
 			(mw->state != RXE_MEM_STATE_VALID))) {
-		pr_err("attempt to bind a type 1 MW not in the"
-			" valid state\n");
+		pr_err_once("attempt to bind an invalid type 1 MW\n");
 		return -EINVAL;
 	}
 
-	/* o10-36.2.2: */
+	/* o10-36.2.2 */
 	if (unlikely((mw->access & IB_ZERO_BASED) &&
 			(mw->ibmw.type == IB_MW_TYPE_1))) {
-		pr_err("attempt to bind a zero based type 1 MW\n");
+		pr_err_once("attempt to bind a zero based type 1 MW\n");
 		return -EINVAL;
 	}
 
 	if ((wqe->wr.wr.umw.rkey & 0xff) == (mw->ibmw.rkey & 0xff)) {
-		pr_err("attempt to bind MW with same key\n");
+		pr_err_once("attempt to bind a MW with the same key\n");
 		return -EINVAL;
 	}
 
@@ -189,47 +162,42 @@ static int check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 		return 0;
 
 	if (unlikely(mr->access & IB_ZERO_BASED)) {
-		pr_err("attempt to bind MW to zero based MR\n");
+		pr_err_once("attempt to bind MW to zero based MR\n");
 		return -EINVAL;
 	}
 
-	/* o10-37.2.30: */
+	/* o10-37.2.30 */
 	if (unlikely((mw->ibmw.type == IB_MW_TYPE_2) &&
 			(mw->state != RXE_MEM_STATE_FREE))) {
-		pr_err("attempt to bind a type 2 MW not in the"
-			" free state\n");
+		pr_err_once("attempt to bind a type 2 MW that is not free\n");
 		return -EINVAL;
 	}
 
-	/* C10-73: */
+	/* C10-73 */
 	if (unlikely(!(mr->access & IB_ACCESS_MW_BIND))) {
-		pr_err("attempt to bind an MW to an MR without"
-			" bind access\n");
+		pr_err_once("attempt to bind an MW to an MR without bind access\n");
 		return -EINVAL;
 	}
 
-	/* C10-74: */
+	/* C10-74 */
 	if (unlikely((mw->access & (IB_ACCESS_REMOTE_WRITE |
 				    IB_ACCESS_REMOTE_ATOMIC)) &&
 	    !(mr->access & IB_ACCESS_LOCAL_WRITE))) {
-		pr_err("attempt to bind an MW with write/atomic"
-			" access to an MR without local write access\n");
+		pr_err_once("attempt to bind an MW with write/atomic access to an MR without local write access\n");
 		return -EINVAL;
 	}
 
-	/* C10-75: */
+	/* C10-75 */
 	if (mw->access & IB_ZERO_BASED) {
 		if (unlikely(wqe->wr.wr.umw.length > mr->length)) {
-			pr_err("attempt to bind a ZB MW outside"
-				" of the MR\n");
+			pr_err_once("attempt to bind a ZB MW outside of the MR\n");
 			return -EINVAL;
 		}
 	} else {
 		if (unlikely((wqe->wr.wr.umw.addr < mr->iova) ||
 		    ((wqe->wr.wr.umw.addr + wqe->wr.wr.umw.length) >
 		     (mr->iova + mr->length)))) {
-			pr_err("attempt to bind a VA MW outside"
-				" of the MR\n");
+			pr_err_once("attempt to bind a VA MW outside of the MR\n");
 			return -EINVAL;
 		}
 	}
@@ -243,47 +211,48 @@ static int do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	int ret;
 	u32 rkey;
 	u32 new_rkey;
+	struct rxe_mw *duplicate_mw;
+	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 
-	/* key part of new rkey is provided by user for type 2
-	 * and ibv_bind_mw() for type 1 MWs */
+	/*
+	 * the key part of the new rkey is provided by the user for
+	 * type 2 MWs and by ibv_bind_mw() for type 1 MWs.
+	 * there is a very rare chance that the new rkey will
+	 * collide with an existing MW; return an error if this
+	 * occurs
+	 */
 	rkey = mw->ibmw.rkey;
-	rxe_drop_key(mw);
 	new_rkey = (rkey & 0xffffff00) | (wqe->wr.wr.umw.rkey & 0x000000ff);
-	ret = rxe_add_key(mw, &new_rkey);
-	if (ret) {
-		/* this should never happen */
-		pr_err("shouldn't happen unable to set new rkey\n");
-		/* try to put back the old one */
-		rxe_add_key(mw, &rkey);
-		return ret;
+
+	duplicate_mw = rxe_get_key(rxe, &new_rkey);
+	if (duplicate_mw) {
+		pr_err_once("new MW key is a duplicate, try another\n");
+		rxe_drop_ref(duplicate_mw);
+		return -EINVAL;
 	}
 
+	rxe_drop_key(mw);
+	rxe_add_key(mw, &new_rkey);
+
 	mw->access = wqe->wr.wr.umw.access;
 	mw->state = RXE_MEM_STATE_VALID;
 	mw->addr = wqe->wr.wr.umw.addr;
 	mw->length = wqe->wr.wr.umw.length;
 
-	/* get rid of existing MR if any, type 1 only */
 	if (mw->mr) {
 		rxe_drop_ref(mw->mr);
 		atomic_dec(&mw->mr->num_mw);
 		mw->mr = NULL;
 	}
 
-	/* if length != 0 bind to new MR */
 	if (mw->length) {
 		mw->mr = mr;
 		atomic_inc(&mr->num_mw);
 		rxe_add_ref(mr);
 	}
 
-	/* remember qp if type 2, cleared by invalidate
-	 * this is weak since qp can go away legally
-	 * only used to compare with qp used to perform
-	 * memory ops */
-	if (mw->ibmw.type == IB_MW_TYPE_2) {
+	if (mw->ibmw.type == IB_MW_TYPE_2)
 		mw->qp = qp;
-	}
 
 	return 0;
 }
@@ -300,7 +269,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 		mw = rxe_pool_get_index(&rxe->mw_pool,
 					wqe->wr.wr.umw.mw_index);
 		if (!mw) {
-			pr_err("mw with index = %d not found\n",
+			pr_err_once("mw with index = %d not found\n",
 				wqe->wr.wr.umw.mw_index);
 			ret = -EINVAL;
 			goto err1;
@@ -308,7 +277,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 		mr = rxe_pool_get_index(&rxe->mr_pool,
 					wqe->wr.wr.umw.mr_index);
 		if (!mr && wqe->wr.wr.umw.length) {
-			pr_err("mr with index = %d not found\n",
+			pr_err_once("mr with index = %d not found\n",
 				wqe->wr.wr.umw.mr_index);
 			ret = -EINVAL;
 			goto err2;
@@ -326,12 +295,10 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 
 	spin_lock_irqsave(&mw->lock, flags);
 
-	/* check the rules */
 	ret = check_bind_mw(qp, wqe, mw, mr);
 	if (ret)
 		goto err3;
 
-	/* implement the change */
 	ret = do_bind_mw(qp, wqe, mw, mr);
 err3:
 	spin_unlock_irqrestore(&mw->lock, flags);
@@ -347,14 +314,13 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 static int check_invalidate_mw(struct rxe_qp *qp, struct rxe_mw *mw)
 {
 	if (unlikely(mw->state != RXE_MEM_STATE_VALID)) {
-		pr_warn("attempt to invalidate a MW that"
-			" is not valid\n");
+		pr_err_once("attempt to invalidate a MW that is not valid\n");
 		return -EINVAL;
 	}
 
-	/* o10-37.2.26: */
+	/* o10-37.2.26 */
 	if (unlikely(mw->ibmw.type == IB_MW_TYPE_1)) {
-		pr_err("attempt to invalidate a type 1 MW\n");
+		pr_err_once("attempt to invalidate a type 1 MW\n");
 		return -EINVAL;
 	}
 
@@ -434,41 +400,37 @@ int rxe_mw_check_access(struct rxe_qp *qp, struct rxe_mw *mw,
 	struct rxe_pd *pd = to_rpd(mw->ibmw.pd);
 
 	if (unlikely(mw->state != RXE_MEM_STATE_VALID)) {
-		pr_err("attempt to access a MW that is"
-			" not in the valid state\n");
+		pr_err_once("attempt to access a MW that is not in the valid state\n");
 		return -EINVAL;
 	}
 
 	/* C10-76.2.1 */
 	if (unlikely((mw->ibmw.type == IB_MW_TYPE_1) && (pd != qp->pd))) {
-		pr_err("attempt to access a type 1 MW with a"
-			" different PD than the QP\n");
+		pr_err_once("attempt to access a type 1 MW with a different PD than the QP\n");
 		return -EINVAL;
 	}
 
 	/* o10-37.2.43 */
 	if (unlikely((mw->ibmw.type == IB_MW_TYPE_2) && (mw->qp != qp))) {
-		pr_err("attempt to access a type 2 MW that is"
-			" associated with a different QP\n");
+		pr_err_once("attempt to access a type 2 MW that is associated with a different QP\n");
 		return -EINVAL;
 	}
 
 	/* C10-77 */
 	if (unlikely(access && !(access & mw->access))) {
-		pr_err("attempt to access a MW that does"
-			" not have the required access rights\n");
+		pr_err_once("attempt to access a MW that does not have the required access rights\n");
 		return -EINVAL;
 	}
 
 	if (mw->access & IB_ZERO_BASED) {
 		if (unlikely((va + resid) > mw->length)) {
-			pr_err("attempt to access a MW out of bounds\n");
+			pr_err_once("attempt to access a MW out of bounds\n");
 			return -EINVAL;
 		}
 	} else {
 		if (unlikely((va < mw->addr) ||
 			((va + resid) > (mw->addr + mw->length)))) {
-			pr_err("attempt to access a MW out of bounds\n");
+			pr_err_once("attempt to access a MW out of bounds\n");
 			return -EINVAL;
 		}
 	}
diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
index 0c3808611f95..e9b6f491e922 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.c
+++ b/drivers/infiniband/sw/rxe/rxe_net.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_net.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/skbuff.h>
@@ -120,7 +95,7 @@ static struct dst_entry *rxe_find_route6(struct net_device *ndev,
 	ndst = ipv6_stub->ipv6_dst_lookup_flow(sock_net(recv_sockets.sk6->sk),
 					       recv_sockets.sk6->sk, &fl6,
 					       NULL);
-	if (unlikely(IS_ERR(ndst))) {
+	if (IS_ERR(ndst)) {
 		pr_err_ratelimited("no route to %pI6\n", daddr);
 		return NULL;
 	}
@@ -333,8 +308,8 @@ static void prepare_ipv6_hdr(struct dst_entry *dst, struct sk_buff *skb,
 	ip6h		  = ipv6_hdr(skb);
 	ip6_flow_hdr(ip6h, prio, htonl(0));
 	ip6h->payload_len = htons(skb->len);
-	ip6h->nexthdr     = proto;
-	ip6h->hop_limit   = ttl;
+	ip6h->nexthdr	  = proto;
+	ip6h->hop_limit	  = ttl;
 	ip6h->daddr	  = *daddr;
 	ip6h->saddr	  = *saddr;
 	ip6h->payload_len = htons(skb->len - sizeof(*ip6h));
diff --git a/drivers/infiniband/sw/rxe/rxe_net.h b/drivers/infiniband/sw/rxe/rxe_net.h
index 2ca71d3d245c..1142dd4b47cb 100644
--- a/drivers/infiniband/sw/rxe/rxe_net.h
+++ b/drivers/infiniband/sw/rxe/rxe_net.h
@@ -1,34 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
+ * drivers/infiniband/sw/rxe/rxe_net.h
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #ifndef RXE_NET_H
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index d2f2092f0be5..31065d772f10 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_opcode.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <rdma/ib_pack.h>
@@ -397,7 +372,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		.name	= "IB_OPCODE_RC_SEND_ONLY_INV",
 		.mask	= RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK
 				| RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK
-				| RXE_END_MASK  | RXE_START_MASK,
+				| RXE_END_MASK	| RXE_START_MASK,
 		.length = RXE_BTH_BYTES + RXE_IETH_BYTES,
 		.offset = {
 			[RXE_BTH]	= 0,
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.h b/drivers/infiniband/sw/rxe/rxe_opcode.h
index 307604e9c78d..7a42bfab1d45 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.h
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.h
@@ -1,34 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
+ * drivers/infiniband/sw/rxe/rxe_opcode.h
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #ifndef RXE_OPCODE_H
diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
index 41e7b74efcbc..c24d90911434 100644
--- a/drivers/infiniband/sw/rxe/rxe_param.h
+++ b/drivers/infiniband/sw/rxe/rxe_param.h
@@ -1,34 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
+ * drivers/infiniband/sw/rxe/rxe_param.h
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #ifndef RXE_PARAM_H
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
index df3e2a514ce3..0f8b83f0965a 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.c
+++ b/drivers/infiniband/sw/rxe/rxe_pool.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_pool.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *	   Redistribution and use in source and binary forms, with or
- *	   without modification, are permitted provided that the following
- *	   conditions are met:
- *
- *		- Redistributions of source code must retain the above
- *		  copyright notice, this list of conditions and the following
- *		  disclaimer.
- *
- *		- Redistributions in binary form must reproduce the above
- *		  copyright notice, this list of conditions and the following
- *		  disclaimer in the documentation and/or other materials
- *		  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include "rxe.h"
@@ -38,7 +13,7 @@ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_UC] = {
 		.name		= "rxe-uc",
 		.size		= sizeof(struct rxe_ucontext),
-		.flags          = RXE_POOL_NO_ALLOC,
+		.flags		= RXE_POOL_NO_ALLOC,
 	},
 	[RXE_TYPE_PD] = {
 		.name		= "rxe-pd",
@@ -68,7 +43,7 @@ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
 	[RXE_TYPE_CQ] = {
 		.name		= "rxe-cq",
 		.size		= sizeof(struct rxe_cq),
-		.flags          = RXE_POOL_NO_ALLOC,
+		.flags		= RXE_POOL_NO_ALLOC,
 		.cleanup	= rxe_cq_cleanup,
 	},
 	[RXE_TYPE_MR] = {
@@ -366,10 +341,8 @@ void rxe_drop_key(void *arg)
 	struct rxe_pool *pool = elem->pool;
 	unsigned long flags;
 
-	if (elem == NULL) {
-		pr_warn("rxe_drop_key: called with null pointer\n");
+	if (elem == NULL)
 		return;
-	}
 
 	write_lock_irqsave(&pool->pool_lock, flags);
 	rb_erase(&elem->key_node, &pool->key.tree);
@@ -394,10 +367,8 @@ void rxe_drop_index(void *arg)
 	struct rxe_pool *pool = elem->pool;
 	unsigned long flags;
 
-	if (elem == NULL) {
-		pr_warn("rxe_drop_index: called with null pointer\n");
+	if (elem == NULL)
 		return;
-	}
 
 	write_lock_irqsave(&pool->pool_lock, flags);
 	clear_bit(elem->index - pool->index.min_index, pool->index.table);
diff --git a/drivers/infiniband/sw/rxe/rxe_pool.h b/drivers/infiniband/sw/rxe/rxe_pool.h
index 0ba811456f79..43c38f67fa26 100644
--- a/drivers/infiniband/sw/rxe/rxe_pool.h
+++ b/drivers/infiniband/sw/rxe/rxe_pool.h
@@ -1,34 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
+ * drivers/infiniband/sw/rxe/rxe_pool.h
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *	   Redistribution and use in source and binary forms, with or
- *	   without modification, are permitted provided that the following
- *	   conditions are met:
- *
- *		- Redistributions of source code must retain the above
- *		  copyright notice, this list of conditions and the following
- *		  disclaimer.
- *
- *		- Redistributions in binary form must reproduce the above
- *		  copyright notice, this list of conditions and the following
- *		  disclaimer in the documentation and/or other materials
- *		  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #ifndef RXE_POOL_H
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 6c11c3aeeca6..0b09ab0b1543 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_qp.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *	   Redistribution and use in source and binary forms, with or
- *	   without modification, are permitted provided that the following
- *	   conditions are met:
- *
- *		- Redistributions of source code must retain the above
- *		  copyright notice, this list of conditions and the following
- *		  disclaimer.
- *
- *		- Redistributions in binary form must reproduce the above
- *		  copyright notice, this list of conditions and the following
- *		  disclaimer in the documentation and/or other materials
- *		  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/skbuff.h>
@@ -217,7 +192,8 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
 }
 
 static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
-			   struct ib_qp_init_attr *init, struct ib_udata *udata,
+			   struct ib_qp_init_attr *init,
+			   struct ib_udata *udata,
 			   struct rxe_create_qp_resp __user *uresp)
 {
 	int err;
@@ -331,7 +307,8 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
 }
 
 /* called by the create qp verb */
-int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
+int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp,
+		     struct rxe_pd *pd,
 		     struct ib_qp_init_attr *init,
 		     struct rxe_create_qp_resp __user *uresp,
 		     struct ib_pd *ibpd,
@@ -413,7 +390,7 @@ int rxe_qp_chk_attr(struct rxe_dev *rxe, struct rxe_qp *qp,
 		    struct ib_qp_attr *attr, int mask)
 {
 	enum ib_qp_state cur_state = (mask & IB_QP_CUR_STATE) ?
-					attr->cur_qp_state : qp->attr.qp_state;
+				attr->cur_qp_state : qp->attr.qp_state;
 	enum ib_qp_state new_state = (mask & IB_QP_STATE) ?
 					attr->qp_state : cur_state;
 
@@ -628,9 +605,8 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask,
 	if (mask & IB_QP_QKEY)
 		qp->attr.qkey = attr->qkey;
 
-	if (mask & IB_QP_AV) {
+	if (mask & IB_QP_AV)
 		rxe_init_av(&attr->ah_attr, &qp->pri_av);
-	}
 
 	if (mask & IB_QP_ALT_PATH) {
 		rxe_init_av(&attr->alt_ah_attr, &qp->alt_av);
@@ -649,7 +625,10 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask,
 		if (attr->timeout == 0) {
 			qp->qp_timeout_jiffies = 0;
 		} else {
-			/* According to the spec, timeout = 4.096 * 2 ^ attr->timeout [us] */
+			/*
+			 * According to the spec,
+			 * timeout = 4.096 * 2 ^ attr->timeout [us]
+			 */
 			int j = nsecs_to_jiffies(4096ULL << attr->timeout);
 
 			qp->qp_timeout_jiffies = j ? j : 1;
@@ -687,7 +666,8 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask,
 		qp->attr.sq_psn = (attr->sq_psn & BTH_PSN_MASK);
 		qp->req.psn = qp->attr.sq_psn;
 		qp->comp.psn = qp->attr.sq_psn;
-		pr_debug("qp#%d set req psn = 0x%x\n", qp_num(qp), qp->req.psn);
+		pr_debug("qp#%d set req psn = 0x%x\n",
+				qp_num(qp), qp->req.psn);
 	}
 
 	if (mask & IB_QP_PATH_MIG_STATE)
@@ -803,7 +783,8 @@ void rxe_qp_destroy(struct rxe_qp *qp)
 /* called when the last reference to the qp is dropped */
 static void rxe_qp_do_cleanup(struct work_struct *work)
 {
-	struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
+	struct rxe_qp *qp = container_of(work, typeof(*qp),
+					cleanup_work.work);
 
 	rxe_drop_all_mcast_groups(qp);
 
diff --git a/drivers/infiniband/sw/rxe/rxe_queue.c b/drivers/infiniband/sw/rxe/rxe_queue.c
index 245040c3a35d..f761943e7467 100644
--- a/drivers/infiniband/sw/rxe/rxe_queue.c
+++ b/drivers/infiniband/sw/rxe/rxe_queue.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_queue.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must retailuce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/vmalloc.h>
diff --git a/drivers/infiniband/sw/rxe/rxe_queue.h b/drivers/infiniband/sw/rxe/rxe_queue.h
index 8ef17d617022..98fb2f50621a 100644
--- a/drivers/infiniband/sw/rxe/rxe_queue.h
+++ b/drivers/infiniband/sw/rxe/rxe_queue.h
@@ -1,34 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
+ * drivers/infiniband/sw/rxe/rxe_queue.h
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #ifndef RXE_QUEUE_H
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 7e123d3c4d09..9eb38008f603 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_recv.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/skbuff.h>
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index f0fa195fcc70..61f41cfdfefd 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -1,34 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_req.c
+ *
+ * Copyright (c) 2020 Hewlett Packard Enterprise, Inc. All rights reserved.
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/skbuff.h>
@@ -135,7 +111,8 @@ static struct rxe_send_wqe *req_next_wqe(struct rxe_qp *qp)
 	unsigned long flags;
 
 	if (unlikely(qp->req.state == QP_STATE_DRAIN)) {
-		/* check to see if we are drained;
+		/*
+		 * check to see if we are drained;
 		 * state_lock used by requester and completer
 		 */
 		spin_lock_irqsave(&qp->state_lock, flags);
@@ -345,7 +322,8 @@ static int next_opcode(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
 	return -EINVAL;
 }
 
-static inline int check_init_depth(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+static inline int check_init_depth(struct rxe_qp *qp,
+				struct rxe_send_wqe *wqe)
 {
 	int depth;
 
@@ -394,9 +372,7 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
 	/* length from start of bth to end of icrc */
 	paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
 
-	/* pkt->hdr, rxe, port_num and mask are initialized in ifc
-	 * layer
-	 */
+	/* pkt->hdr, rxe, port_num and mask are initialized in ifc layer */
 	pkt->opcode	= opcode;
 	pkt->qp		= qp;
 	pkt->psn	= qp->req.psn;
@@ -551,9 +527,9 @@ static void save_state(struct rxe_send_wqe *wqe,
 		       struct rxe_send_wqe *rollback_wqe,
 		       u32 *rollback_psn)
 {
-	rollback_wqe->state     = wqe->state;
+	rollback_wqe->state	= wqe->state;
 	rollback_wqe->first_psn = wqe->first_psn;
-	rollback_wqe->last_psn  = wqe->last_psn;
+	rollback_wqe->last_psn	= wqe->last_psn;
 	*rollback_psn		= qp->req.psn;
 }
 
@@ -591,18 +567,25 @@ static int local_invalidate(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
 	u32 key = wqe->wr.ex.invalidate_rkey;
 
-	if (!(key & IS_MW) && (mr = rxe_pool_get_key(&rxe->mr_pool, &key))) {
-		ret = rxe_invalidate_mr(qp, mr);
-		rxe_drop_ref(mr);
-	} else if ((mw = rxe_pool_get_key(&rxe->mw_pool, &key))) {
+	if (key & IS_MW) {
+		mw = rxe_pool_get_key(&rxe->mw_pool, &key);
+		if (!mw)
+			goto err;
 		ret = rxe_invalidate_mw(qp, mw);
 		rxe_drop_ref(mw);
-	} else {
-		ret = -EINVAL;
-		pr_err("No mr/mw for rkey %#x\n", key);
+		return ret;
 	}
 
+	mr = rxe_pool_get_key(&rxe->mr_pool, &key);
+	if (!mr)
+		goto err;
+	ret = rxe_invalidate_mr(qp, mr);
+	rxe_drop_ref(mr);
 	return ret;
+
+err:
+	pr_err("No mr/mw for rkey 0x%x\n", key);
+	return -EINVAL;
 }
 
 int rxe_requester(void *arg)
@@ -619,17 +602,9 @@ int rxe_requester(void *arg)
 	int ret;
 	struct rxe_send_wqe rollback_wqe;
 	u32 rollback_psn;
-	int entered;
 
 	rxe_add_ref(qp);
 
-	// this code is 'guaranteed' to never be entered more
-	// than once. Check to make sure that this is the case
-	entered = atomic_inc_return(&qp->req.task.entered);
-	if (entered > 1) {
-		pr_err("rxe_requester: entered %d times\n", entered);
-	}
-
 next_wqe:
 	if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR))
 		goto exit;
@@ -652,19 +627,21 @@ int rxe_requester(void *arg)
 	if (unlikely(!wqe))
 		goto exit;
 
-	/* process local operations */
-	/* current behavior if an error occurs
+	/*
+	 * process local operations
+	 * current behavior if an error occurs
 	 * for any of these local operations
 	 * is to generate an error work completion
 	 * then error the QP and flush any
-	 * remaining WRs */
+	 * remaining WRs
+	 */
 	if (wqe->mask & WR_LOCAL_MASK) {
 		wqe->state = wqe_state_done;
 		wqe->status = IB_WC_SUCCESS;
 
 		switch (wqe->wr.opcode) {
 		case IB_WR_LOCAL_INV:
-			if ((ret = local_invalidate(qp, wqe)))
+			if (local_invalidate(qp, wqe))
 				wqe->status = IB_WC_LOC_QP_OP_ERR;
 			break;
 		case IB_WR_REG_MR:
@@ -676,22 +653,26 @@ int rxe_requester(void *arg)
 			mr->iova = wqe->wr.wr.reg.mr->iova;
 			break;
 		case IB_WR_BIND_MW:
-			if ((ret = rxe_bind_mw(qp, wqe)))
+			if (rxe_bind_mw(qp, wqe))
 				wqe->status = IB_WC_MW_BIND_ERR;
 			break;
 		default:
-			pr_err("rxe_requester: unexpected local"
-				" WR opcode = %d\n", wqe->wr.opcode);
-			/* these should be memory operation errors
-			 * but there isn't one available */
+			pr_err("unexpected local WR opcode = %d\n",
+				wqe->wr.opcode);
+			/*
+			 * these should be memory operation errors
+			 * but there isn't one available
+			 */
 			wqe->status = IB_WC_LOC_QP_OP_ERR;
 		}
 
 		/* we're done processing the wqe so move index */
 		qp->req.wqe_index = next_index(qp->sq.queue, qp->req.wqe_index);
 
-		/* if an error occurred do a completion pass now
-		 * (below) and then quit processing more wqes */
+		/*
+		 * if an error occurred do a completion pass now
+		 * (below) and then quit processing more wqes
+		 */
 		if (wqe->status != IB_WC_SUCCESS)
 			goto err;
 
@@ -743,12 +724,6 @@ int rxe_requester(void *arg)
 			wqe->state = wqe_state_done;
 			wqe->status = IB_WC_SUCCESS;
 
-			/* TODO why?? why not just treat the same as a
-			 * successful wqe and go to next wqe?
-			 * __rxe_do_task probably shouldn't be used
-			 * it reenters the completion task which may
-			 * already be running
-			 */
 			__rxe_do_task(&qp->comp.task);
 			goto again;
 		}
@@ -798,20 +773,24 @@ int rxe_requester(void *arg)
 	goto next_wqe;
 
 err:
-	/* we come here if an error occured while processing
+	/*
+	 * we come here if an error occurred while processing
 	 * a send wqe. The completer will put the qp in error
 	 * state and no more wqes will be processed unless
-	 * the qp is cleaned up and restarted. */
+	 * the qp is cleaned up and restarted.
+	 */
 	wqe->state = wqe_state_error;
 	__rxe_do_task(&qp->comp.task);
 	ret = -EAGAIN;
 	goto done;
 
 exit:
-	/* we come here if either there are no more wqes in the send
+	/*
+	 * we come here if either there are no more wqes in the send
 	 * queue or we are blocked waiting for some resource or event.
 	 * The current wqe will be restarted or new wqe started when
-	 * there is something to do. */
+	 * there is something to do.
+	 */
 	ret = -EAGAIN;
 	goto done;
 
@@ -821,7 +800,6 @@ int rxe_requester(void *arg)
 	goto done;
 
 done:
-	atomic_dec(&qp->req.task.entered);
 	rxe_drop_ref(qp);
 	return ret;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 0bfea50505d1..0696ca85161e 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -1,34 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_resp.c
+ *
+ * Copyright (c) 2020 Hewlett Packard Enterprise, Inc. All rights reserved.
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/skbuff.h>
@@ -71,36 +47,36 @@ enum resp_states {
 };
 
 static char *resp_state_name[] = {
-	[RESPST_NONE]				= "NONE",
-	[RESPST_GET_REQ]			= "GET_REQ",
-	[RESPST_CHK_PSN]			= "CHK_PSN",
-	[RESPST_CHK_OP_SEQ]			= "CHK_OP_SEQ",
-	[RESPST_CHK_OP_VALID]			= "CHK_OP_VALID",
-	[RESPST_CHK_RESOURCE]			= "CHK_RESOURCE",
-	[RESPST_CHK_LENGTH]			= "CHK_LENGTH",
-	[RESPST_CHK_RKEY]			= "CHK_RKEY",
-	[RESPST_EXECUTE]			= "EXECUTE",
-	[RESPST_READ_REPLY]			= "READ_REPLY",
-	[RESPST_COMPLETE]			= "COMPLETE",
-	[RESPST_ACKNOWLEDGE]			= "ACKNOWLEDGE",
-	[RESPST_CLEANUP]			= "CLEANUP",
-	[RESPST_DUPLICATE_REQUEST]		= "DUPLICATE_REQUEST",
-	[RESPST_ERR_MALFORMED_WQE]		= "ERR_MALFORMED_WQE",
-	[RESPST_ERR_UNSUPPORTED_OPCODE]		= "ERR_UNSUPPORTED_OPCODE",
-	[RESPST_ERR_MISALIGNED_ATOMIC]		= "ERR_MISALIGNED_ATOMIC",
-	[RESPST_ERR_PSN_OUT_OF_SEQ]		= "ERR_PSN_OUT_OF_SEQ",
-	[RESPST_ERR_MISSING_OPCODE_FIRST]	= "ERR_MISSING_OPCODE_FIRST",
-	[RESPST_ERR_MISSING_OPCODE_LAST_C]	= "ERR_MISSING_OPCODE_LAST_C",
-	[RESPST_ERR_MISSING_OPCODE_LAST_D1E]	= "ERR_MISSING_OPCODE_LAST_D1E",
-	[RESPST_ERR_TOO_MANY_RDMA_ATM_REQ]	= "ERR_TOO_MANY_RDMA_ATM_REQ",
-	[RESPST_ERR_RNR]			= "ERR_RNR",
-	[RESPST_ERR_RKEY_VIOLATION]		= "ERR_RKEY_VIOLATION",
-	[RESPST_ERR_LENGTH]			= "ERR_LENGTH",
-	[RESPST_ERR_CQ_OVERFLOW]		= "ERR_CQ_OVERFLOW",
-	[RESPST_ERROR]				= "ERROR",
-	[RESPST_RESET]				= "RESET",
-	[RESPST_DONE]				= "DONE",
-	[RESPST_EXIT]				= "EXIT",
+	[RESPST_NONE]			      = "NONE",
+	[RESPST_GET_REQ]		      = "GET_REQ",
+	[RESPST_CHK_PSN]		      = "CHK_PSN",
+	[RESPST_CHK_OP_SEQ]		      = "CHK_OP_SEQ",
+	[RESPST_CHK_OP_VALID]		      = "CHK_OP_VALID",
+	[RESPST_CHK_RESOURCE]		      = "CHK_RESOURCE",
+	[RESPST_CHK_LENGTH]		      = "CHK_LENGTH",
+	[RESPST_CHK_RKEY]		      = "CHK_RKEY",
+	[RESPST_EXECUTE]		      = "EXECUTE",
+	[RESPST_READ_REPLY]		      = "READ_REPLY",
+	[RESPST_COMPLETE]		      = "COMPLETE",
+	[RESPST_ACKNOWLEDGE]		      = "ACKNOWLEDGE",
+	[RESPST_CLEANUP]		      = "CLEANUP",
+	[RESPST_DUPLICATE_REQUEST]	      = "DUPLICATE_REQUEST",
+	[RESPST_ERR_MALFORMED_WQE]	      = "ERR_MALFORMED_WQE",
+	[RESPST_ERR_UNSUPPORTED_OPCODE]	      = "ERR_UNSUPPORTED_OPCODE",
+	[RESPST_ERR_MISALIGNED_ATOMIC]	      = "ERR_MISALIGNED_ATOMIC",
+	[RESPST_ERR_PSN_OUT_OF_SEQ]	      = "ERR_PSN_OUT_OF_SEQ",
+	[RESPST_ERR_MISSING_OPCODE_FIRST]     = "ERR_MISSING_OPCODE_FIRST",
+	[RESPST_ERR_MISSING_OPCODE_LAST_C]    = "ERR_MISSING_OPCODE_LAST_C",
+	[RESPST_ERR_MISSING_OPCODE_LAST_D1E]  = "ERR_MISSING_OPCODE_LAST_D1E",
+	[RESPST_ERR_TOO_MANY_RDMA_ATM_REQ]    = "ERR_TOO_MANY_RDMA_ATM_REQ",
+	[RESPST_ERR_RNR]		      = "ERR_RNR",
+	[RESPST_ERR_RKEY_VIOLATION]	      = "ERR_RKEY_VIOLATION",
+	[RESPST_ERR_LENGTH]		      = "ERR_LENGTH",
+	[RESPST_ERR_CQ_OVERFLOW]	      = "ERR_CQ_OVERFLOW",
+	[RESPST_ERROR]			      = "ERROR",
+	[RESPST_RESET]			      = "RESET",
+	[RESPST_DONE]			      = "DONE",
+	[RESPST_EXIT]			      = "EXIT",
 };
 
 /* rxe_recv calls here to add a request packet to the input queue */
@@ -462,7 +438,13 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 	resid	= qp->resp.resid;
 	pktlen	= payload_size(pkt);
 
-	if ((rkey & IS_MW) && (mw = rxe_pool_get_key(&rxe->mw_pool, &rkey))) {
+	if (rkey & IS_MW) {
+		mw = rxe_pool_get_key(&rxe->mw_pool, &rkey);
+		if (!mw) {
+			state = RESPST_ERR_RKEY_VIOLATION;
+			goto err;
+		}
+
 		spin_lock_irqsave(&mw->lock, flags);
 		if (rxe_mw_check_access(qp, mw, access, va, resid)) {
 			spin_unlock_irqrestore(&mw->lock, flags);
@@ -479,16 +461,19 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
 
 		spin_unlock_irqrestore(&mw->lock, flags);
 		rxe_drop_ref(mw);
-	} else if ((mr = rxe_pool_get_key(&rxe->mr_pool, &rkey)) &&
-		   (mr->rkey == rkey)) {
+	} else {
+		mr = rxe_pool_get_key(&rxe->mr_pool, &rkey);
+		if (!mr || mr->rkey != rkey) {
+			if (mr)
+				rxe_drop_ref(mr);
+			state = RESPST_ERR_RKEY_VIOLATION;
+			goto err;
+		}
+
 		if (rxe_mr_check_access(qp, mr, access, va, resid)) {
 			state = RESPST_ERR_RKEY_VIOLATION;
 			goto err;
 		}
-	} else {
-		pr_err("no MR/MW found with rkey = 0x%08x\n", rkey);
-		state = RESPST_ERR_RKEY_VIOLATION;
-		goto err;
 	}
 
 	if (pkt->mask & RXE_WRITE_MASK)	 {
@@ -853,15 +838,22 @@ static int send_invalidate(struct rxe_qp *qp, struct rxe_dev *rxe, u32 rkey)
 	struct rxe_mr *mr;
 	struct rxe_mw *mw;
 
-	if ((mr = rxe_pool_get_key(&rxe->mr_pool, &rkey))) {
-		ret = rxe_invalidate_mr(qp, mr);
-		rxe_drop_ref(mr);
-	} else if ((mw = rxe_pool_get_key(&rxe->mw_pool, &rkey))) {
+	if (rkey & IS_MW) {
+		mw = rxe_pool_get_key(&rxe->mw_pool, &rkey);
+		if (!mw) {
+			pr_err("no MW found for rkey = 0x%x\n", rkey);
+			return -EINVAL;
+		}
 		ret = rxe_invalidate_mw(qp, mw);
 		rxe_drop_ref(mw);
 	} else {
-		pr_err("send invalidate failed for rkey = 0x%x\n", rkey);
-		ret = -EINVAL;
+		mr = rxe_pool_get_key(&rxe->mr_pool, &rkey);
+		if (!mr) {
+			pr_err("no MR found for rkey = 0x%x\n", rkey);
+			return -EINVAL;
+		}
+		ret = rxe_invalidate_mr(qp, mr);
+		rxe_drop_ref(mr);
 	}
 
 	return ret;
@@ -884,13 +876,13 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 	memset(&cqe, 0, sizeof(cqe));
 
 	if (qp->rcq->is_user) {
-		uwc->status             = qp->resp.status;
-		uwc->qp_num             = qp->ibqp.qp_num;
-		uwc->wr_id              = wqe->wr_id;
+		uwc->status		= qp->resp.status;
+		uwc->qp_num		= qp->ibqp.qp_num;
+		uwc->wr_id		= wqe->wr_id;
 	} else {
-		wc->status              = qp->resp.status;
-		wc->qp                  = &qp->ibqp;
-		wc->wr_id               = wqe->wr_id;
+		wc->status		= qp->resp.status;
+		wc->qp			= &qp->ibqp;
+		wc->wr_id		= wqe->wr_id;
 	}
 
 	if (pkt->mask & RXE_IETH_MASK) {
@@ -909,7 +901,8 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 		wc->vendor_err = 0;
 		wc->byte_len = (pkt->mask & RXE_IMMDT_MASK &&
 				pkt->mask & RXE_WRITE_MASK) ?
-					qp->resp.length : wqe->dma.length - wqe->dma.resid;
+					qp->resp.length :
+					wqe->dma.length - wqe->dma.resid;
 
 		/* fields after byte_len are different between kernel and user
 		 * space
@@ -936,7 +929,8 @@ static enum resp_states do_complete(struct rxe_qp *qp,
 		} else {
 			struct sk_buff *skb = PKT_TO_SKB(pkt);
 
-			wc->wc_flags = IB_WC_GRH | IB_WC_WITH_NETWORK_HDR_TYPE;
+			wc->wc_flags = IB_WC_GRH |
+					IB_WC_WITH_NETWORK_HDR_TYPE;
 			if (skb->protocol == htons(ETH_P_IP))
 				wc->network_hdr_type = RDMA_NETWORK_IPV4;
 			else
@@ -1175,7 +1169,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
 			/* Resend the result. */
 			rc = rxe_xmit_packet(qp, pkt, res->atomic.skb);
 			if (rc) {
-				pr_err("Failed resending result. This flow is not handled - skb ignored\n");
+				pr_err("Failed resending result\n");
 				rc = RESPST_CLEANUP;
 				goto out;
 			}
@@ -1189,7 +1183,9 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
 	return rc;
 }
 
-/* Process a class A or C. Both are treated the same in this implementation. */
+/* Process a class A or C. Both are treated the same
+ * in this implementation.
+ */
 static void do_class_ac_error(struct rxe_qp *qp, u8 syndrome,
 			      enum ib_wc_status status)
 {
@@ -1257,17 +1253,9 @@ int rxe_responder(void *arg)
 	enum resp_states state;
 	struct rxe_pkt_info *pkt = NULL;
 	int ret = 0;
-	int entered;
 
 	rxe_add_ref(qp);
 
-	// this code is 'guaranteed' to never be entered more
-	// than once. Check to make sure that this is the case
-	entered = atomic_inc_return(&qp->resp.task.entered);
-	if (entered > 1) {
-		pr_err("rxe_responder: entered %d times\n", entered);
-	}
-
 	qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED;
 
 	if (!qp->valid) {
@@ -1330,7 +1318,8 @@ int rxe_responder(void *arg)
 			break;
 		case RESPST_ERR_PSN_OUT_OF_SEQ:
 			/* RC only - Class B. Drop packet. */
-			send_ack(qp, pkt, AETH_NAK_PSN_SEQ_ERROR, qp->resp.psn);
+			send_ack(qp, pkt, AETH_NAK_PSN_SEQ_ERROR,
+				 qp->resp.psn);
 			state = RESPST_CLEANUP;
 			break;
 
@@ -1446,7 +1435,6 @@ int rxe_responder(void *arg)
 exit:
 	ret = -EAGAIN;
 done:
-	atomic_dec(&qp->resp.task.entered);
 	rxe_drop_ref(qp);
 	return ret;
 }
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index d8459431534e..81394bab2c0f 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_srq.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/vmalloc.h>
diff --git a/drivers/infiniband/sw/rxe/rxe_sysfs.c b/drivers/infiniband/sw/rxe/rxe_sysfs.c
index ccda5f5a3bc0..39aa0c04dde8 100644
--- a/drivers/infiniband/sw/rxe/rxe_sysfs.c
+++ b/drivers/infiniband/sw/rxe/rxe_sysfs.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_sysfs.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include "rxe.h"
@@ -92,7 +67,8 @@ static int rxe_param_set_add(const char *val, const struct kernel_param *kp)
 	return err;
 }
 
-static int rxe_param_set_remove(const char *val, const struct kernel_param *kp)
+static int rxe_param_set_remove(const char *val,
+				const struct kernel_param *kp)
 {
 	int len;
 	char intf[32];
diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c
index 08f05ac5f5d5..44c3b908b9f4 100644
--- a/drivers/infiniband/sw/rxe/rxe_task.c
+++ b/drivers/infiniband/sw/rxe/rxe_task.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_task.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *	   Redistribution and use in source and binary forms, with or
- *	   without modification, are permitted provided that the following
- *	   conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/kernel.h>
diff --git a/drivers/infiniband/sw/rxe/rxe_task.h b/drivers/infiniband/sw/rxe/rxe_task.h
index e33806c6f5a4..836b21dcf2ae 100644
--- a/drivers/infiniband/sw/rxe/rxe_task.h
+++ b/drivers/infiniband/sw/rxe/rxe_task.h
@@ -1,34 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
+ * drivers/infiniband/sw/rxe/rxe_task.h
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *	   Redistribution and use in source and binary forms, with or
- *	   without modification, are permitted provided that the following
- *	   conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #ifndef RXE_TASK_H
@@ -55,8 +30,6 @@ struct rxe_task {
 	int			ret;
 	char			name[16];
 	bool			destroyed;
-	// debug code, delete me when done
-	atomic_t		entered;
 };
 
 /*
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index caaacfabadbc..7ddf97fac67d 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -1,34 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
 /*
+ * drivers/infiniband/sw/rxe/rxe_verbs.c
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *     Redistribution and use in source and binary forms, with or
- *     without modification, are permitted provided that the following
- *     conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #include <linux/dma-mapping.h>
@@ -133,7 +108,8 @@ static enum rdma_link_layer rxe_get_link_layer(struct ib_device *dev,
 	return IB_LINK_LAYER_ETHERNET;
 }
 
-static int rxe_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata)
+static int rxe_alloc_ucontext(struct ib_ucontext *uctx,
+				struct ib_udata *udata)
 {
 	struct rxe_dev *rxe = to_rdev(uctx->device);
 	struct rxe_ucontext *uc = to_ruc(uctx);
@@ -376,7 +352,8 @@ static void rxe_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata)
 	rxe_drop_ref(srq);
 }
 
-static int rxe_post_srq_recv(struct ib_srq *ibsrq, const struct ib_recv_wr *wr,
+static int rxe_post_srq_recv(struct ib_srq *ibsrq,
+			     const struct ib_recv_wr *wr,
 			     const struct ib_recv_wr **bad_wr)
 {
 	int err = 0;
@@ -605,8 +582,9 @@ static int init_send_wqe(struct rxe_qp *qp, const struct ib_send_wr *ibwr,
 		wqe->mask = mask;
 		wqe->state = wqe_state_posted;
 		return 0;
-	} else
-		memcpy(wqe->dma.sge, ibwr->sg_list,
+	}
+
+	memcpy(wqe->dma.sge, ibwr->sg_list,
 		       num_sge * sizeof(struct ib_sge));
 
 	wqe->iova = mask & WR_ATOMIC_MASK ? atomic_wr(ibwr)->remote_addr :
@@ -664,7 +642,8 @@ static int post_one_send(struct rxe_qp *qp, const struct ib_send_wr *ibwr,
 	return err;
 }
 
-static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr,
+static int rxe_post_send_kernel(struct rxe_qp *qp,
+				const struct ib_send_wr *wr,
 				const struct ib_send_wr **bad_wr)
 {
 	int err = 0;
@@ -773,7 +752,8 @@ static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
 	return err;
 }
 
-static int rxe_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+static int rxe_create_cq(struct ib_cq *ibcq,
+			 const struct ib_cq_init_attr *attr,
 			 struct ib_udata *udata)
 {
 	int err;
@@ -868,7 +848,8 @@ static int rxe_peek_cq(struct ib_cq *ibcq, int wc_cnt)
 	return (count > wc_cnt) ? wc_cnt : count;
 }
 
-static int rxe_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
+static int rxe_req_notify_cq(struct ib_cq *ibcq,
+			     enum ib_cq_notify_flags flags)
 {
 	struct rxe_cq *cq = to_rcq(ibcq);
 	unsigned long irq_flags;
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index c990654e396d..b738f1603d13 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -1,34 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
 /*
+ * drivers/infiniband/sw/rxe/rxe_verbs.h
+ *
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses.  You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * COPYING in the main directory of this source tree, or the
- * OpenIB.org BSD license below:
- *
- *	   Redistribution and use in source and binary forms, with or
- *	   without modification, are permitted provided that the following
- *	   conditions are met:
- *
- *	- Redistributions of source code must retain the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer.
- *
- *	- Redistributions in binary form must reproduce the above
- *	  copyright notice, this list of conditions and the following
- *	  disclaimer in the documentation and/or other materials
- *	  provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 
 #ifndef RXE_VERBS_H
@@ -436,7 +411,8 @@ struct rxe_dev {
 	struct crypto_shash	*tfm;
 };
 
-static inline void rxe_counter_inc(struct rxe_dev *rxe, enum rxe_counters index)
+static inline void rxe_counter_inc(struct rxe_dev *rxe,
+				   enum rxe_counters index)
 {
 	atomic64_inc(&rxe->stats_counters[index]);
 }
-- 
2.25.1


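The rxe_qp.c hunk above only reflows the comment describing the QP timeout,
timeout = 4.096 us * 2 ^ attr->timeout, which the code implements as
nsecs_to_jiffies(4096ULL << attr->timeout) since 4.096 us is 4096 ns. A
minimal standalone check of that arithmetic (ordinary user-space C, not part
of the patch; the value 14 below is only an illustrative timeout):

	#include <stdio.h>

	int main(void)
	{
		unsigned int attr_timeout = 14;	/* illustrative value only */
		unsigned long long ns = 4096ULL << attr_timeout;

		/* 4096 ns * 2^14 = 67108864 ns, i.e. about 67.1 ms */
		printf("timeout = %llu ns (%.1f ms)\n", ns, ns / 1e6);
		return 0;
	}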
^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH 20/20] fixed checkpatch issues for all files in rxe
  2020-08-15  4:58 ` [PATCH 20/20] fixed checkpatch issues for all files in rxe Bob Pearson
@ 2020-08-16  5:29     ` kernel test robot
  0 siblings, 0 replies; 23+ messages in thread
From: kernel test robot @ 2020-08-16  5:29 UTC (permalink / raw)
  To: Bob Pearson, linux-rdma; +Cc: kbuild-all, Bob Pearson

[-- Attachment #1: Type: text/plain, Size: 6413 bytes --]

Hi Bob,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on rdma/for-next]
[also build test ERROR on next-20200814]
[cannot apply to v5.8]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Bob-Pearson/Added-ib_uverbs_wc_opcode-to-ib_user_verbs-h/20200816-090418
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git for-next
config: xtensa-allyesconfig (attached as .config)
compiler: xtensa-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=xtensa 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from include/linux/kernel.h:11,
                    from include/linux/list.h:9,
                    from include/linux/module.h:12,
                    from drivers/infiniband/sw/rxe/rxe.h:17,
                    from drivers/infiniband/sw/rxe/rxe_mw.c:10:
   include/linux/scatterlist.h: In function 'sg_set_buf':
   arch/xtensa/include/asm/page.h:193:9: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
     193 |  ((pfn) >= ARCH_PFN_OFFSET && ((pfn) - ARCH_PFN_OFFSET) < max_mapnr)
         |         ^~
   include/linux/compiler.h:78:42: note: in definition of macro 'unlikely'
      78 | # define unlikely(x) __builtin_expect(!!(x), 0)
         |                                          ^
   include/linux/scatterlist.h:143:2: note: in expansion of macro 'BUG_ON'
     143 |  BUG_ON(!virt_addr_valid(buf));
         |  ^~~~~~
   arch/xtensa/include/asm/page.h:201:32: note: in expansion of macro 'pfn_valid'
     201 | #define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
         |                                ^~~~~~~~~
   include/linux/scatterlist.h:143:10: note: in expansion of macro 'virt_addr_valid'
     143 |  BUG_ON(!virt_addr_valid(buf));
         |          ^~~~~~~~~~~~~~~
   In file included from ./arch/xtensa/include/generated/asm/bug.h:1,
                    from include/linux/bug.h:5,
                    from include/linux/thread_info.h:12,
                    from include/asm-generic/preempt.h:5,
                    from ./arch/xtensa/include/generated/asm/preempt.h:1,
                    from include/linux/preempt.h:78,
                    from include/linux/spinlock.h:51,
                    from include/linux/seqlock.h:36,
                    from include/linux/time.h:6,
                    from include/linux/stat.h:19,
                    from include/linux/module.h:13,
                    from drivers/infiniband/sw/rxe/rxe.h:17,
                    from drivers/infiniband/sw/rxe/rxe_mw.c:10:
   include/linux/dma-mapping.h: In function 'dma_map_resource':
   arch/xtensa/include/asm/page.h:193:9: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
     193 |  ((pfn) >= ARCH_PFN_OFFSET && ((pfn) - ARCH_PFN_OFFSET) < max_mapnr)
         |         ^~
   include/asm-generic/bug.h:144:27: note: in definition of macro 'WARN_ON_ONCE'
     144 |  int __ret_warn_once = !!(condition);   \
         |                           ^~~~~~~~~
   include/linux/dma-mapping.h:352:19: note: in expansion of macro 'pfn_valid'
     352 |  if (WARN_ON_ONCE(pfn_valid(PHYS_PFN(phys_addr))))
         |                   ^~~~~~~~~
   drivers/infiniband/sw/rxe/rxe_mw.c: In function 'do_bind_mw':
>> drivers/infiniband/sw/rxe/rxe_mw.c:227:17: error: implicit declaration of function 'rxe_get_key'; did you mean 'rxe_get_av'? [-Werror=implicit-function-declaration]
     227 |  duplicate_mw = rxe_get_key(rxe, &new_key);
         |                 ^~~~~~~~~~~
         |                 rxe_get_av
>> drivers/infiniband/sw/rxe/rxe_mw.c:227:35: error: 'new_key' undeclared (first use in this function); did you mean 'new_rkey'?
     227 |  duplicate_mw = rxe_get_key(rxe, &new_key);
         |                                   ^~~~~~~
         |                                   new_rkey
   drivers/infiniband/sw/rxe/rxe_mw.c:227:35: note: each undeclared identifier is reported only once for each function it appears in
   drivers/infiniband/sw/rxe/rxe_mw.c:211:6: warning: unused variable 'ret' [-Wunused-variable]
     211 |  int ret;
         |      ^~~
   cc1: some warnings being treated as errors

vim +227 drivers/infiniband/sw/rxe/rxe_mw.c

   207	
   208	static int do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
   209				struct rxe_mw *mw, struct rxe_mr *mr)
   210	{
   211		int ret;
   212		u32 rkey;
   213		u32 new_rkey;
   214		struct rxe_mw *duplicate_mw;
   215		struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
   216	
   217		/*
   218		 * key part of new rkey is provided by user for type 2
   219		 * and ibv_bind_mw() for type 1 MWs
   220		 * there is a very rare chance that the new rkey will
   221		 * collide with an existing MW. Return an error if this
   222		 * occurs
   223		 */
   224		rkey = mw->ibmw.rkey;
   225		new_rkey = (rkey & 0xffffff00) | (wqe->wr.wr.umw.rkey & 0x000000ff);
   226	
 > 227		duplicate_mw = rxe_get_key(rxe, &new_key);
   228		if (duplicate_mw) {
   229			pr_err_once("new MW key is a duplicate, try another\n");
   230			rxe_drop_ref(duplicate_mw);
   231			return -EINVAL;
   232		}
   233	
   234		rxe_drop_key(mw);
   235		rxe_add_key(mw, &new_rkey);
   236	
   237		mw->access = wqe->wr.wr.umw.access;
   238		mw->state = RXE_MEM_STATE_VALID;
   239		mw->addr = wqe->wr.wr.umw.addr;
   240		mw->length = wqe->wr.wr.umw.length;
   241	
   242		if (mw->mr) {
   243			rxe_drop_ref(mw->mr);
   244			atomic_dec(&mw->mr->num_mw);
   245			mw->mr = NULL;
   246		}
   247	
   248		if (mw->length) {
   249			mw->mr = mr;
   250			atomic_inc(&mr->num_mw);
   251			rxe_add_ref(mr);
   252		}
   253	
   254		if (mw->ibmw.type == IB_MW_TYPE_2)
   255			mw->qp = qp;
   256	
   257		return 0;
   258	}
   259	

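Both errors point at the same statement: rxe_get_key() is not defined
anywhere in this series, and new_key is never declared, while new_rkey is
computed two lines earlier. Based only on the key-based pool lookups the
series uses elsewhere (rxe_pool_get_key(&rxe->mw_pool, &rkey) in rxe_req.c
and rxe_resp.c), a plausible sketch of the intended duplicate check follows;
this is an assumption about the intent, not a confirmed fix:

	/*
	 * Sketch only: look the candidate rkey up in the MW pool with the
	 * same helper the series uses elsewhere. Assumes "new_key" was
	 * meant to be the already-declared "new_rkey".
	 */
	duplicate_mw = rxe_pool_get_key(&rxe->mw_pool, &new_rkey);
	if (duplicate_mw) {
		pr_err_once("new MW key is a duplicate, try another\n");
		rxe_drop_ref(duplicate_mw);
		return -EINVAL;
	}

Dropping the unused local "ret" at line 211 would also quiet the
-Wunused-variable warning reported above.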
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 64430 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread


end of thread, other threads:[~2020-08-16  6:24 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-15  4:58 Memory windows support for rxe Bob Pearson
2020-08-15  4:58 ` [PATCH 01/20] Added ib_uverbs_wc_opcode to ib_user_verbs.h Bob Pearson
2020-08-15  4:58 ` [PATCH 02/20] Added missing IB_WR_BIND_MW opcode Bob Pearson
2020-08-15  4:58 ` [PATCH 03/20] Added bind_mw parameters to rxe_send_wr Bob Pearson
2020-08-15  4:58 ` [PATCH 04/20] Added stubs for alloc_mw and dealloc_mw verbs Bob Pearson
2020-08-15  4:58 ` [PATCH 05/20] Separated MR and MW objects Bob Pearson
2020-08-15  4:58 ` [PATCH 06/20] Added a basic rxe_mw struct Bob Pearson
2020-08-15  4:58 ` [PATCH 07/20] Implemented functional alloc_mw and dealloc_mw APIs Bob Pearson
2020-08-15  4:58 ` [PATCH 08/20] Added a stubbed bind_mw API Bob Pearson
2020-08-15  4:58 ` [PATCH 09/20] Fixed error logic in rxe_req.c Bob Pearson
2020-08-15  4:58 ` [PATCH 10/20] Extended pools to support both keys and indices Bob Pearson
2020-08-15  4:58 ` [PATCH 11/20] Gave MRs and MWs " Bob Pearson
2020-08-15  4:58 ` [PATCH 12/20] Cleanup after git pull Bob Pearson
2020-08-15  4:58 ` [PATCH 13/20] add debug print statements Bob Pearson
2020-08-15  4:58 ` [PATCH 14/20] Addresses an issue with hardened user copy Bob Pearson
2020-08-15  4:58 ` [PATCH 15/20] Fixed a dumb bug Bob Pearson
2020-08-15  4:58 ` [PATCH 16/20] Implemented stubbed invalidate APIs Bob Pearson
2020-08-15  4:58 ` [PATCH 17/20] Implemented functional " Bob Pearson
2020-08-15  4:58 ` [PATCH 18/20] cleanup Bob Pearson
2020-08-15  4:58 ` [PATCH 19/20] fixed white space issues Bob Pearson
2020-08-15  4:58 ` [PATCH 20/20] fixed checkpatch issues for all files in rxe Bob Pearson
2020-08-16  5:29   ` kernel test robot
