linux-rdma.vger.kernel.org archive mirror
* [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout
@ 2019-12-12  9:37 Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 01/48] RDMA/cm: Provide private data size to CM users Leon Romanovsky
                   ` (49 more replies)
  0 siblings, 50 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Changelog:
v1->v2: https://lore.kernel.org/linux-rdma/20191121181313.129430-1-leon@kernel.org
 * Added forgotten CM_FIELD64_LOC().
v0->v1: https://lore.kernel.org/linux-rdma/20191027070621.11711-1-leon@kernel.org
 * Used Jason's macros as a basis for all get/set operations for the wire protocol.
 * Fixed wrong offsets.
 * Grouped all CM-related patches into one patchset bomb.
----------------------------------------------------------------------
Hi,

This series continues the already-started task of cleaning up the
CM-related code.

Over the years, the IB/core code gained a number of anti-patterns which
led to mistakes. The first and most distracting is the spread of hardware
specification types (e.g. __beXX) into the core logic. The second is the
endless copy/paste needed to access IBTA binary blobs, which made any
IBTA extension a difficult task.

In this series, we add the Enhanced Connection Establishment (ECE) bits
which were recently added to IBTA, and we continue converting the rest of
the CM code to the proposed macros, eliminating __beXX variables from the
core code.

All IBTA CM declarations are placed into a new header file,
include/rdma/ibta_vol1_c12.h. The idea is that every spec chapter will
have a separate header file, so the relation between the declarations
and the spec values is immediately visible.
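
For illustration only, a rough sketch of the intended accessor style
(the names are taken from later patches in this series):

	/* read the 24-bit starting PSN straight from the wire format */
	u32 psn = IBA_GET(CM_REQ_STARTING_PSN, req_msg);

	/* write it back; masking and byte swapping happen inside the macro */
	IBA_SET(CM_REQ_STARTING_PSN, req_msg, psn);

No __beXX types and no open-coded shifts or masks leak into the callers.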

Thanks

BTW,
1. The whole area near private_data looks sketchy to me and needs a
   separate cleanup.
2. I know that this is more than 15 patches, but they are small and
   self-contained.

Leon Romanovsky (48):
  RDMA/cm: Provide private data size to CM users
  RDMA/srpt: Use private_data_len instead of hardcoded value
  RDMA/ucma: Mask QPN to be 24 bits according to IBTA
  RDMA/cm: Add SET/GET implementations to hide IBA wire format
  RDMA/cm: Request For Communication (REQ) message definitions
  RDMA/cm: Message Receipt Acknowledgment (MRA) message definitions
  RDMA/cm: Reject (REJ) message definitions
  RDMA/cm: Reply To Request for communication (REP) definitions
  RDMA/cm: Ready To Use (RTU) definitions
  RDMA/cm: Request For Communication Release (DREQ) definitions
  RDMA/cm: Reply To Request For Communication Release (DREP) definitions
  RDMA/cm: Load Alternate Path (LAP) definitions
  RDMA/cm: Alternate Path Response (APR) message definitions
  RDMA/cm: Service ID Resolution Request (SIDR_REQ) definitions
  RDMA/cm: Service ID Resolution Response (SIDR_REP) definitions
  RDMA/cm: Convert QPN and EECN to be u32 variables
  RDMA/cm: Convert REQ responded resources to the new scheme
  RDMA/cm: Convert REQ initiator depth to the new scheme
  RDMA/cm: Convert REQ remote response timeout
  RDMA/cm: Simplify QP type to wire protocol translation
  RDMA/cm: Convert REQ flow control
  RDMA/cm: Convert starting PSN to be u32 variable
  RDMA/cm: Update REQ local response timeout
  RDMA/cm: Convert REQ retry count to use new scheme
  RDMA/cm: Update REQ path MTU field
  RDMA/cm: Convert REQ RNR retry timeout counter
  RDMA/cm: Convert REQ MAX CM retries
  RDMA/cm: Convert REQ SRQ field
  RDMA/cm: Convert REQ flow label field
  RDMA/cm: Convert REQ packet rate
  RDMA/cm: Convert REQ SL fields
  RDMA/cm: Convert REQ subnet local fields
  RDMA/cm: Convert REQ local ack timeout
  RDMA/cm: Convert MRA MRAed field
  RDMA/cm: Convert MRA service timeout
  RDMA/cm: Update REJ struct to use new scheme
  RDMA/cm: Convert REP target ack delay field
  RDMA/cm: Convert REP failover accepted field
  RDMA/cm: Convert REP flow control field
  RDMA/cm: Convert REP RNR retry count field
  RDMA/cm: Convert REP SRQ field
  RDMA/cm: Delete unused CM LAP functions
  RDMA/cm: Convert LAP flow label field
  RDMA/cm: Convert LAP fields
  RDMA/cm: Delete unused CM APR functions
  RDMA/cm: Convert SIDR_REP to new scheme
  RDMA/cm: Add Enhanced Connection Establishment (ECE) bits
  RDMA/cm: Convert private_data access

 drivers/infiniband/core/cm.c          | 554 ++++++++++--------------
 drivers/infiniband/core/cm_msgs.h     | 600 +-------------------------
 drivers/infiniband/core/cma.c         |  11 +-
 drivers/infiniband/core/ucma.c        |   2 +-
 drivers/infiniband/ulp/srpt/ib_srpt.c |   2 +-
 include/rdma/ib_cm.h                  |  55 +--
 include/rdma/iba.h                    | 138 ++++++
 include/rdma/ibta_vol1_c12.h          | 211 +++++++++
 8 files changed, 599 insertions(+), 974 deletions(-)
 create mode 100644 include/rdma/iba.h
 create mode 100644 include/rdma/ibta_vol1_c12.h

--
2.20.1



* [PATCH rdma-rc v2 01/48] RDMA/cm: Provide private data size to CM users
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 02/48] RDMA/srpt: Use private_data_len instead of hardcoded value Leon Romanovsky
                   ` (48 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Prepare the code for the removal of the IB_CM_*_PRIVATE_DATA_SIZE enum,
so that the private data size is stored adjacent to the actual data.
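
As a hypothetical usage sketch (not part of this patch), a ULP event
handler can then rely on the reported length instead of a hardcoded
IB_CM_*_PRIVATE_DATA_SIZE constant:

	static void ulp_copy_private_data(const struct ib_cm_event *event,
					  u8 *buf)
	{
		/* the length is filled in by the CM core per message type */
		memcpy(buf, event->private_data, event->private_data_len);
	}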

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c | 11 +++++++++++
 include/rdma/ib_cm.h         |  1 +
 2 files changed, 12 insertions(+)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 455b3659d84b..c341a68b6f97 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1681,6 +1681,7 @@ static void cm_format_req_event(struct cm_work *work,
 	param->srq = cm_req_get_srq(req_msg);
 	param->ppath_sgid_attr = cm_id_priv->av.ah_attr.grh.sgid_attr;
 	work->cm_event.private_data = &req_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_REQ_PRIVATE_DATA_SIZE;
 }
 
 static void cm_process_work(struct cm_id_private *cm_id_priv,
@@ -2193,6 +2194,7 @@ static void cm_format_rep_event(struct cm_work *work, enum ib_qp_type qp_type)
 	param->rnr_retry_count = cm_rep_get_rnr_retry_count(rep_msg);
 	param->srq = cm_rep_get_srq(rep_msg);
 	work->cm_event.private_data = &rep_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_REP_PRIVATE_DATA_SIZE;
 }
 
 static void cm_dup_rep_handler(struct cm_work *work)
@@ -2395,6 +2397,7 @@ static int cm_rtu_handler(struct cm_work *work)
 		return -EINVAL;
 
 	work->cm_event.private_data = &rtu_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_RTU_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
 	if (cm_id_priv->id.state != IB_CM_REP_SENT &&
@@ -2597,6 +2600,7 @@ static int cm_dreq_handler(struct cm_work *work)
 	}
 
 	work->cm_event.private_data = &dreq_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_DREQ_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
 	if (cm_id_priv->local_qpn != cm_dreq_get_remote_qpn(dreq_msg))
@@ -2671,6 +2675,7 @@ static int cm_drep_handler(struct cm_work *work)
 		return -EINVAL;
 
 	work->cm_event.private_data = &drep_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_DREP_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
 	if (cm_id_priv->id.state != IB_CM_DREQ_SENT &&
@@ -2770,6 +2775,7 @@ static void cm_format_rej_event(struct cm_work *work)
 	param->ari_length = cm_rej_get_reject_info_len(rej_msg);
 	param->reason = __be16_to_cpu(rej_msg->reason);
 	work->cm_event.private_data = &rej_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_REJ_PRIVATE_DATA_SIZE;
 }
 
 static struct cm_id_private * cm_acquire_rejected_id(struct cm_rej_msg *rej_msg)
@@ -2982,6 +2988,7 @@ static int cm_mra_handler(struct cm_work *work)
 		return -EINVAL;
 
 	work->cm_event.private_data = &mra_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_MRA_PRIVATE_DATA_SIZE;
 	work->cm_event.param.mra_rcvd.service_timeout =
 					cm_mra_get_service_timeout(mra_msg);
 	timeout = cm_convert_to_ms(cm_mra_get_service_timeout(mra_msg)) +
@@ -3214,6 +3221,7 @@ static int cm_lap_handler(struct cm_work *work)
 	param->alternate_path = &work->path[0];
 	cm_format_path_from_lap(cm_id_priv, param->alternate_path, lap_msg);
 	work->cm_event.private_data = &lap_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_LAP_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
 	if (cm_id_priv->id.state != IB_CM_ESTABLISHED)
@@ -3367,6 +3375,7 @@ static int cm_apr_handler(struct cm_work *work)
 	work->cm_event.param.apr_rcvd.apr_info = &apr_msg->info;
 	work->cm_event.param.apr_rcvd.info_len = apr_msg->info_length;
 	work->cm_event.private_data = &apr_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_APR_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
 	if (cm_id_priv->id.state != IB_CM_ESTABLISHED ||
@@ -3515,6 +3524,7 @@ static void cm_format_sidr_req_event(struct cm_work *work,
 	param->port = work->port->port_num;
 	param->sgid_attr = rx_cm_id->av.ah_attr.grh.sgid_attr;
 	work->cm_event.private_data = &sidr_req_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE;
 }
 
 static int cm_sidr_req_handler(struct cm_work *work)
@@ -3664,6 +3674,7 @@ static void cm_format_sidr_rep_event(struct cm_work *work,
 	param->info_len = sidr_rep_msg->info_length;
 	param->sgid_attr = cm_id_priv->av.ah_attr.grh.sgid_attr;
 	work->cm_event.private_data = &sidr_rep_msg->private_data;
+	work->cm_event.private_data_len = IB_CM_SIDR_REP_PRIVATE_DATA_SIZE;
 }
 
 static int cm_sidr_rep_handler(struct cm_work *work)
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index b01a8a8d4de9..b476e0e27ec9 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -254,6 +254,7 @@ struct ib_cm_event {
 	} param;
 
 	void			*private_data;
+	u8			private_data_len;
 };
 
 #define CM_REQ_ATTR_ID		cpu_to_be16(0x0010)
-- 
2.20.1



* [PATCH rdma-rc v2 02/48] RDMA/srpt: Use private_data_len instead of hardcoded value
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 01/48] RDMA/cm: Provide private data size to CM users Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 03/48] RDMA/ucma: Mask QPN to be 24 bits according to IBTA Leon Romanovsky
                   ` (47 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Reuse the recently introduced private_data_len to get the IBTA REJ
message private data size.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/ulp/srpt/ib_srpt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 23c782e3d49a..a0dd17f90861 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -2648,7 +2648,7 @@ static int srpt_cm_handler(struct ib_cm_id *cm_id,
 	case IB_CM_REJ_RECEIVED:
 		srpt_cm_rej_recv(ch, event->param.rej_rcvd.reason,
 				 event->private_data,
-				 IB_CM_REJ_PRIVATE_DATA_SIZE);
+				 event->private_data_len);
 		break;
 	case IB_CM_RTU_RECEIVED:
 	case IB_CM_USER_ESTABLISHED:
-- 
2.20.1



* [PATCH rdma-rc v2 03/48] RDMA/ucma: Mask QPN to be 24 bits according to IBTA
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 01/48] RDMA/cm: Provide private data size to CM users Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 02/48] RDMA/srpt: Use private_data_len instead of hardcoded value Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 04/48] RDMA/cm: Add SET/GET implementations to hide IBA wire format Leon Romanovsky
                   ` (46 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

IBTA declares the QPN as 24 bits. Mask the input to ensure that the
kernel doesn't receive the higher bits, and add WARN_ONCE() to ensure
that other CM users do the same.

Fixes: 75216638572f ("RDMA/cma: Export rdma cm interface to userspace")
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c   | 5 ++++-
 drivers/infiniband/core/ucma.c | 2 +-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index c341a68b6f97..efa2d329da30 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2102,7 +2102,10 @@ int ib_send_cm_rep(struct ib_cm_id *cm_id,
 	cm_id_priv->initiator_depth = param->initiator_depth;
 	cm_id_priv->responder_resources = param->responder_resources;
 	cm_id_priv->rq_psn = cm_rep_get_starting_psn(rep_msg);
-	cm_id_priv->local_qpn = cpu_to_be32(param->qp_num & 0xFFFFFF);
+	WARN_ONCE(param->qp_num & 0xFF000000,
+		  "IBTA declares QPN to be 24 bits, but it is 0x%X\n",
+		  param->qp_num);
+	cm_id_priv->local_qpn = cpu_to_be32(param->qp_num);
 
 out:	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
 	return ret;
diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
index 0274e9b704be..57e68491a2fd 100644
--- a/drivers/infiniband/core/ucma.c
+++ b/drivers/infiniband/core/ucma.c
@@ -1045,7 +1045,7 @@ static void ucma_copy_conn_param(struct rdma_cm_id *id,
 	dst->retry_count = src->retry_count;
 	dst->rnr_retry_count = src->rnr_retry_count;
 	dst->srq = src->srq;
-	dst->qp_num = src->qp_num;
+	dst->qp_num = src->qp_num & 0xFFFFFF;
 	dst->qkey = (id->route.addr.src_addr.ss_family == AF_IB) ? src->qkey : 0;
 }
 
-- 
2.20.1



* [PATCH rdma-rc v2 04/48] RDMA/cm: Add SET/GET implementations to hide IBA wire format
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (2 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 03/48] RDMA/ucma: Mask QPN to be 24 bits according to IBTA Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2020-01-04  2:15   ` Jason Gunthorpe
  2019-12-12  9:37 ` [PATCH rdma-rc v2 05/48] RDMA/cm: Request For Communication (REQ) message definitions Leon Romanovsky
                   ` (45 subsequent siblings)
  49 siblings, 1 reply; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

There is no separation between the RDMA-CM wire format as declared in
IBTA and the kernel logic which implements the needed support. This
situation causes many mistakes in the conversion between the big-endian
(wire) format and the CPU format used by the kernel. It also clutters
the RDMA core code with a mix of uXX and beXX variables.

The idea is that all accesses to IBA definitions go through special
GET/SET macros, so that no conversion mistakes can be made.
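
To illustrate the mechanics with a purely hypothetical field (not part
of this patch), two bits at Byte[1] Bit[2] of some message, using the
Byte[Bit] notation of the IBTA tables, would be described as:

	struct example_msg {
		u8 data[16];
	};

	/* 2 bits at Byte[1] Bit[2], counted from the MSB as in the spec */
	#define EXAMPLE_FIELD IBA_FIELD_BLOC(struct example_msg, 1, 2, 2)

which expands to "struct example_msg, 1, GENMASK(5, 4), 8", so that

	val = IBA_GET(EXAMPLE_FIELD, msg);	/* FIELD_GET() on one byte */
	IBA_SET(EXAMPLE_FIELD, msg, val);	/* read-modify-write of the byte */

only touch the addressed bits and leave the neighbouring bits intact.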

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm_msgs.h |   6 +-
 include/rdma/iba.h                | 138 ++++++++++++++++++++++++++++++
 2 files changed, 139 insertions(+), 5 deletions(-)
 create mode 100644 include/rdma/iba.h

diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 92d7260ac913..9bc468833831 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -8,14 +8,10 @@
 #ifndef CM_MSGS_H
 #define CM_MSGS_H
 
+#include <rdma/iba.h>
 #include <rdma/ib_mad.h>
 #include <rdma/ib_cm.h>
 
-/*
- * Parameters to routines below should be in network-byte order, and values
- * are returned in network-byte order.
- */
-
 #define IB_CM_CLASS_VERSION	2 /* IB specification 1.2 */
 
 struct cm_req_msg {
diff --git a/include/rdma/iba.h b/include/rdma/iba.h
new file mode 100644
index 000000000000..a77cc89f9f3f
--- /dev/null
+++ b/include/rdma/iba.h
@@ -0,0 +1,138 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/*
+ * Copyright (c) 2019, Mellanox Technologies inc.  All rights reserved.
+ */
+#ifndef _IBA_DEFS_H_
+#define _IBA_DEFS_H_
+
+#include <linux/kernel.h>
+#include <linux/bitfield.h>
+#include <asm/unaligned.h>
+
+static inline u32 _iba_get8(const u8 *ptr)
+{
+	return *ptr;
+}
+
+static inline void _iba_set8(u8 *ptr, u32 mask, u32 prep_value)
+{
+	*ptr = (*ptr & ~mask) | prep_value;
+}
+
+static inline u16 _iba_get16(const __be16 *ptr)
+{
+	return be16_to_cpu(*ptr);
+}
+
+static inline void _iba_set16(__be16 *ptr, u16 mask, u16 prep_value)
+{
+	*ptr = cpu_to_be16((be16_to_cpu(*ptr) & ~mask) | prep_value);
+}
+
+static inline u32 _iba_get32(const __be32 *ptr)
+{
+	return be32_to_cpu(*ptr);
+}
+
+static inline void _iba_set32(__be32 *ptr, u32 mask, u32 prep_value)
+{
+	*ptr = cpu_to_be32((be32_to_cpu(*ptr) & ~mask) | prep_value);
+}
+
+static inline u64 _iba_get64(const __be64 *ptr)
+{
+	/*
+	 * The mads are constructed so that 32 bit and smaller are naturally
+	 * aligned, everything larger has a max alignment of 4 bytes.
+	 */
+	return be64_to_cpu(get_unaligned(ptr));
+}
+
+static inline void _iba_set64(__be64 *ptr, u64 mask, u64 prep_value)
+{
+	put_unaligned(cpu_to_be64((_iba_get64(ptr) & ~mask) | prep_value), ptr);
+}
+
+#define _IBA_SET(field_struct, field_offset, field_mask, mask_width, ptr,      \
+		 value)                                                        \
+	({                                                                     \
+		field_struct *_ptr = ptr;                                      \
+		_iba_set##mask_width((void *)_ptr + (field_offset),            \
+				     field_mask,                               \
+				     FIELD_PREP(field_mask, value));           \
+	})
+#define IBA_SET(field, ptr, value) _IBA_SET(field, ptr, value)
+
+#define _IBA_SET_MEM(field_struct, field_offset, byte_size, ptr, in, bytes)    \
+	({                                                                     \
+		WARN_ON(bytes > byte_size);                                    \
+		if (in && bytes) {                                             \
+			field_struct *_ptr = ptr;                              \
+			memcpy((void *)_ptr + (field_offset), in, bytes);      \
+		}                                                              \
+	})
+#define IBA_SET_MEM(field, ptr, in, bytes) _IBA_SET_MEM(field, ptr, in, bytes)
+
+#define _IBA_GET(field_struct, field_offset, field_mask, mask_width, ptr)      \
+	({                                                                     \
+		const field_struct *_ptr = ptr;                                \
+		(u##mask_width) FIELD_GET(                                     \
+			field_mask, _iba_get##mask_width((const void *)_ptr +  \
+							 (field_offset)));     \
+	})
+#define IBA_GET(field, ptr) _IBA_GET(field, ptr)
+
+#define _IBA_GET_MEM(field_struct, field_offset, byte_size, ptr, out, bytes)   \
+	({                                                                     \
+		WARN_ON(bytes > byte_size);                                    \
+		if (out && bytes) {                                            \
+			const field_struct *_ptr = ptr;                        \
+			memcpy(out, (void *)_ptr + (field_offset), bytes);     \
+		}                                                              \
+	})
+#define IBA_GET_MEM(field, ptr, out, bytes) _IBA_GET_MEM(field, ptr, out, bytes)
+
+/*
+ * The generated list becomes the parameters to the macros, the order is:
+ *  - struct this applies to
+ *  - starting offset of the mask
+ *  - GENMASK or GENMASK_ULL in CPU order
+ *  - The width of data the mask operations should work on, in bits
+ */
+
+/*
+ * Extraction using a tabular description like table 106. bit_offset is from
+ * the Byte[Bit] notation.
+ */
+#define IBA_FIELD_BLOC(field_struct, byte_offset, bit_offset, num_bits)        \
+	field_struct, byte_offset,                                             \
+		GENMASK(7 - (bit_offset), 7 - (bit_offset) - (num_bits - 1)),  \
+		8
+#define IBA_FIELD8_LOC(field_struct, byte_offset, num_bits)                    \
+	IBA_FIELD_BLOC(field_struct, byte_offset, 0, num_bits)
+
+#define IBA_FIELD16_LOC(field_struct, byte_offset, num_bits)                   \
+	field_struct, (byte_offset)&0xFFFE,                                    \
+		GENMASK(15 - (((byte_offset) % 2) * 8),                        \
+			15 - (((byte_offset) % 2) * 8) - (num_bits - 1)),      \
+		16
+
+#define IBA_FIELD32_LOC(field_struct, byte_offset, num_bits)                   \
+	field_struct, (byte_offset)&0xFFFC,                                    \
+		GENMASK(31 - (((byte_offset) % 4) * 8),                        \
+			31 - (((byte_offset) % 4) * 8) - (num_bits - 1)),      \
+		32
+
+#define IBA_FIELD64_LOC(field_struct, byte_offset, num_bits)                   \
+	field_struct, (byte_offset)&0xFFF8,                                    \
+		GENMASK_ULL(63 - (((byte_offset) % 8) * 8),                    \
+			    63 - (((byte_offset) % 8) * 8) - (num_bits - 1)),  \
+		64
+/*
+ * In IBTA spec, everything that is more than 64bits is multiple
+ * of bytes without leftover bits.
+ */
+#define IBA_FIELD_MLOC(field_struct, byte_offset, num_bits)                    \
+	field_struct, (byte_offset)&0xFFFC, (num_bits / 8)
+
+#endif /* _IBA_DEFS_H_ */
-- 
2.20.1



* [PATCH rdma-rc v2 05/48] RDMA/cm: Request For Communication (REQ) message definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (3 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 04/48] RDMA/cm: Add SET/GET implementations to hide IBA wire format Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2020-01-07  1:17   ` Jason Gunthorpe
  2019-12-12  9:37 ` [PATCH rdma-rc v2 06/48] RDMA/cm: Message Receipt Acknowledgment (MRA) " Leon Romanovsky
                   ` (44 subsequent siblings)
  49 siblings, 1 reply; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add the Request For Communication (REQ) message definitions as written
in IBTA release 1.3, volume 1.

There are two types of definitions:
1. Regular ones with an offset and a mask; they are accessible through
   IBA_GET()/IBA_SET().
2. GIDs and private data, which are accessible through
   IBA_GET_MEM()/IBA_SET_MEM() (see the usage sketch below).
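
For illustration, a usage sketch of both kinds of accessors (the local
variables are hypothetical, not part of this patch):

	/* scalar fields: no __beXX types or masks in the caller */
	u32 qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
	IBA_SET(CM_REQ_STARTING_PSN, req_msg, psn);

	/* memory fields: GIDs and private data are copied as-is */
	IBA_GET_MEM(CM_REQ_PRIMARY_LOCAL_PORT_GID, req_msg, &gid, sizeof(gid));
	IBA_SET_MEM(CM_REQ_PRIVATE_DATA, req_msg, private_data,
		    private_data_len);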

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  4 +-
 drivers/infiniband/core/cm_msgs.h |  4 +-
 drivers/infiniband/core/cma.c     |  3 +-
 include/rdma/ib_cm.h              |  1 -
 include/rdma/ibta_vol1_c12.h      | 91 +++++++++++++++++++++++++++++++
 5 files changed, 97 insertions(+), 6 deletions(-)
 create mode 100644 include/rdma/ibta_vol1_c12.h

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index efa2d329da30..3c0cbdc748ac 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1368,7 +1368,7 @@ static int cm_validate_req_param(struct ib_cm_req_param *param)
 		return -EINVAL;
 
 	if (param->private_data &&
-	    param->private_data_len > IB_CM_REQ_PRIVATE_DATA_SIZE)
+	    param->private_data_len > CM_REQ_PRIVATE_DATA_SIZE)
 		return -EINVAL;
 
 	if (param->alternate_path &&
@@ -1681,7 +1681,7 @@ static void cm_format_req_event(struct cm_work *work,
 	param->srq = cm_req_get_srq(req_msg);
 	param->ppath_sgid_attr = cm_id_priv->av.ah_attr.grh.sgid_attr;
 	work->cm_event.private_data = &req_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_REQ_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_REQ_PRIVATE_DATA_SIZE;
 }
 
 static void cm_process_work(struct cm_id_private *cm_id_priv,
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 9bc468833831..9e50da044c43 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -8,7 +8,7 @@
 #ifndef CM_MSGS_H
 #define CM_MSGS_H
 
-#include <rdma/iba.h>
+#include <rdma/ibta_vol1_c12.h>
 #include <rdma/ib_mad.h>
 #include <rdma/ib_cm.h>
 
@@ -66,7 +66,7 @@ struct cm_req_msg {
 	/* local ACK timeout:5, rsvd:3 */
 	u8 alt_offset139;
 
-	u32 private_data[IB_CM_REQ_PRIVATE_DATA_SIZE / sizeof(u32)];
+	u32 private_data[CM_REQ_PRIVATE_DATA_SIZE / sizeof(u32)];
 
 } __packed;
 
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 25f2b70fd8ef..02490a3c11f3 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -36,6 +36,7 @@
 
 #include "core_priv.h"
 #include "cma_priv.h"
+#include "cm_msgs.h"
 
 MODULE_AUTHOR("Sean Hefty");
 MODULE_DESCRIPTION("Generic RDMA CM Agent");
@@ -2085,7 +2086,7 @@ static void cma_set_req_event_data(struct rdma_cm_event *event,
 				   void *private_data, int offset)
 {
 	event->param.conn.private_data = private_data + offset;
-	event->param.conn.private_data_len = IB_CM_REQ_PRIVATE_DATA_SIZE - offset;
+	event->param.conn.private_data_len = CM_REQ_PRIVATE_DATA_SIZE - offset;
 	event->param.conn.responder_resources = req_data->responder_resources;
 	event->param.conn.initiator_depth = req_data->initiator_depth;
 	event->param.conn.flow_control = req_data->flow_control;
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index b476e0e27ec9..956256b2fc5d 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -65,7 +65,6 @@ enum ib_cm_event_type {
 };
 
 enum ib_cm_data_size {
-	IB_CM_REQ_PRIVATE_DATA_SIZE	 = 92,
 	IB_CM_MRA_PRIVATE_DATA_SIZE	 = 222,
 	IB_CM_REJ_PRIVATE_DATA_SIZE	 = 148,
 	IB_CM_REP_PRIVATE_DATA_SIZE	 = 196,
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
new file mode 100644
index 000000000000..2db7f736379e
--- /dev/null
+++ b/include/rdma/ibta_vol1_c12.h
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/*
+ * Copyright (c) 2019, Mellanox Technologies inc. All rights reserved.
+ *
+ * This file is IBTA volume 1, chapter 12 declarations:
+ * * CHAPTER 12: COMMUNICATION MANAGEMENT
+ */
+#ifndef _IBTA_VOL1_C12_H_
+#define _IBTA_VOL1_C12_H_
+
+#include <rdma/iba.h>
+
+#define CM_FIELD_BLOC(field_struct, byte_offset, bits_offset, width)           \
+	IBA_FIELD_BLOC(field_struct,                                           \
+		       (byte_offset + sizeof(struct ib_mad_hdr)), bits_offset, \
+		       width)
+#define CM_FIELD8_LOC(field_struct, byte_offset, width)                        \
+	IBA_FIELD8_LOC(field_struct,                                           \
+		       (byte_offset + sizeof(struct ib_mad_hdr)), width)
+#define CM_FIELD16_LOC(field_struct, byte_offset, width)                       \
+	IBA_FIELD16_LOC(field_struct,                                          \
+			(byte_offset + sizeof(struct ib_mad_hdr)), width)
+#define CM_FIELD32_LOC(field_struct, byte_offset, width)                       \
+	IBA_FIELD32_LOC(field_struct,                                          \
+			(byte_offset + sizeof(struct ib_mad_hdr)), width)
+#define CM_FIELD64_LOC(field_struct, byte_offset, width)                       \
+	IBA_FIELD64_LOC(field_struct,                                          \
+			(byte_offset + sizeof(struct ib_mad_hdr)), width)
+#define CM_FIELD_MLOC(field_struct, byte_offset, width)                        \
+	IBA_FIELD_MLOC(field_struct,                                           \
+		       (byte_offset + sizeof(struct ib_mad_hdr)), width)
+
+/* Table 106 REQ Message Contents */
+#define CM_REQ_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_req_msg, 0, 32)
+#define CM_REQ_SERVICE_ID CM_FIELD64_LOC(struct cm_req_msg, 8, 64)
+#define CM_REQ_LOCAL_CA_GUID CM_FIELD64_LOC(struct cm_req_msg, 16, 64)
+#define CM_REQ_LOCAL_Q_KEY CM_FIELD32_LOC(struct cm_req_msg, 28, 32)
+#define CM_REQ_LOCAL_QPN CM_FIELD32_LOC(struct cm_req_msg, 32, 24)
+#define CM_REQ_RESPONDED_RESOURCES CM_FIELD8_LOC(struct cm_req_msg, 35, 8)
+#define CM_REQ_LOCAL_EECN CM_FIELD32_LOC(struct cm_req_msg, 36, 24)
+#define CM_REQ_INITIATOR_DEPTH CM_FIELD8_LOC(struct cm_req_msg, 39, 8)
+#define CM_REQ_REMOTE_EECN CM_FIELD32_LOC(struct cm_req_msg, 40, 24)
+#define CM_REQ_REMOTE_CM_RESPONSE_TIMEOUT                                      \
+	CM_FIELD8_LOC(struct cm_req_msg, 43, 5)
+#define CM_REQ_TRANSPORT_SERVICE_TYPE CM_FIELD_BLOC(struct cm_req_msg, 43, 5, 2)
+#define CM_REQ_END_TO_END_FLOW_CONTROL                                         \
+	CM_FIELD_BLOC(struct cm_req_msg, 43, 7, 1)
+#define CM_REQ_STARTING_PSN CM_FIELD32_LOC(struct cm_req_msg, 44, 24)
+#define CM_REQ_LOCAL_CM_RESPONSE_TIMEOUT CM_FIELD8_LOC(struct cm_req_msg, 47, 5)
+#define CM_REQ_RETRY_COUNT CM_FIELD_BLOC(struct cm_req_msg, 47, 5, 3)
+#define CM_REQ_PARTITION_KEY CM_FIELD16_LOC(struct cm_req_msg, 48, 16)
+#define CM_REQ_PATH_PACKET_PAYLOAD_MTU CM_FIELD8_LOC(struct cm_req_msg, 50, 4)
+#define CM_REQ_RDC_EXISTS CM_FIELD_BLOC(struct cm_req_msg, 50, 4, 1)
+#define CM_REQ_RNR_RETRY_COUNT CM_FIELD_BLOC(struct cm_req_msg, 50, 5, 3)
+#define CM_REQ_MAX_CM_RETRIES CM_FIELD8_LOC(struct cm_req_msg, 51, 4)
+#define CM_REQ_SRQ CM_FIELD_BLOC(struct cm_req_msg, 51, 4, 1)
+#define CM_REQ_EXTENDED_TRANSPORT_TYPE                                         \
+	CM_FIELD_BLOC(struct cm_req_msg, 51, 5, 3)
+#define CM_REQ_PRIMARY_LOCAL_PORT_LID CM_FIELD16_LOC(struct cm_req_msg, 52, 16)
+#define CM_REQ_PRIMARY_REMOTE_PORT_LID CM_FIELD16_LOC(struct cm_req_msg, 54, 16)
+#define CM_REQ_PRIMARY_LOCAL_PORT_GID CM_FIELD_MLOC(struct cm_req_msg, 56, 128)
+#define CM_REQ_PRIMARY_REMOTE_PORT_GID CM_FIELD_MLOC(struct cm_req_msg, 72, 128)
+#define CM_REQ_PRIMARY_FLOW_LABEL CM_FIELD32_LOC(struct cm_req_msg, 88, 20)
+#define CM_REQ_PRIMARY_PACKET_RATE CM_FIELD_BLOC(struct cm_req_msg, 91, 2, 2)
+#define CM_REQ_PRIMARY_TRAFFIC_CLASS CM_FIELD8_LOC(struct cm_req_msg, 92, 8)
+#define CM_REQ_PRIMARY_HOP_LIMIT CM_FIELD8_LOC(struct cm_req_msg, 93, 8)
+#define CM_REQ_PRIMARY_SL CM_FIELD8_LOC(struct cm_req_msg, 94, 4)
+#define CM_REQ_PRIMARY_SUBNET_LOCAL CM_FIELD_BLOC(struct cm_req_msg, 94, 4, 1)
+#define CM_REQ_PRIMARY_LOCAL_ACK_TIMEOUT CM_FIELD8_LOC(struct cm_req_msg, 95, 5)
+#define CM_REQ_ALTERNATE_LOCAL_PORT_LID                                        \
+	CM_FIELD16_LOC(struct cm_req_msg, 96, 16)
+#define CM_REQ_ALTERNATE_REMOTE_PORT_LID                                       \
+	CM_FIELD16_LOC(struct cm_req_msg, 98, 16)
+#define CM_REQ_ALTERNATE_LOCAL_PORT_GID                                        \
+	CM_FIELD_MLOC(struct cm_req_msg, 100, 128)
+#define CM_REQ_ALTERNATE_REMOTE_PORT_GID                                       \
+	CM_FIELD_MLOC(struct cm_req_msg, 116, 128)
+#define CM_REQ_ALTERNATE_FLOW_LABEL CM_FIELD32_LOC(struct cm_req_msg, 132, 20)
+#define CM_REQ_ALTERNATE_PACKET_RATE CM_FIELD_BLOC(struct cm_req_msg, 135, 2, 6)
+#define CM_REQ_ALTERNATE_TRAFFIC_CLASS CM_FIELD8_LOC(struct cm_req_msg, 136, 8)
+#define CM_REQ_ALTERNATE_HOP_LIMIT CM_FIELD8_LOC(struct cm_req_msg, 137, 8)
+#define CM_REQ_ALTERNATE_SL CM_FIELD8_LOC(struct cm_req_msg, 138, 4)
+#define CM_REQ_ALTERNATE_SUBNET_LOCAL                                          \
+	CM_FIELD_BLOC(struct cm_req_msg, 138, 4, 1)
+#define CM_REQ_ALTERNATE_LOCAL_ACK_TIMEOUT                                     \
+	CM_FIELD8_LOC(struct cm_req_msg, 139, 5)
+#define CM_REQ_SAP_SUPPORTED CM_FIELD_BLOC(struct cm_req_msg, 139, 5, 1)
+#define CM_REQ_PRIVATE_DATA CM_FIELD_MLOC(struct cm_req_msg, 140, 736)
+#define CM_REQ_PRIVATE_DATA_SIZE 92
+
+#endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1



* [PATCH rdma-rc v2 06/48] RDMA/cm: Message Receipt Acknowledgment (MRA) message definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (4 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 05/48] RDMA/cm: Request For Communication (REQ) message definitions Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 07/48] RDMA/cm: Reject (REJ) " Leon Romanovsky
                   ` (43 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add the Message Receipt Acknowledgment (MRA) definitions as written in
IBTA release 1.3, volume 1.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 4 ++--
 drivers/infiniband/core/cm_msgs.h | 2 +-
 include/rdma/ib_cm.h              | 1 -
 include/rdma/ibta_vol1_c12.h      | 8 ++++++++
 4 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 3c0cbdc748ac..5a0ee9e46ff9 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2897,7 +2897,7 @@ int ib_send_cm_mra(struct ib_cm_id *cm_id,
 	unsigned long flags;
 	int ret;
 
-	if (private_data && private_data_len > IB_CM_MRA_PRIVATE_DATA_SIZE)
+	if (private_data && private_data_len > CM_MRA_PRIVATE_DATA_SIZE)
 		return -EINVAL;
 
 	data = cm_copy_private_data(private_data, private_data_len);
@@ -2991,7 +2991,7 @@ static int cm_mra_handler(struct cm_work *work)
 		return -EINVAL;
 
 	work->cm_event.private_data = &mra_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_MRA_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_MRA_PRIVATE_DATA_SIZE;
 	work->cm_event.param.mra_rcvd.service_timeout =
 					cm_mra_get_service_timeout(mra_msg);
 	timeout = cm_convert_to_ms(cm_mra_get_service_timeout(mra_msg)) +
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 9e50da044c43..888209ec058d 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -391,7 +391,7 @@ enum cm_msg_response {
 	/* service timeout:5, rsvd:3 */
 	u8 offset9;
 
-	u8 private_data[IB_CM_MRA_PRIVATE_DATA_SIZE];
+	u8 private_data[CM_MRA_PRIVATE_DATA_SIZE];
 
 } __packed;
 
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index 956256b2fc5d..6d73316be651 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -65,7 +65,6 @@ enum ib_cm_event_type {
 };
 
 enum ib_cm_data_size {
-	IB_CM_MRA_PRIVATE_DATA_SIZE	 = 222,
 	IB_CM_REJ_PRIVATE_DATA_SIZE	 = 148,
 	IB_CM_REP_PRIVATE_DATA_SIZE	 = 196,
 	IB_CM_RTU_PRIVATE_DATA_SIZE	 = 224,
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index 2db7f736379e..52dbc80a0d1b 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -88,4 +88,12 @@
 #define CM_REQ_PRIVATE_DATA CM_FIELD_MLOC(struct cm_req_msg, 140, 736)
 #define CM_REQ_PRIVATE_DATA_SIZE 92
 
+/* Table 107 MRA Message Contents */
+#define CM_MRA_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_mra_msg, 0, 32)
+#define CM_MRA_REMOTE_COMM_ID CM_FIELD32_LOC(struct cm_mra_msg, 4, 32)
+#define CM_MRA_MESSAGE_MRAED CM_FIELD8_LOC(struct cm_mra_msg, 8, 2)
+#define CM_MRA_SERVICE_TIMEOUT CM_FIELD8_LOC(struct cm_mra_msg, 9, 5)
+#define CM_MRA_PRIVATE_DATA CM_FIELD_MLOC(struct cm_mra_msg, 10, 1776)
+#define CM_MRA_PRIVATE_DATA_SIZE 222
+
 #endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1



* [PATCH rdma-rc v2 07/48] RDMA/cm: Reject (REJ) message definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (5 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 06/48] RDMA/cm: Message Receipt Acknowledgment (MRA) " Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 08/48] RDMA/cm: Reply To Request for communication (REP) definitions Leon Romanovsky
                   ` (42 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add the Reject (REJ) definitions as written in IBTA release 1.3,
volume 1.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  6 +++---
 drivers/infiniband/core/cm_msgs.h |  4 ++--
 drivers/infiniband/core/cma.c     |  2 +-
 include/rdma/ib_cm.h              |  2 --
 include/rdma/ibta_vol1_c12.h      | 11 +++++++++++
 5 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 5a0ee9e46ff9..f19c817ac99f 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2716,8 +2716,8 @@ int ib_send_cm_rej(struct ib_cm_id *cm_id,
 	unsigned long flags;
 	int ret;
 
-	if ((private_data && private_data_len > IB_CM_REJ_PRIVATE_DATA_SIZE) ||
-	    (ari && ari_length > IB_CM_REJ_ARI_LENGTH))
+	if ((private_data && private_data_len > CM_REJ_PRIVATE_DATA_SIZE) ||
+	    (ari && ari_length > CM_REJ_ARI_SIZE))
 		return -EINVAL;
 
 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
@@ -2778,7 +2778,7 @@ static void cm_format_rej_event(struct cm_work *work)
 	param->ari_length = cm_rej_get_reject_info_len(rej_msg);
 	param->reason = __be16_to_cpu(rej_msg->reason);
 	work->cm_event.private_data = &rej_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_REJ_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_REJ_PRIVATE_DATA_SIZE;
 }
 
 static struct cm_id_private * cm_acquire_rejected_id(struct cm_rej_msg *rej_msg)
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 888209ec058d..48c97ec4ae13 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -427,9 +427,9 @@ struct cm_rej_msg {
 	/* reject info length:7, rsvd:1. */
 	u8 offset9;
 	__be16 reason;
-	u8 ari[IB_CM_REJ_ARI_LENGTH];
+	u8 ari[CM_REJ_ARI_SIZE];
 
-	u8 private_data[IB_CM_REJ_PRIVATE_DATA_SIZE];
+	u8 private_data[CM_REJ_PRIVATE_DATA_SIZE];
 
 } __packed;
 
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 02490a3c11f3..8495ad001e92 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -1951,7 +1951,7 @@ static int cma_ib_handler(struct ib_cm_id *cm_id,
 		event.status = ib_event->param.rej_rcvd.reason;
 		event.event = RDMA_CM_EVENT_REJECTED;
 		event.param.conn.private_data = ib_event->private_data;
-		event.param.conn.private_data_len = IB_CM_REJ_PRIVATE_DATA_SIZE;
+		event.param.conn.private_data_len = CM_REJ_PRIVATE_DATA_SIZE;
 		break;
 	default:
 		pr_err("RDMA CMA: unexpected IB CM event: %d\n",
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index 6d73316be651..a5b9bd49041b 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -65,12 +65,10 @@ enum ib_cm_event_type {
 };
 
 enum ib_cm_data_size {
-	IB_CM_REJ_PRIVATE_DATA_SIZE	 = 148,
 	IB_CM_REP_PRIVATE_DATA_SIZE	 = 196,
 	IB_CM_RTU_PRIVATE_DATA_SIZE	 = 224,
 	IB_CM_DREQ_PRIVATE_DATA_SIZE	 = 220,
 	IB_CM_DREP_PRIVATE_DATA_SIZE	 = 224,
-	IB_CM_REJ_ARI_LENGTH		 = 72,
 	IB_CM_LAP_PRIVATE_DATA_SIZE	 = 168,
 	IB_CM_APR_PRIVATE_DATA_SIZE	 = 148,
 	IB_CM_APR_INFO_LENGTH		 = 72,
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index 52dbc80a0d1b..08378eb4d6df 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -96,4 +96,15 @@
 #define CM_MRA_PRIVATE_DATA CM_FIELD_MLOC(struct cm_mra_msg, 10, 1776)
 #define CM_MRA_PRIVATE_DATA_SIZE 222
 
+/* Table 108 REJ Message Contents */
+#define CM_REJ_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_rej_msg, 0, 32)
+#define CM_REJ_REMOTE_COMM_ID CM_FIELD32_LOC(struct cm_rej_msg, 4, 32)
+#define CM_REJ_MESSAGE_REJECTED CM_FIELD8_LOC(struct cm_rej_msg, 8, 2)
+#define CM_REJ_REJECTED_INFO_LENGTH CM_FIELD8_LOC(struct cm_rej_msg, 9, 7)
+#define CM_REJ_REASON CM_FIELD16_LOC(struct cm_rej_msg, 10, 16)
+#define CM_REJ_ARI CM_FIELD_MLOC(struct cm_rej_msg, 12, 576)
+#define CM_REJ_ARI_SIZE 72
+#define CM_REJ_PRIVATE_DATA CM_FIELD_MLOC(struct cm_rej_msg, 84, 1184)
+#define CM_REJ_PRIVATE_DATA_SIZE 148
+
 #endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1



* [PATCH rdma-rc v2 08/48] RDMA/cm: Reply To Request for communication (REP) definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (6 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 07/48] RDMA/cm: Reject (REJ) " Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 09/48] RDMA/cm: Ready To Use (RTU) definitions Leon Romanovsky
                   ` (41 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add the REP message definitions as written in IBTA release 1.3,
volume 1.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  4 ++--
 drivers/infiniband/core/cm_msgs.h |  2 +-
 drivers/infiniband/core/cma.c     |  2 +-
 include/rdma/ib_cm.h              |  1 -
 include/rdma/ibta_vol1_c12.h      | 19 +++++++++++++++++++
 5 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index f19c817ac99f..ffdca9d1c3f6 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2068,7 +2068,7 @@ int ib_send_cm_rep(struct ib_cm_id *cm_id,
 	int ret;
 
 	if (param->private_data &&
-	    param->private_data_len > IB_CM_REP_PRIVATE_DATA_SIZE)
+	    param->private_data_len > CM_REP_PRIVATE_DATA_SIZE)
 		return -EINVAL;
 
 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
@@ -2197,7 +2197,7 @@ static void cm_format_rep_event(struct cm_work *work, enum ib_qp_type qp_type)
 	param->rnr_retry_count = cm_rep_get_rnr_retry_count(rep_msg);
 	param->srq = cm_rep_get_srq(rep_msg);
 	work->cm_event.private_data = &rep_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_REP_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_REP_PRIVATE_DATA_SIZE;
 }
 
 static void cm_dup_rep_handler(struct cm_work *work)
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 48c97ec4ae13..b66e9eaf9721 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -474,7 +474,7 @@ struct cm_rep_msg {
 	u8 offset27;
 	__be64 local_ca_guid;
 
-	u8 private_data[IB_CM_REP_PRIVATE_DATA_SIZE];
+	u8 private_data[CM_REP_PRIVATE_DATA_SIZE];
 
 } __packed;
 
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 8495ad001e92..ece92889aa88 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -1882,7 +1882,7 @@ static void cma_set_rep_event_data(struct rdma_cm_event *event,
 				   void *private_data)
 {
 	event->param.conn.private_data = private_data;
-	event->param.conn.private_data_len = IB_CM_REP_PRIVATE_DATA_SIZE;
+	event->param.conn.private_data_len = CM_REP_PRIVATE_DATA_SIZE;
 	event->param.conn.responder_resources = rep_data->responder_resources;
 	event->param.conn.initiator_depth = rep_data->initiator_depth;
 	event->param.conn.flow_control = rep_data->flow_control;
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index a5b9bd49041b..ebfbf63388de 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -65,7 +65,6 @@ enum ib_cm_event_type {
 };
 
 enum ib_cm_data_size {
-	IB_CM_REP_PRIVATE_DATA_SIZE	 = 196,
 	IB_CM_RTU_PRIVATE_DATA_SIZE	 = 224,
 	IB_CM_DREQ_PRIVATE_DATA_SIZE	 = 220,
 	IB_CM_DREP_PRIVATE_DATA_SIZE	 = 224,
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index 08378eb4d6df..966517ed229d 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -107,4 +107,23 @@
 #define CM_REJ_PRIVATE_DATA CM_FIELD_MLOC(struct cm_rej_msg, 84, 1184)
 #define CM_REJ_PRIVATE_DATA_SIZE 148
 
+/* Table 110 REP Message Contents */
+#define CM_REP_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_rep_msg, 0, 32)
+#define CM_REP_REMOTE_COMM_ID CM_FIELD32_LOC(struct cm_rep_msg, 4, 32)
+#define CM_REP_LOCAL_Q_KEY CM_FIELD32_LOC(struct cm_rep_msg, 8, 32)
+#define CM_REP_LOCAL_QPN CM_FIELD32_LOC(struct cm_rep_msg, 12, 24)
+#define CM_REP_LOCAL_EE_CONTEXT_NUMBER CM_FIELD32_LOC(struct cm_rep_msg, 16, 24)
+#define CM_REP_STARTING_PSN CM_FIELD32_LOC(struct cm_rep_msg, 20, 24)
+#define CM_REP_RESPONDER_RESOURCES CM_FIELD8_LOC(struct cm_rep_msg, 24, 8)
+#define CM_REP_INITIATOR_DEPTH CM_FIELD8_LOC(struct cm_rep_msg, 25, 8)
+#define CM_REP_TARGET_ACK_DELAY CM_FIELD8_LOC(struct cm_rep_msg, 26, 5)
+#define CM_REP_FAILOVER_ACCEPTED CM_FIELD_BLOC(struct cm_rep_msg, 26, 5, 2)
+#define CM_REP_END_TO_END_FLOW_CONTROL                                         \
+	CM_FIELD_BLOC(struct cm_rep_msg, 26, 7, 1)
+#define CM_REP_RNR_RETRY_COUNT CM_FIELD8_LOC(struct cm_rep_msg, 27, 3)
+#define CM_REP_SRQ CM_FIELD_BLOC(struct cm_rep_msg, 27, 3, 1)
+#define CM_REP_LOCAL_CA_GUID CM_FIELD64_LOC(struct cm_rep_msg, 28, 64)
+#define CM_REP_PRIVATE_DATA CM_FIELD_MLOC(struct cm_rep_msg, 36, 1568)
+#define CM_REP_PRIVATE_DATA_SIZE 196
+
 #endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1



* [PATCH rdma-rc v2 09/48] RDMA/cm: Ready To Use (RTU) definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (7 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 08/48] RDMA/cm: Reply To Request for communication (REP) definitions Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 10/48] RDMA/cm: Request For Communication Release (DREQ) definitions Leon Romanovsky
                   ` (40 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add the RTU message definitions as written in IBTA release 1.3, volume 1.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 4 ++--
 drivers/infiniband/core/cm_msgs.h | 2 +-
 include/rdma/ib_cm.h              | 1 -
 include/rdma/ibta_vol1_c12.h      | 6 ++++++
 4 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index ffdca9d1c3f6..73f93e354d8b 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2135,7 +2135,7 @@ int ib_send_cm_rtu(struct ib_cm_id *cm_id,
 	void *data;
 	int ret;
 
-	if (private_data && private_data_len > IB_CM_RTU_PRIVATE_DATA_SIZE)
+	if (private_data && private_data_len > CM_RTU_PRIVATE_DATA_SIZE)
 		return -EINVAL;
 
 	data = cm_copy_private_data(private_data, private_data_len);
@@ -2400,7 +2400,7 @@ static int cm_rtu_handler(struct cm_work *work)
 		return -EINVAL;
 
 	work->cm_event.private_data = &rtu_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_RTU_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_RTU_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
 	if (cm_id_priv->id.state != IB_CM_REP_SENT &&
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index b66e9eaf9721..d36e90ccaeb7 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -582,7 +582,7 @@ struct cm_rtu_msg {
 	__be32 local_comm_id;
 	__be32 remote_comm_id;
 
-	u8 private_data[IB_CM_RTU_PRIVATE_DATA_SIZE];
+	u8 private_data[CM_RTU_PRIVATE_DATA_SIZE];
 
 } __packed;
 
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index ebfbf63388de..34dc12b30399 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -65,7 +65,6 @@ enum ib_cm_event_type {
 };
 
 enum ib_cm_data_size {
-	IB_CM_RTU_PRIVATE_DATA_SIZE	 = 224,
 	IB_CM_DREQ_PRIVATE_DATA_SIZE	 = 220,
 	IB_CM_DREP_PRIVATE_DATA_SIZE	 = 224,
 	IB_CM_LAP_PRIVATE_DATA_SIZE	 = 168,
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index 966517ed229d..892d3122023c 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -126,4 +126,10 @@
 #define CM_REP_PRIVATE_DATA CM_FIELD_MLOC(struct cm_rep_msg, 36, 1568)
 #define CM_REP_PRIVATE_DATA_SIZE 196
 
+/* Table 111 RTU Message Contents */
+#define CM_RTU_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_rtu_msg, 0, 32)
+#define CM_RTU_REMOTE_COMM_ID CM_FIELD32_LOC(struct cm_rtu_msg, 4, 32)
+#define CM_RTU_PRIVATE_DATA CM_FIELD_MLOC(struct cm_rtu_msg, 8, 1792)
+#define CM_RTU_PRIVATE_DATA_SIZE 224
+
 #endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1



* [PATCH rdma-rc v2 10/48] RDMA/cm: Request For Communication Release (DREQ) definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (8 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 09/48] RDMA/cm: Ready To Use (RTU) definitions Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 11/48] RDMA/cm: Reply To Request For Communication Release (DREP) definitions Leon Romanovsky
                   ` (39 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add the DREQ definitions as written in IBTA release 1.3, volume 1.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 4 ++--
 drivers/infiniband/core/cm_msgs.h | 2 +-
 include/rdma/ib_cm.h              | 1 -
 include/rdma/ibta_vol1_c12.h      | 7 +++++++
 4 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 73f93e354d8b..57e35d2f657f 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2452,7 +2452,7 @@ int ib_send_cm_dreq(struct ib_cm_id *cm_id,
 	unsigned long flags;
 	int ret;
 
-	if (private_data && private_data_len > IB_CM_DREQ_PRIVATE_DATA_SIZE)
+	if (private_data && private_data_len > CM_DREQ_PRIVATE_DATA_SIZE)
 		return -EINVAL;
 
 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
@@ -2603,7 +2603,7 @@ static int cm_dreq_handler(struct cm_work *work)
 	}
 
 	work->cm_event.private_data = &dreq_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_DREQ_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_DREQ_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
 	if (cm_id_priv->local_qpn != cm_dreq_get_remote_qpn(dreq_msg))
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index d36e90ccaeb7..c2b19d21fbe6 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -594,7 +594,7 @@ struct cm_dreq_msg {
 	/* remote QPN/EECN:24, rsvd:8 */
 	__be32 offset8;
 
-	u8 private_data[IB_CM_DREQ_PRIVATE_DATA_SIZE];
+	u8 private_data[CM_DREQ_PRIVATE_DATA_SIZE];
 
 } __packed;
 
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index 34dc12b30399..77f4818aaf10 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -65,7 +65,6 @@ enum ib_cm_event_type {
 };
 
 enum ib_cm_data_size {
-	IB_CM_DREQ_PRIVATE_DATA_SIZE	 = 220,
 	IB_CM_DREP_PRIVATE_DATA_SIZE	 = 224,
 	IB_CM_LAP_PRIVATE_DATA_SIZE	 = 168,
 	IB_CM_APR_PRIVATE_DATA_SIZE	 = 148,
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index 892d3122023c..e3116b62d7ba 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -132,4 +132,11 @@
 #define CM_RTU_PRIVATE_DATA CM_FIELD_MLOC(struct cm_rtu_msg, 8, 1792)
 #define CM_RTU_PRIVATE_DATA_SIZE 224
 
+/* Table 112 DREQ Message Contents */
+#define CM_DREQ_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_dreq_msg, 0, 32)
+#define CM_DREQ_REMOTE_COMM_ID CM_FIELD32_LOC(struct cm_dreq_msg, 4, 32)
+#define CM_DREQ_REMOTE_QPN_EECN CM_FIELD32_LOC(struct cm_dreq_msg, 8, 24)
+#define CM_DREQ_PRIVATE_DATA CM_FIELD_MLOC(struct cm_dreq_msg, 12, 1760)
+#define CM_DREQ_PRIVATE_DATA_SIZE 220
+
 #endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1



* [PATCH rdma-rc v2 11/48] RDMA/cm: Reply To Request For Communication Release (DREP) definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (9 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 10/48] RDMA/cm: Request For Communication Release (DREQ) definitions Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 12/48] RDMA/cm: Load Alternate Path (LAP) definitions Leon Romanovsky
                   ` (38 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add the DREP definitions as written in IBTA release 1.3, volume 1.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 4 ++--
 drivers/infiniband/core/cm_msgs.h | 2 +-
 include/rdma/ib_cm.h              | 1 -
 include/rdma/ibta_vol1_c12.h      | 6 ++++++
 4 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 57e35d2f657f..d11ca6bdf016 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2517,7 +2517,7 @@ int ib_send_cm_drep(struct ib_cm_id *cm_id,
 	void *data;
 	int ret;
 
-	if (private_data && private_data_len > IB_CM_DREP_PRIVATE_DATA_SIZE)
+	if (private_data && private_data_len > CM_DREP_PRIVATE_DATA_SIZE)
 		return -EINVAL;
 
 	data = cm_copy_private_data(private_data, private_data_len);
@@ -2678,7 +2678,7 @@ static int cm_drep_handler(struct cm_work *work)
 		return -EINVAL;
 
 	work->cm_event.private_data = &drep_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_DREP_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_DREP_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
 	if (cm_id_priv->id.state != IB_CM_DREQ_SENT &&
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index c2b19d21fbe6..98088a84f2fc 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -615,7 +615,7 @@ struct cm_drep_msg {
 	__be32 local_comm_id;
 	__be32 remote_comm_id;
 
-	u8 private_data[IB_CM_DREP_PRIVATE_DATA_SIZE];
+	u8 private_data[CM_DREP_PRIVATE_DATA_SIZE];
 
 } __packed;
 
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index 77f4818aaf10..f1fccc8f387f 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -65,7 +65,6 @@ enum ib_cm_event_type {
 };
 
 enum ib_cm_data_size {
-	IB_CM_DREP_PRIVATE_DATA_SIZE	 = 224,
 	IB_CM_LAP_PRIVATE_DATA_SIZE	 = 168,
 	IB_CM_APR_PRIVATE_DATA_SIZE	 = 148,
 	IB_CM_APR_INFO_LENGTH		 = 72,
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index e3116b62d7ba..8d57e24534aa 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -139,4 +139,10 @@
 #define CM_DREQ_PRIVATE_DATA CM_FIELD_MLOC(struct cm_dreq_msg, 12, 1760)
 #define CM_DREQ_PRIVATE_DATA_SIZE 220
 
+/* Table 113 DREP Message Contents */
+#define CM_DREP_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_drep_msg, 0, 32)
+#define CM_DREP_REMOTE_COMM_ID CM_FIELD32_LOC(struct cm_drep_msg, 4, 32)
+#define CM_DREP_PRIVATE_DATA CM_FIELD_MLOC(struct cm_drep_msg, 8, 1792)
+#define CM_DREP_PRIVATE_DATA_SIZE 224
+
 #endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 12/48] RDMA/cm: Load Alternate Path (LAP) definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (10 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 11/48] RDMA/cm: Reply To Request For Communication Release (DREP) definitions Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 13/48] RDMA/cm: Alternate Path Response (APR) message definitions Leon Romanovsky
                   ` (37 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add LAP message definitions as written in IBTA release 1.3 volume 1.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  4 ++--
 drivers/infiniband/core/cm_msgs.h |  2 +-
 include/rdma/ib_cm.h              |  1 -
 include/rdma/ibta_vol1_c12.h      | 26 ++++++++++++++++++++++++++
 4 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index d11ca6bdf016..002904c03554 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -3110,7 +3110,7 @@ int ib_send_cm_lap(struct ib_cm_id *cm_id,
 	unsigned long flags;
 	int ret;
 
-	if (private_data && private_data_len > IB_CM_LAP_PRIVATE_DATA_SIZE)
+	if (private_data && private_data_len > CM_LAP_PRIVATE_DATA_SIZE)
 		return -EINVAL;
 
 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
@@ -3224,7 +3224,7 @@ static int cm_lap_handler(struct cm_work *work)
 	param->alternate_path = &work->path[0];
 	cm_format_path_from_lap(cm_id_priv, param->alternate_path, lap_msg);
 	work->cm_event.private_data = &lap_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_LAP_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_LAP_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
 	if (cm_id_priv->id.state != IB_CM_ESTABLISHED)
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 98088a84f2fc..6c94d083c996 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -644,7 +644,7 @@ struct cm_lap_msg {
 	/* local ACK timeout:5, rsvd:3 */
 	u8 offset63;
 
-	u8 private_data[IB_CM_LAP_PRIVATE_DATA_SIZE];
+	u8 private_data[CM_LAP_PRIVATE_DATA_SIZE];
 } __packed;
 
 static inline __be32 cm_lap_get_remote_qpn(struct cm_lap_msg *lap_msg)
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index f1fccc8f387f..08d3217bdaf1 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -65,7 +65,6 @@ enum ib_cm_event_type {
 };
 
 enum ib_cm_data_size {
-	IB_CM_LAP_PRIVATE_DATA_SIZE	 = 168,
 	IB_CM_APR_PRIVATE_DATA_SIZE	 = 148,
 	IB_CM_APR_INFO_LENGTH		 = 72,
 	IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE = 216,
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index 8d57e24534aa..d998cf0cde4c 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -145,4 +145,30 @@
 #define CM_DREP_PRIVATE_DATA CM_FIELD_MLOC(struct cm_drep_msg, 8, 1792)
 #define CM_DREP_PRIVATE_DATA_SIZE 224
 
+/* Table 115 LAP Message Contents */
+#define CM_LAP_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_lap_msg, 0, 32)
+#define CM_LAP_REMOTE_COMM_ID CM_FIELD32_LOC(struct cm_lap_msg, 4, 32)
+#define CM_LAP_REMOTE_QPN_EECN CM_FIELD32_LOC(struct cm_lap_msg, 12, 24)
+#define CM_LAP_REMOTE_CM_RESPONSE_TIMEOUT                                      \
+	CM_FIELD8_LOC(struct cm_lap_msg, 15, 5)
+#define CM_LAP_ALTERNATE_LOCAL_PORT_LID                                        \
+	CM_FIELD16_LOC(struct cm_lap_msg, 20, 16)
+#define CM_LAP_ALTERNATE_REMOTE_PORT_LID                                       \
+	CM_FIELD16_LOC(struct cm_lap_msg, 22, 16)
+#define CM_LAP_ALTERNATE_LOCAL_PORT_GID                                        \
+	CM_FIELD_MLOC(struct cm_lap_msg, 24, 128)
+#define CM_LAP_ALTERNATE_REMOTE_PORT_GID                                       \
+	CM_FIELD_MLOC(struct cm_lap_msg, 40, 128)
+#define CM_LAP_ALTERNATE_FLOW_LABEL CM_FIELD32_LOC(struct cm_lap_msg, 56, 20)
+#define CM_LAP_ALTERNATE_TRAFFIC_CLASS CM_FIELD8_LOC(struct cm_lap_msg, 59, 8)
+#define CM_LAP_ALTERNATE_HOP_LIMIT CM_FIELD8_LOC(struct cm_lap_msg, 60, 8)
+#define CM_LAP_ALTERNATE_PACKET_RATE CM_FIELD_BLOC(struct cm_lap_msg, 61, 2, 6)
+#define CM_LAP_ALTERNATE_SL CM_FIELD8_LOC(struct cm_lap_msg, 62, 4)
+#define CM_LAP_ALTERNATE_SUBNET_LOCAL                                          \
+	CM_FIELD_BLOC(struct cm_lap_msg, 62, 4, 1)
+#define CM_LAP_ALTERNATE_LOCAL_ACK_TIMEOUT                                     \
+	CM_FIELD8_LOC(struct cm_lap_msg, 63, 5)
+#define CM_LAP_PRIVATE_DATA CM_FIELD_MLOC(struct cm_lap_msg, 64, 1344)
+#define CM_LAP_PRIVATE_DATA_SIZE 168
+
 #endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 13/48] RDMA/cm: Alternate Path Response (APR) message definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (11 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 12/48] RDMA/cm: Load Alternate Path (LAP) definitions Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 14/48] RDMA/cm: Service ID Resolution Request (SIDR_REQ) definitions Leon Romanovsky
                   ` (36 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add APR definitions as written in IBTA release 1.3 volume 1.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  6 +++---
 drivers/infiniband/core/cm_msgs.h |  4 ++--
 include/rdma/ib_cm.h              |  1 -
 include/rdma/ibta_vol1_c12.h      | 11 +++++++++++
 4 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 002904c03554..275603c56581 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -3323,8 +3323,8 @@ int ib_send_cm_apr(struct ib_cm_id *cm_id,
 	unsigned long flags;
 	int ret;
 
-	if ((private_data && private_data_len > IB_CM_APR_PRIVATE_DATA_SIZE) ||
-	    (info && info_length > IB_CM_APR_INFO_LENGTH))
+	if ((private_data && private_data_len > CM_APR_PRIVATE_DATA_SIZE) ||
+	    (info && info_length > CM_APR_ADDITIONAL_INFORMATION_SIZE))
 		return -EINVAL;
 
 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
@@ -3378,7 +3378,7 @@ static int cm_apr_handler(struct cm_work *work)
 	work->cm_event.param.apr_rcvd.apr_info = &apr_msg->info;
 	work->cm_event.param.apr_rcvd.info_len = apr_msg->info_length;
 	work->cm_event.private_data = &apr_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_APR_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_APR_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
 	if (cm_id_priv->id.state != IB_CM_ESTABLISHED ||
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 6c94d083c996..1ca2378bbf0d 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -751,9 +751,9 @@ struct cm_apr_msg {
 	u8 info_length;
 	u8 ap_status;
 	__be16 rsvd;
-	u8 info[IB_CM_APR_INFO_LENGTH];
+	u8 info[CM_APR_ADDITIONAL_INFORMATION_SIZE];
 
-	u8 private_data[IB_CM_APR_PRIVATE_DATA_SIZE];
+	u8 private_data[CM_APR_PRIVATE_DATA_SIZE];
 } __packed;
 
 struct cm_sidr_req_msg {
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index 08d3217bdaf1..18a3e2ee0758 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -65,7 +65,6 @@ enum ib_cm_event_type {
 };
 
 enum ib_cm_data_size {
-	IB_CM_APR_PRIVATE_DATA_SIZE	 = 148,
 	IB_CM_APR_INFO_LENGTH		 = 72,
 	IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE = 216,
 	IB_CM_SIDR_REP_PRIVATE_DATA_SIZE = 136,
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index d998cf0cde4c..6a747067d9c0 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -171,4 +171,15 @@
 #define CM_LAP_PRIVATE_DATA CM_FIELD_MLOC(struct cm_lap_msg, 64, 1344)
 #define CM_LAP_PRIVATE_DATA_SIZE 168
 
+/* Table 116 APR Message Contents */
+#define CM_APR_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_apr_msg, 0, 32)
+#define CM_APR_REMOTE_COMM_ID CM_FIELD32_LOC(struct cm_apr_msg, 4, 32)
+#define CM_APR_ADDITIONAL_INFORMATION_LENGTH                                   \
+	CM_FIELD8_LOC(struct cm_apr_msg, 8, 8)
+#define CM_APR_AR_STATUS CM_FIELD8_LOC(struct cm_apr_msg, 9, 8)
+#define CM_APR_ADDITIONAL_INFORMATION CM_FIELD_MLOC(struct cm_apr_msg, 12, 576)
+#define CM_APR_ADDITIONAL_INFORMATION_SIZE 72
+#define CM_APR_PRIVATE_DATA CM_FIELD_MLOC(struct cm_apr_msg, 84, 1184)
+#define CM_APR_PRIVATE_DATA_SIZE 148
+
 #endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 14/48] RDMA/cm: Service ID Resolution Request (SIDR_REQ) definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (12 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 13/48] RDMA/cm: Alternate Path Response (APR) message definitions Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 15/48] RDMA/cm: Service ID Resolution Response (SIDR_REP) definitions Leon Romanovsky
                   ` (35 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add SIDR_REQ message definitions as written in IBTA release 1.3 volume 1.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 4 ++--
 drivers/infiniband/core/cm_msgs.h | 2 +-
 drivers/infiniband/core/cma.c     | 2 +-
 include/rdma/ib_cm.h              | 2 --
 include/rdma/ibta_vol1_c12.h      | 7 +++++++
 5 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 275603c56581..db17421beeff 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -3468,7 +3468,7 @@ int ib_send_cm_sidr_req(struct ib_cm_id *cm_id,
 	int ret;
 
 	if (!param->path || (param->private_data &&
-	     param->private_data_len > IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE))
+	     param->private_data_len > CM_SIDR_REQ_PRIVATE_DATA_SIZE))
 		return -EINVAL;
 
 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
@@ -3527,7 +3527,7 @@ static void cm_format_sidr_req_event(struct cm_work *work,
 	param->port = work->port->port_num;
 	param->sgid_attr = rx_cm_id->av.ah_attr.grh.sgid_attr;
 	work->cm_event.private_data = &sidr_req_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_SIDR_REQ_PRIVATE_DATA_SIZE;
 }
 
 static int cm_sidr_req_handler(struct cm_work *work)
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 1ca2378bbf0d..47f7ce1ac143 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -764,7 +764,7 @@ struct cm_sidr_req_msg {
 	__be16 rsvd;
 	__be64 service_id;
 
-	u32 private_data[IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE / sizeof(u32)];
+	u32 private_data[CM_SIDR_REQ_PRIVATE_DATA_SIZE / sizeof(u32)];
 } __packed;
 
 struct cm_sidr_rep_msg {
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index ece92889aa88..aeb528b2aa49 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -2137,7 +2137,7 @@ static int cma_ib_req_handler(struct ib_cm_id *cm_id,
 		conn_id = cma_ib_new_udp_id(&listen_id->id, ib_event, net_dev);
 		event.param.ud.private_data = ib_event->private_data + offset;
 		event.param.ud.private_data_len =
-				IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE - offset;
+			CM_SIDR_REQ_PRIVATE_DATA_SIZE - offset;
 	} else {
 		conn_id = cma_ib_new_conn_id(&listen_id->id, ib_event, net_dev);
 		cma_set_req_event_data(&event, &ib_event->param.req_rcvd,
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index 18a3e2ee0758..8f0c377ad250 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -65,8 +65,6 @@ enum ib_cm_event_type {
 };
 
 enum ib_cm_data_size {
-	IB_CM_APR_INFO_LENGTH		 = 72,
-	IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE = 216,
 	IB_CM_SIDR_REP_PRIVATE_DATA_SIZE = 136,
 	IB_CM_SIDR_REP_INFO_LENGTH	 = 72,
 };
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index 6a747067d9c0..36aa3ab25b42 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -182,4 +182,11 @@
 #define CM_APR_PRIVATE_DATA CM_FIELD_MLOC(struct cm_apr_msg, 84, 1184)
 #define CM_APR_PRIVATE_DATA_SIZE 148
 
+/* Table 119 SIDR_REQ Message Contents */
+#define CM_SIDR_REQ_REQUESTID CM_FIELD32_LOC(struct cm_sidr_req_msg, 0, 32)
+#define CM_SIDR_REQ_PARTITION_KEY CM_FIELD16_LOC(struct cm_sidr_req_msg, 4, 16)
+#define CM_SIDR_REQ_SERVICEID CM_FIELD64_LOC(struct cm_sidr_req_msg, 8, 64)
+#define CM_SIDR_REQ_PRIVATE_DATA CM_FIELD_MLOC(struct cm_sidr_req_msg, 16, 1728)
+#define CM_SIDR_REQ_PRIVATE_DATA_SIZE 216
+
 #endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 15/48] RDMA/cm: Service ID Resolution Response (SIDR_REP) definitions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (13 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 14/48] RDMA/cm: Service ID Resolution Request (SIDR_REQ) definitions Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 16/48] RDMA/cm: Convert QPN and EECN to be u32 variables Leon Romanovsky
                   ` (34 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Add SIDR_REP message definitions as written in IBTA release 1.3 volume 1.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  7 ++++---
 drivers/infiniband/core/cm_msgs.h |  4 ++--
 drivers/infiniband/core/cma.c     |  2 +-
 include/rdma/ib_cm.h              |  5 -----
 include/rdma/ibta_vol1_c12.h      | 14 ++++++++++++++
 5 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index db17421beeff..fd79605c9e8b 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -3621,9 +3621,10 @@ int ib_send_cm_sidr_rep(struct ib_cm_id *cm_id,
 	unsigned long flags;
 	int ret;
 
-	if ((param->info && param->info_length > IB_CM_SIDR_REP_INFO_LENGTH) ||
+	if ((param->info &&
+	     param->info_length > CM_SIDR_REP_ADDITIONAL_INFORMATION_SIZE) ||
 	    (param->private_data &&
-	     param->private_data_len > IB_CM_SIDR_REP_PRIVATE_DATA_SIZE))
+	     param->private_data_len > CM_SIDR_REP_PRIVATE_DATA_SIZE))
 		return -EINVAL;
 
 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
@@ -3677,7 +3678,7 @@ static void cm_format_sidr_rep_event(struct cm_work *work,
 	param->info_len = sidr_rep_msg->info_length;
 	param->sgid_attr = cm_id_priv->av.ah_attr.grh.sgid_attr;
 	work->cm_event.private_data = &sidr_rep_msg->private_data;
-	work->cm_event.private_data_len = IB_CM_SIDR_REP_PRIVATE_DATA_SIZE;
+	work->cm_event.private_data_len = CM_SIDR_REP_PRIVATE_DATA_SIZE;
 }
 
 static int cm_sidr_rep_handler(struct cm_work *work)
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 47f7ce1ac143..ed887be775e3 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -778,9 +778,9 @@ struct cm_sidr_rep_msg {
 	__be32 offset8;
 	__be64 service_id;
 	__be32 qkey;
-	u8 info[IB_CM_SIDR_REP_INFO_LENGTH];
+	u8 info[CM_SIDR_REP_ADDITIONAL_INFORMATION_SIZE];
 
-	u8 private_data[IB_CM_SIDR_REP_PRIVATE_DATA_SIZE];
+	u8 private_data[CM_SIDR_REP_PRIVATE_DATA_SIZE];
 } __packed;
 
 static inline __be32 cm_sidr_rep_get_qpn(struct cm_sidr_rep_msg *sidr_rep_msg)
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index aeb528b2aa49..6c8beabc3363 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -3706,7 +3706,7 @@ static int cma_sidr_rep_handler(struct ib_cm_id *cm_id,
 		break;
 	case IB_CM_SIDR_REP_RECEIVED:
 		event.param.ud.private_data = ib_event->private_data;
-		event.param.ud.private_data_len = IB_CM_SIDR_REP_PRIVATE_DATA_SIZE;
+		event.param.ud.private_data_len = CM_SIDR_REP_PRIVATE_DATA_SIZE;
 		if (rep->status != IB_SIDR_SUCCESS) {
 			event.event = RDMA_CM_EVENT_UNREACHABLE;
 			event.status = ib_event->param.sidr_rep_rcvd.status;
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index 8f0c377ad250..6237c369dbd6 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -64,11 +64,6 @@ enum ib_cm_event_type {
 	IB_CM_SIDR_REP_RECEIVED
 };
 
-enum ib_cm_data_size {
-	IB_CM_SIDR_REP_PRIVATE_DATA_SIZE = 136,
-	IB_CM_SIDR_REP_INFO_LENGTH	 = 72,
-};
-
 struct ib_cm_id;
 
 struct ib_cm_req_event_param {
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index 36aa3ab25b42..f937865fe6b5 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -189,4 +189,18 @@
 #define CM_SIDR_REQ_PRIVATE_DATA CM_FIELD_MLOC(struct cm_sidr_req_msg, 16, 1728)
 #define CM_SIDR_REQ_PRIVATE_DATA_SIZE 216
 
+/* Table 120 SIDR_REP Message Contents */
+#define CM_SIDR_REP_REQUESTID CM_FIELD32_LOC(struct cm_sidr_rep_msg, 0, 32)
+#define CM_SIDR_REP_STATUS CM_FIELD8_LOC(struct cm_sidr_rep_msg, 4, 8)
+#define CM_SIDR_REP_ADDITIONAL_INFORMATION_LENGTH                              \
+	CM_FIELD8_LOC(struct cm_sidr_rep_msg, 5, 8)
+#define CM_SIDR_REP_QPN CM_FIELD32_LOC(struct cm_sidr_rep_msg, 8, 24)
+#define CM_SIDR_REP_SERVICEID CM_FIELD64_LOC(struct cm_sidr_rep_msg, 12, 64)
+#define CM_SIDR_REP_Q_KEY CM_FIELD32_LOC(struct cm_sidr_rep_msg, 20, 32)
+#define CM_SIDR_REP_ADDITIONAL_INFORMATION                                     \
+	CM_FIELD_MLOC(struct cm_sidr_rep_msg, 24, 576)
+#define CM_SIDR_REP_ADDITIONAL_INFORMATION_SIZE 72
+#define CM_SIDR_REP_PRIVATE_DATA CM_FIELD_MLOC(struct cm_sidr_rep_msg, 96, 1088)
+#define CM_SIDR_REP_PRIVATE_DATA_SIZE 136
+
 #endif /* _IBTA_VOL1_C12_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 16/48] RDMA/cm: Convert QPN and EECN to be u32 variables
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (14 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 15/48] RDMA/cm: Service ID Resolution Response (SIDR_REP) definitions Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:37 ` [PATCH rdma-rc v2 17/48] RDMA/cm: Convert REQ responded resources to the new scheme Leon Romanovsky
                   ` (33 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Remove the unnecessary ambiguity of mixing be32 and u32 declarations of
QPN and EECN by converting them from be32 to u32 with the help of the
newly introduced IBA_GET/IBA_SET macros.
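
A before/after sketch of the effect (illustration only, not part of the
patch); IBA_GET() returns a host-order value, so the extra byte swaps at
the use sites disappear:

	/* before: QPN kept as __be32, swapped at every use */
	cm_id_priv->remote_qpn = cm_req_get_local_qpn(req_msg);
	qp_attr->dest_qp_num = be32_to_cpu(cm_id_priv->remote_qpn);

	/* after: plain u32 end to end */
	cm_id_priv->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
	qp_attr->dest_qp_num = cm_id_priv->remote_qpn;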

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 55 ++++++++++++++++----------
 drivers/infiniband/core/cm_msgs.h | 64 +------------------------------
 2 files changed, 35 insertions(+), 84 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index fd79605c9e8b..247238469af1 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -225,7 +225,7 @@ struct cm_timewait_info {
 	struct rb_node remote_qp_node;
 	struct rb_node remote_id_node;
 	__be64 remote_ca_guid;
-	__be32 remote_qpn;
+	u32 remote_qpn;
 	u8 inserted_remote_qp;
 	u8 inserted_remote_id;
 };
@@ -250,8 +250,8 @@ struct cm_id_private {
 
 	void *private_data;
 	__be64 tid;
-	__be32 local_qpn;
-	__be32 remote_qpn;
+	u32 local_qpn;
+	u32 remote_qpn;
 	enum ib_qp_type qp_type;
 	__be32 sq_psn;
 	__be32 rq_psn;
@@ -764,15 +764,15 @@ static struct cm_timewait_info * cm_insert_remote_qpn(struct cm_timewait_info
 	struct rb_node *parent = NULL;
 	struct cm_timewait_info *cur_timewait_info;
 	__be64 remote_ca_guid = timewait_info->remote_ca_guid;
-	__be32 remote_qpn = timewait_info->remote_qpn;
+	u32 remote_qpn = timewait_info->remote_qpn;
 
 	while (*link) {
 		parent = *link;
 		cur_timewait_info = rb_entry(parent, struct cm_timewait_info,
 					     remote_qp_node);
-		if (be32_lt(remote_qpn, cur_timewait_info->remote_qpn))
+		if (remote_qpn < cur_timewait_info->remote_qpn)
 			link = &(*link)->rb_left;
-		else if (be32_gt(remote_qpn, cur_timewait_info->remote_qpn))
+		else if (remote_qpn > cur_timewait_info->remote_qpn)
 			link = &(*link)->rb_right;
 		else if (be64_lt(remote_ca_guid, cur_timewait_info->remote_ca_guid))
 			link = &(*link)->rb_left;
@@ -1265,7 +1265,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	req_msg->local_comm_id = cm_id_priv->id.local_id;
 	req_msg->service_id = param->service_id;
 	req_msg->local_ca_guid = cm_id_priv->id.device->node_guid;
-	cm_req_set_local_qpn(req_msg, cpu_to_be32(param->qp_num));
+	IBA_SET(CM_REQ_LOCAL_QPN, req_msg, param->qp_num);
 	cm_req_set_init_depth(req_msg, param->initiator_depth);
 	cm_req_set_remote_resp_timeout(req_msg,
 				       param->remote_cm_response_timeout);
@@ -1443,7 +1443,7 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
 	cm_id_priv->msg->timeout_ms = cm_id_priv->timeout_ms;
 	cm_id_priv->msg->context[1] = (void *) (unsigned long) IB_CM_REQ_SENT;
 
-	cm_id_priv->local_qpn = cm_req_get_local_qpn(req_msg);
+	cm_id_priv->local_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
 	cm_id_priv->rq_psn = cm_req_get_starting_psn(req_msg);
 
 	spin_lock_irqsave(&cm_id_priv->lock, flags);
@@ -1666,7 +1666,7 @@ static void cm_format_req_event(struct cm_work *work,
 	}
 	param->remote_ca_guid = req_msg->local_ca_guid;
 	param->remote_qkey = be32_to_cpu(req_msg->local_qkey);
-	param->remote_qpn = be32_to_cpu(cm_req_get_local_qpn(req_msg));
+	param->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
 	param->qp_type = cm_req_get_qp_type(req_msg);
 	param->starting_psn = be32_to_cpu(cm_req_get_starting_psn(req_msg));
 	param->responder_resources = cm_req_get_init_depth(req_msg);
@@ -1929,7 +1929,7 @@ static int cm_req_handler(struct cm_work *work)
 	}
 	cm_id_priv->timewait_info->work.remote_id = req_msg->local_comm_id;
 	cm_id_priv->timewait_info->remote_ca_guid = req_msg->local_ca_guid;
-	cm_id_priv->timewait_info->remote_qpn = cm_req_get_local_qpn(req_msg);
+	cm_id_priv->timewait_info->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
 
 	listen_cm_id_priv = cm_match_req(work, cm_id_priv);
 	if (!listen_cm_id_priv) {
@@ -2003,7 +2003,7 @@ static int cm_req_handler(struct cm_work *work)
 	cm_id_priv->timeout_ms = cm_convert_to_ms(
 					cm_req_get_local_resp_timeout(req_msg));
 	cm_id_priv->max_cm_retries = cm_req_get_max_cm_retries(req_msg);
-	cm_id_priv->remote_qpn = cm_req_get_local_qpn(req_msg);
+	cm_id_priv->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
 	cm_id_priv->initiator_depth = cm_req_get_resp_res(req_msg);
 	cm_id_priv->responder_resources = cm_req_get_init_depth(req_msg);
 	cm_id_priv->path_mtu = cm_req_get_path_mtu(req_msg);
@@ -2047,10 +2047,10 @@ static void cm_format_rep(struct cm_rep_msg *rep_msg,
 		rep_msg->initiator_depth = param->initiator_depth;
 		cm_rep_set_flow_ctrl(rep_msg, param->flow_control);
 		cm_rep_set_srq(rep_msg, param->srq);
-		cm_rep_set_local_qpn(rep_msg, cpu_to_be32(param->qp_num));
+		IBA_SET(CM_REP_LOCAL_QPN, rep_msg, param->qp_num);
 	} else {
 		cm_rep_set_srq(rep_msg, 1);
-		cm_rep_set_local_eecn(rep_msg, cpu_to_be32(param->qp_num));
+		IBA_SET(CM_REP_LOCAL_EE_CONTEXT_NUMBER, rep_msg, param->qp_num);
 	}
 
 	if (param->private_data && param->private_data_len)
@@ -2105,7 +2105,7 @@ int ib_send_cm_rep(struct ib_cm_id *cm_id,
 	WARN_ONCE(param->qp_num & 0xFF000000,
 		  "IBTA declares QPN to be 24 bits, but it is 0x%X\n",
 		  param->qp_num);
-	cm_id_priv->local_qpn = cpu_to_be32(param->qp_num);
+	cm_id_priv->local_qpn = param->qp_num;
 
 out:	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
 	return ret;
@@ -2187,7 +2187,11 @@ static void cm_format_rep_event(struct cm_work *work, enum ib_qp_type qp_type)
 	param = &work->cm_event.param.rep_rcvd;
 	param->remote_ca_guid = rep_msg->local_ca_guid;
 	param->remote_qkey = be32_to_cpu(rep_msg->local_qkey);
-	param->remote_qpn = be32_to_cpu(cm_rep_get_qpn(rep_msg, qp_type));
+	if (qp_type == IB_QPT_XRC_INI)
+		param->remote_qpn =
+			IBA_GET(CM_REP_LOCAL_EE_CONTEXT_NUMBER, rep_msg);
+	else
+		param->remote_qpn = IBA_GET(CM_REP_LOCAL_QPN, rep_msg);
 	param->starting_psn = be32_to_cpu(cm_rep_get_starting_psn(rep_msg));
 	param->responder_resources = rep_msg->initiator_depth;
 	param->initiator_depth = rep_msg->resp_resources;
@@ -2280,7 +2284,12 @@ static int cm_rep_handler(struct cm_work *work)
 
 	cm_id_priv->timewait_info->work.remote_id = rep_msg->local_comm_id;
 	cm_id_priv->timewait_info->remote_ca_guid = rep_msg->local_ca_guid;
-	cm_id_priv->timewait_info->remote_qpn = cm_rep_get_qpn(rep_msg, cm_id_priv->qp_type);
+	if (cm_id_priv->qp_type == IB_QPT_XRC_INI)
+		cm_id_priv->timewait_info->remote_qpn =
+			IBA_GET(CM_REP_LOCAL_EE_CONTEXT_NUMBER, rep_msg);
+	else
+		cm_id_priv->timewait_info->remote_qpn =
+			IBA_GET(CM_REP_LOCAL_QPN, rep_msg);
 
 	spin_lock(&cm.lock);
 	/* Check for duplicate REP. */
@@ -2323,7 +2332,11 @@ static int cm_rep_handler(struct cm_work *work)
 
 	cm_id_priv->id.state = IB_CM_REP_RCVD;
 	cm_id_priv->id.remote_id = rep_msg->local_comm_id;
-	cm_id_priv->remote_qpn = cm_rep_get_qpn(rep_msg, cm_id_priv->qp_type);
+	if (cm_id_priv->qp_type == IB_QPT_XRC_INI)
+		cm_id_priv->remote_qpn =
+			IBA_GET(CM_REP_LOCAL_EE_CONTEXT_NUMBER, rep_msg);
+	else
+		cm_id_priv->remote_qpn = IBA_GET(CM_REP_LOCAL_QPN, rep_msg);
 	cm_id_priv->initiator_depth = rep_msg->resp_resources;
 	cm_id_priv->responder_resources = rep_msg->initiator_depth;
 	cm_id_priv->sq_psn = cm_rep_get_starting_psn(rep_msg);
@@ -2437,7 +2450,7 @@ static void cm_format_dreq(struct cm_dreq_msg *dreq_msg,
 			  cm_form_tid(cm_id_priv));
 	dreq_msg->local_comm_id = cm_id_priv->id.local_id;
 	dreq_msg->remote_comm_id = cm_id_priv->id.remote_id;
-	cm_dreq_set_remote_qpn(dreq_msg, cm_id_priv->remote_qpn);
+	IBA_SET(CM_DREQ_REMOTE_QPN_EECN, dreq_msg, cm_id_priv->remote_qpn);
 
 	if (private_data && private_data_len)
 		memcpy(dreq_msg->private_data, private_data, private_data_len);
@@ -2606,7 +2619,7 @@ static int cm_dreq_handler(struct cm_work *work)
 	work->cm_event.private_data_len = CM_DREQ_PRIVATE_DATA_SIZE;
 
 	spin_lock_irq(&cm_id_priv->lock);
-	if (cm_id_priv->local_qpn != cm_dreq_get_remote_qpn(dreq_msg))
+	if (cm_id_priv->local_qpn != IBA_GET(CM_DREQ_REMOTE_QPN_EECN, dreq_msg))
 		goto unlock;
 
 	switch (cm_id_priv->id.state) {
@@ -3071,7 +3084,7 @@ static void cm_format_lap(struct cm_lap_msg *lap_msg,
 			  cm_form_tid(cm_id_priv));
 	lap_msg->local_comm_id = cm_id_priv->id.local_id;
 	lap_msg->remote_comm_id = cm_id_priv->id.remote_id;
-	cm_lap_set_remote_qpn(lap_msg, cm_id_priv->remote_qpn);
+	IBA_SET(CM_LAP_REMOTE_QPN_EECN, lap_msg, cm_id_priv->remote_qpn);
 	/* todo: need remote CM response timeout */
 	cm_lap_set_remote_resp_timeout(lap_msg, 0x1F);
 	lap_msg->alt_local_lid =
@@ -4114,7 +4127,7 @@ static int cm_init_qp_rtr_attr(struct cm_id_private *cm_id_priv,
 				IB_QP_DEST_QPN | IB_QP_RQ_PSN;
 		qp_attr->ah_attr = cm_id_priv->av.ah_attr;
 		qp_attr->path_mtu = cm_id_priv->path_mtu;
-		qp_attr->dest_qp_num = be32_to_cpu(cm_id_priv->remote_qpn);
+		qp_attr->dest_qp_num = cm_id_priv->remote_qpn;
 		qp_attr->rq_psn = be32_to_cpu(cm_id_priv->rq_psn);
 		if (cm_id_priv->qp_type == IB_QPT_RC ||
 		    cm_id_priv->qp_type == IB_QPT_XRC_TGT) {
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index ed887be775e3..650d6fb312c8 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,18 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline __be32 cm_req_get_local_qpn(struct cm_req_msg *req_msg)
-{
-	return cpu_to_be32(be32_to_cpu(req_msg->offset32) >> 8);
-}
-
-static inline void cm_req_set_local_qpn(struct cm_req_msg *req_msg, __be32 qpn)
-{
-	req_msg->offset32 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
-					 (be32_to_cpu(req_msg->offset32) &
-					  0x000000FF));
-}
-
 static inline u8 cm_req_get_resp_res(struct cm_req_msg *req_msg)
 {
 	return (u8) be32_to_cpu(req_msg->offset32);
@@ -462,6 +450,7 @@ struct cm_rep_msg {
 	__be32 local_qkey;
 	/* local QPN:24, rsvd:8 */
 	__be32 offset12;
+
 	/* local EECN:24, rsvd:8 */
 	__be32 offset16;
 	/* starting PSN:24 rsvd:8 */
@@ -478,34 +467,6 @@ struct cm_rep_msg {
 
 } __packed;
 
-static inline __be32 cm_rep_get_local_qpn(struct cm_rep_msg *rep_msg)
-{
-	return cpu_to_be32(be32_to_cpu(rep_msg->offset12) >> 8);
-}
-
-static inline void cm_rep_set_local_qpn(struct cm_rep_msg *rep_msg, __be32 qpn)
-{
-	rep_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
-			    (be32_to_cpu(rep_msg->offset12) & 0x000000FF));
-}
-
-static inline __be32 cm_rep_get_local_eecn(struct cm_rep_msg *rep_msg)
-{
-	return cpu_to_be32(be32_to_cpu(rep_msg->offset16) >> 8);
-}
-
-static inline void cm_rep_set_local_eecn(struct cm_rep_msg *rep_msg, __be32 eecn)
-{
-	rep_msg->offset16 = cpu_to_be32((be32_to_cpu(eecn) << 8) |
-			    (be32_to_cpu(rep_msg->offset16) & 0x000000FF));
-}
-
-static inline __be32 cm_rep_get_qpn(struct cm_rep_msg *rep_msg, enum ib_qp_type qp_type)
-{
-	return (qp_type == IB_QPT_XRC_INI) ?
-		cm_rep_get_local_eecn(rep_msg) : cm_rep_get_local_qpn(rep_msg);
-}
-
 static inline __be32 cm_rep_get_starting_psn(struct cm_rep_msg *rep_msg)
 {
 	return cpu_to_be32(be32_to_cpu(rep_msg->offset20) >> 8);
@@ -598,17 +559,6 @@ struct cm_dreq_msg {
 
 } __packed;
 
-static inline __be32 cm_dreq_get_remote_qpn(struct cm_dreq_msg *dreq_msg)
-{
-	return cpu_to_be32(be32_to_cpu(dreq_msg->offset8) >> 8);
-}
-
-static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, __be32 qpn)
-{
-	dreq_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
-			    (be32_to_cpu(dreq_msg->offset8) & 0x000000FF));
-}
-
 struct cm_drep_msg {
 	struct ib_mad_hdr hdr;
 
@@ -647,18 +597,6 @@ struct cm_lap_msg {
 	u8 private_data[CM_LAP_PRIVATE_DATA_SIZE];
 } __packed;
 
-static inline __be32 cm_lap_get_remote_qpn(struct cm_lap_msg *lap_msg)
-{
-	return cpu_to_be32(be32_to_cpu(lap_msg->offset12) >> 8);
-}
-
-static inline void cm_lap_set_remote_qpn(struct cm_lap_msg *lap_msg, __be32 qpn)
-{
-	lap_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
-					 (be32_to_cpu(lap_msg->offset12) &
-					  0x000000FF));
-}
-
 static inline u8 cm_lap_get_remote_resp_timeout(struct cm_lap_msg *lap_msg)
 {
 	return (u8) ((be32_to_cpu(lap_msg->offset12) & 0xF8) >> 3);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 17/48] RDMA/cm: Convert REQ responded resources to the new scheme
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (15 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 16/48] RDMA/cm: Convert QPN and EECN to be u32 variables Leon Romanovsky
@ 2019-12-12  9:37 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 18/48] RDMA/cm: Convert REQ initiator depth " Leon Romanovsky
                   ` (32 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:37 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Use the new scheme to get/set the REQ responded resources field.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  7 ++++---
 drivers/infiniband/core/cm_msgs.h | 12 ------------
 2 files changed, 4 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 247238469af1..26718d25564c 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1279,7 +1279,8 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	cm_req_set_max_cm_retries(req_msg, param->max_cm_retries);
 
 	if (param->qp_type != IB_QPT_XRC_INI) {
-		cm_req_set_resp_res(req_msg, param->responder_resources);
+		IBA_SET(CM_REQ_RESPONDED_RESOURCES, req_msg,
+		       param->responder_resources);
 		cm_req_set_retry_count(req_msg, param->retry_count);
 		cm_req_set_rnr_retry_count(req_msg, param->rnr_retry_count);
 		cm_req_set_srq(req_msg, param->srq);
@@ -1670,7 +1671,7 @@ static void cm_format_req_event(struct cm_work *work,
 	param->qp_type = cm_req_get_qp_type(req_msg);
 	param->starting_psn = be32_to_cpu(cm_req_get_starting_psn(req_msg));
 	param->responder_resources = cm_req_get_init_depth(req_msg);
-	param->initiator_depth = cm_req_get_resp_res(req_msg);
+	param->initiator_depth = IBA_GET(CM_REQ_RESPONDED_RESOURCES, req_msg);
 	param->local_cm_response_timeout =
 					cm_req_get_remote_resp_timeout(req_msg);
 	param->flow_control = cm_req_get_flow_ctrl(req_msg);
@@ -2004,7 +2005,7 @@ static int cm_req_handler(struct cm_work *work)
 					cm_req_get_local_resp_timeout(req_msg));
 	cm_id_priv->max_cm_retries = cm_req_get_max_cm_retries(req_msg);
 	cm_id_priv->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
-	cm_id_priv->initiator_depth = cm_req_get_resp_res(req_msg);
+	cm_id_priv->initiator_depth = IBA_GET(CM_REQ_RESPONDED_RESOURCES, req_msg);
 	cm_id_priv->responder_resources = cm_req_get_init_depth(req_msg);
 	cm_id_priv->path_mtu = cm_req_get_path_mtu(req_msg);
 	cm_id_priv->pkey = req_msg->pkey;
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 650d6fb312c8..7909100dc9eb 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,18 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_resp_res(struct cm_req_msg *req_msg)
-{
-	return (u8) be32_to_cpu(req_msg->offset32);
-}
-
-static inline void cm_req_set_resp_res(struct cm_req_msg *req_msg, u8 resp_res)
-{
-	req_msg->offset32 = cpu_to_be32(resp_res |
-					(be32_to_cpu(req_msg->offset32) &
-					 0xFFFFFF00));
-}
-
 static inline u8 cm_req_get_init_depth(struct cm_req_msg *req_msg)
 {
 	return (u8) be32_to_cpu(req_msg->offset36);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 18/48] RDMA/cm: Convert REQ initiator depth to the new scheme
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (16 preceding siblings ...)
  2019-12-12  9:37 ` [PATCH rdma-rc v2 17/48] RDMA/cm: Convert REQ responded resources to the new scheme Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 19/48] RDMA/cm: Convert REQ remote response timeout Leon Romanovsky
                   ` (31 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Adapt the REQ initiator depth field to the new get/set macros.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  6 +++---
 drivers/infiniband/core/cm_msgs.h | 13 -------------
 2 files changed, 3 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 26718d25564c..1f80db6b24e7 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1266,7 +1266,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	req_msg->service_id = param->service_id;
 	req_msg->local_ca_guid = cm_id_priv->id.device->node_guid;
 	IBA_SET(CM_REQ_LOCAL_QPN, req_msg, param->qp_num);
-	cm_req_set_init_depth(req_msg, param->initiator_depth);
+	IBA_SET(CM_REQ_INITIATOR_DEPTH, req_msg, param->initiator_depth);
 	cm_req_set_remote_resp_timeout(req_msg,
 				       param->remote_cm_response_timeout);
 	cm_req_set_qp_type(req_msg, param->qp_type);
@@ -1670,7 +1670,7 @@ static void cm_format_req_event(struct cm_work *work,
 	param->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
 	param->qp_type = cm_req_get_qp_type(req_msg);
 	param->starting_psn = be32_to_cpu(cm_req_get_starting_psn(req_msg));
-	param->responder_resources = cm_req_get_init_depth(req_msg);
+	param->responder_resources = IBA_GET(CM_REQ_INITIATOR_DEPTH, req_msg);
 	param->initiator_depth = IBA_GET(CM_REQ_RESPONDED_RESOURCES, req_msg);
 	param->local_cm_response_timeout =
 					cm_req_get_remote_resp_timeout(req_msg);
@@ -2006,7 +2006,7 @@ static int cm_req_handler(struct cm_work *work)
 	cm_id_priv->max_cm_retries = cm_req_get_max_cm_retries(req_msg);
 	cm_id_priv->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
 	cm_id_priv->initiator_depth = IBA_GET(CM_REQ_RESPONDED_RESOURCES, req_msg);
-	cm_id_priv->responder_resources = cm_req_get_init_depth(req_msg);
+	cm_id_priv->responder_resources = IBA_GET(CM_REQ_INITIATOR_DEPTH, req_msg);
 	cm_id_priv->path_mtu = cm_req_get_path_mtu(req_msg);
 	cm_id_priv->pkey = req_msg->pkey;
 	cm_id_priv->sq_psn = cm_req_get_starting_psn(req_msg);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 7909100dc9eb..4adf107f07f0 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,19 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_init_depth(struct cm_req_msg *req_msg)
-{
-	return (u8) be32_to_cpu(req_msg->offset36);
-}
-
-static inline void cm_req_set_init_depth(struct cm_req_msg *req_msg,
-					 u8 init_depth)
-{
-	req_msg->offset36 = cpu_to_be32(init_depth |
-					(be32_to_cpu(req_msg->offset36) &
-					 0xFFFFFF00));
-}
-
 static inline u8 cm_req_get_remote_resp_timeout(struct cm_req_msg *req_msg)
 {
 	return (u8) ((be32_to_cpu(req_msg->offset40) & 0xF8) >> 3);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 19/48] RDMA/cm: Convert REQ remote response timeout
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (17 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 18/48] RDMA/cm: Convert REQ initiator depth " Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 20/48] RDMA/cm: Simplify QP type to wire protocol translation Leon Romanovsky
                   ` (30 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Use the new get/set macros to access the REQ remote response timeout.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  6 +++---
 drivers/infiniband/core/cm_msgs.h | 13 -------------
 2 files changed, 3 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 1f80db6b24e7..34654aba638e 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1267,8 +1267,8 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	req_msg->local_ca_guid = cm_id_priv->id.device->node_guid;
 	IBA_SET(CM_REQ_LOCAL_QPN, req_msg, param->qp_num);
 	IBA_SET(CM_REQ_INITIATOR_DEPTH, req_msg, param->initiator_depth);
-	cm_req_set_remote_resp_timeout(req_msg,
-				       param->remote_cm_response_timeout);
+	IBA_SET(CM_REQ_REMOTE_CM_RESPONSE_TIMEOUT, req_msg,
+		param->remote_cm_response_timeout);
 	cm_req_set_qp_type(req_msg, param->qp_type);
 	cm_req_set_flow_ctrl(req_msg, param->flow_control);
 	cm_req_set_starting_psn(req_msg, cpu_to_be32(param->starting_psn));
@@ -1673,7 +1673,7 @@ static void cm_format_req_event(struct cm_work *work,
 	param->responder_resources = IBA_GET(CM_REQ_INITIATOR_DEPTH, req_msg);
 	param->initiator_depth = IBA_GET(CM_REQ_RESPONDED_RESOURCES, req_msg);
 	param->local_cm_response_timeout =
-					cm_req_get_remote_resp_timeout(req_msg);
+		IBA_GET(CM_REQ_REMOTE_CM_RESPONSE_TIMEOUT, req_msg);
 	param->flow_control = cm_req_get_flow_ctrl(req_msg);
 	param->remote_cm_response_timeout =
 					cm_req_get_local_resp_timeout(req_msg);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 4adf107f07f0..c348521040a6 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,19 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_remote_resp_timeout(struct cm_req_msg *req_msg)
-{
-	return (u8) ((be32_to_cpu(req_msg->offset40) & 0xF8) >> 3);
-}
-
-static inline void cm_req_set_remote_resp_timeout(struct cm_req_msg *req_msg,
-						  u8 resp_timeout)
-{
-	req_msg->offset40 = cpu_to_be32((resp_timeout << 3) |
-					 (be32_to_cpu(req_msg->offset40) &
-					  0xFFFFFF07));
-}
-
 static inline enum ib_qp_type cm_req_get_qp_type(struct cm_req_msg *req_msg)
 {
 	u8 transport_type = (u8) (be32_to_cpu(req_msg->offset40) & 0x06) >> 1;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 20/48] RDMA/cm: Simplify QP type to wire protocol translation
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (18 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 19/48] RDMA/cm: Convert REQ remote response timeout Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 21/48] RDMA/cm: Convert REQ flow control Leon Romanovsky
                   ` (29 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Simplify the QP type to/from wire protocol translation logic and move it
next to its implementation instead of keeping it in the header file.
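
A short usage sketch of the resulting translation (illustration only); the
encoding follows the lookup tables added below:

	/* active side: XRC initiator is encoded as transport service type 3
	 * plus extended transport type 1 */
	cm_req_set_qp_type(req_msg, IB_QPT_XRC_INI);

	/* passive side: the same wire bits decode to the XRC target type */
	qp_type = cm_req_get_qp_type(req_msg);	/* IB_QPT_XRC_TGT */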

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 25 +++++++++++++++++++++
 drivers/infiniband/core/cm_msgs.h | 37 -------------------------------
 2 files changed, 25 insertions(+), 37 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 34654aba638e..f7ed4067743f 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1247,6 +1247,20 @@ static void cm_format_mad_hdr(struct ib_mad_hdr *hdr,
 	hdr->tid	   = tid;
 }
 
+static void cm_req_set_qp_type(struct cm_req_msg *req_msg,
+			       enum ib_qp_type qp_type)
+{
+	static const u8 qp_types[IB_QPT_MAX] = {
+		[IB_QPT_UC] = 1,
+		[IB_QPT_XRC_INI] = 3,
+	};
+
+	if (qp_type == IB_QPT_XRC_INI)
+		IBA_SET(CM_REQ_EXTENDED_TRANSPORT_TYPE, req_msg, 0x1);
+
+	IBA_SET(CM_REQ_TRANSPORT_SERVICE_TYPE, req_msg, qp_types[qp_type]);
+}
+
 static void cm_format_req(struct cm_req_msg *req_msg,
 			  struct cm_id_private *cm_id_priv,
 			  struct ib_cm_req_param *param)
@@ -1645,6 +1659,17 @@ static void cm_opa_to_ib_sgid(struct cm_work *work,
 	}
 }
 
+static enum ib_qp_type cm_req_get_qp_type(struct cm_req_msg *req_msg)
+{
+	static const enum ib_qp_type qp_type[] = { IB_QPT_RC, IB_QPT_UC, 0, 0 };
+	u8 transport_type = IBA_GET(CM_REQ_TRANSPORT_SERVICE_TYPE, req_msg);
+
+	if (transport_type == 3 &&
+	    (IBA_GET(CM_REQ_EXTENDED_TRANSPORT_TYPE, req_msg) == 1))
+		return IB_QPT_XRC_TGT;
+	return qp_type[transport_type];
+}
+
 static void cm_format_req_event(struct cm_work *work,
 				struct cm_id_private *cm_id_priv,
 				struct ib_cm_id *listen_id)
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index c348521040a6..5e6dd00c6018 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,43 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline enum ib_qp_type cm_req_get_qp_type(struct cm_req_msg *req_msg)
-{
-	u8 transport_type = (u8) (be32_to_cpu(req_msg->offset40) & 0x06) >> 1;
-	switch(transport_type) {
-	case 0: return IB_QPT_RC;
-	case 1: return IB_QPT_UC;
-	case 3:
-		switch (req_msg->offset51 & 0x7) {
-		case 1: return IB_QPT_XRC_TGT;
-		default: return 0;
-		}
-	default: return 0;
-	}
-}
-
-static inline void cm_req_set_qp_type(struct cm_req_msg *req_msg,
-				      enum ib_qp_type qp_type)
-{
-	switch(qp_type) {
-	case IB_QPT_UC:
-		req_msg->offset40 = cpu_to_be32((be32_to_cpu(
-						  req_msg->offset40) &
-						   0xFFFFFFF9) | 0x2);
-		break;
-	case IB_QPT_XRC_INI:
-		req_msg->offset40 = cpu_to_be32((be32_to_cpu(
-						 req_msg->offset40) &
-						   0xFFFFFFF9) | 0x6);
-		req_msg->offset51 = (req_msg->offset51 & 0xF8) | 1;
-		break;
-	default:
-		req_msg->offset40 = cpu_to_be32(be32_to_cpu(
-						 req_msg->offset40) &
-						  0xFFFFFFF9);
-	}
-}
-
 static inline u8 cm_req_get_flow_ctrl(struct cm_req_msg *req_msg)
 {
 	return be32_to_cpu(req_msg->offset40) & 0x1;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 21/48] RDMA/cm: Convert REQ flow control
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (19 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 20/48] RDMA/cm: Simplify QP type to wire protocol translation Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 22/48] RDMA/cm: Convert starting PSN to be u32 variable Leon Romanovsky
                   ` (28 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Use IBA_GET/IBA_SET for the REQ flow control field.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  4 ++--
 drivers/infiniband/core/cm_msgs.h | 13 -------------
 2 files changed, 2 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index f7ed4067743f..c60ec7967744 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1284,7 +1284,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	IBA_SET(CM_REQ_REMOTE_CM_RESPONSE_TIMEOUT, req_msg,
 		param->remote_cm_response_timeout);
 	cm_req_set_qp_type(req_msg, param->qp_type);
-	cm_req_set_flow_ctrl(req_msg, param->flow_control);
+	IBA_SET(CM_REQ_END_TO_END_FLOW_CONTROL, req_msg, param->flow_control);
 	cm_req_set_starting_psn(req_msg, cpu_to_be32(param->starting_psn));
 	cm_req_set_local_resp_timeout(req_msg,
 				      param->local_cm_response_timeout);
@@ -1699,7 +1699,7 @@ static void cm_format_req_event(struct cm_work *work,
 	param->initiator_depth = IBA_GET(CM_REQ_RESPONDED_RESOURCES, req_msg);
 	param->local_cm_response_timeout =
 		IBA_GET(CM_REQ_REMOTE_CM_RESPONSE_TIMEOUT, req_msg);
-	param->flow_control = cm_req_get_flow_ctrl(req_msg);
+	param->flow_control = IBA_GET(CM_REQ_END_TO_END_FLOW_CONTROL, req_msg);
 	param->remote_cm_response_timeout =
 					cm_req_get_local_resp_timeout(req_msg);
 	param->retry_count = cm_req_get_retry_count(req_msg);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 5e6dd00c6018..fc05a42e125d 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,19 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_flow_ctrl(struct cm_req_msg *req_msg)
-{
-	return be32_to_cpu(req_msg->offset40) & 0x1;
-}
-
-static inline void cm_req_set_flow_ctrl(struct cm_req_msg *req_msg,
-					u8 flow_ctrl)
-{
-	req_msg->offset40 = cpu_to_be32((flow_ctrl & 0x1) |
-					 (be32_to_cpu(req_msg->offset40) &
-					  0xFFFFFFFE));
-}
-
 static inline __be32 cm_req_get_starting_psn(struct cm_req_msg *req_msg)
 {
 	return cpu_to_be32(be32_to_cpu(req_msg->offset44) >> 8);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 22/48] RDMA/cm: Convert starting PSN to be u32 variable
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (20 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 21/48] RDMA/cm: Convert REQ flow control Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 23/48] RDMA/cm: Update REQ local response timeout Leon Romanovsky
                   ` (27 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Remove the extra be32<->u32 conversions for the starting PSN by using the
newly created IBA_GET/IBA_SET macros.
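
Illustration only (not part of the patch): the accessors take and return the
PSN in host order, which is what makes the explicit cpu_to_be32() and
be32_to_cpu() calls removed here unnecessary:

	IBA_SET(CM_REQ_STARTING_PSN, req_msg, param->starting_psn);	/* host order in */
	psn = IBA_GET(CM_REQ_STARTING_PSN, req_msg);			/* host order out */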

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 24 ++++++++++++------------
 drivers/infiniband/core/cm_msgs.h | 24 ------------------------
 2 files changed, 12 insertions(+), 36 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index c60ec7967744..e9123f3b8f43 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -253,8 +253,8 @@ struct cm_id_private {
 	u32 local_qpn;
 	u32 remote_qpn;
 	enum ib_qp_type qp_type;
-	__be32 sq_psn;
-	__be32 rq_psn;
+	u32 sq_psn;
+	u32 rq_psn;
 	int timeout_ms;
 	enum ib_mtu path_mtu;
 	__be16 pkey;
@@ -1285,7 +1285,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 		param->remote_cm_response_timeout);
 	cm_req_set_qp_type(req_msg, param->qp_type);
 	IBA_SET(CM_REQ_END_TO_END_FLOW_CONTROL, req_msg, param->flow_control);
-	cm_req_set_starting_psn(req_msg, cpu_to_be32(param->starting_psn));
+	IBA_SET(CM_REQ_STARTING_PSN, req_msg, param->starting_psn);
 	cm_req_set_local_resp_timeout(req_msg,
 				      param->local_cm_response_timeout);
 	req_msg->pkey = param->primary_path->pkey;
@@ -1459,7 +1459,7 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
 	cm_id_priv->msg->context[1] = (void *) (unsigned long) IB_CM_REQ_SENT;
 
 	cm_id_priv->local_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
-	cm_id_priv->rq_psn = cm_req_get_starting_psn(req_msg);
+	cm_id_priv->rq_psn = IBA_GET(CM_REQ_STARTING_PSN, req_msg);
 
 	spin_lock_irqsave(&cm_id_priv->lock, flags);
 	ret = ib_post_send_mad(cm_id_priv->msg, NULL);
@@ -1694,7 +1694,7 @@ static void cm_format_req_event(struct cm_work *work,
 	param->remote_qkey = be32_to_cpu(req_msg->local_qkey);
 	param->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
 	param->qp_type = cm_req_get_qp_type(req_msg);
-	param->starting_psn = be32_to_cpu(cm_req_get_starting_psn(req_msg));
+	param->starting_psn = IBA_GET(CM_REQ_STARTING_PSN, req_msg);
 	param->responder_resources = IBA_GET(CM_REQ_INITIATOR_DEPTH, req_msg);
 	param->initiator_depth = IBA_GET(CM_REQ_RESPONDED_RESOURCES, req_msg);
 	param->local_cm_response_timeout =
@@ -2034,7 +2034,7 @@ static int cm_req_handler(struct cm_work *work)
 	cm_id_priv->responder_resources = IBA_GET(CM_REQ_INITIATOR_DEPTH, req_msg);
 	cm_id_priv->path_mtu = cm_req_get_path_mtu(req_msg);
 	cm_id_priv->pkey = req_msg->pkey;
-	cm_id_priv->sq_psn = cm_req_get_starting_psn(req_msg);
+	cm_id_priv->sq_psn = IBA_GET(CM_REQ_STARTING_PSN, req_msg);
 	cm_id_priv->retry_count = cm_req_get_retry_count(req_msg);
 	cm_id_priv->rnr_retry_count = cm_req_get_rnr_retry_count(req_msg);
 	cm_id_priv->qp_type = cm_req_get_qp_type(req_msg);
@@ -2061,7 +2061,7 @@ static void cm_format_rep(struct cm_rep_msg *rep_msg,
 	cm_format_mad_hdr(&rep_msg->hdr, CM_REP_ATTR_ID, cm_id_priv->tid);
 	rep_msg->local_comm_id = cm_id_priv->id.local_id;
 	rep_msg->remote_comm_id = cm_id_priv->id.remote_id;
-	cm_rep_set_starting_psn(rep_msg, cpu_to_be32(param->starting_psn));
+	IBA_SET(CM_REP_STARTING_PSN, rep_msg, param->starting_psn);
 	rep_msg->resp_resources = param->responder_resources;
 	cm_rep_set_target_ack_delay(rep_msg,
 				    cm_id_priv->av.port->cm_dev->ack_delay);
@@ -2127,7 +2127,7 @@ int ib_send_cm_rep(struct ib_cm_id *cm_id,
 	cm_id_priv->msg = msg;
 	cm_id_priv->initiator_depth = param->initiator_depth;
 	cm_id_priv->responder_resources = param->responder_resources;
-	cm_id_priv->rq_psn = cm_rep_get_starting_psn(rep_msg);
+	cm_id_priv->rq_psn = IBA_GET(CM_REP_STARTING_PSN, rep_msg);
 	WARN_ONCE(param->qp_num & 0xFF000000,
 		  "IBTA declares QPN to be 24 bits, but it is 0x%X\n",
 		  param->qp_num);
@@ -2218,7 +2218,7 @@ static void cm_format_rep_event(struct cm_work *work, enum ib_qp_type qp_type)
 			IBA_GET(CM_REP_LOCAL_EE_CONTEXT_NUMBER, rep_msg);
 	else
 		param->remote_qpn = IBA_GET(CM_REP_LOCAL_QPN, rep_msg);
-	param->starting_psn = be32_to_cpu(cm_rep_get_starting_psn(rep_msg));
+	param->starting_psn = IBA_GET(CM_REP_STARTING_PSN, rep_msg);
 	param->responder_resources = rep_msg->initiator_depth;
 	param->initiator_depth = rep_msg->resp_resources;
 	param->target_ack_delay = cm_rep_get_target_ack_delay(rep_msg);
@@ -2365,7 +2365,7 @@ static int cm_rep_handler(struct cm_work *work)
 		cm_id_priv->remote_qpn = IBA_GET(CM_REP_LOCAL_QPN, rep_msg);
 	cm_id_priv->initiator_depth = rep_msg->resp_resources;
 	cm_id_priv->responder_resources = rep_msg->initiator_depth;
-	cm_id_priv->sq_psn = cm_rep_get_starting_psn(rep_msg);
+	cm_id_priv->sq_psn = IBA_GET(CM_REP_STARTING_PSN, rep_msg);
 	cm_id_priv->rnr_retry_count = cm_rep_get_rnr_retry_count(rep_msg);
 	cm_id_priv->target_ack_delay = cm_rep_get_target_ack_delay(rep_msg);
 	cm_id_priv->av.timeout =
@@ -4154,7 +4154,7 @@ static int cm_init_qp_rtr_attr(struct cm_id_private *cm_id_priv,
 		qp_attr->ah_attr = cm_id_priv->av.ah_attr;
 		qp_attr->path_mtu = cm_id_priv->path_mtu;
 		qp_attr->dest_qp_num = cm_id_priv->remote_qpn;
-		qp_attr->rq_psn = be32_to_cpu(cm_id_priv->rq_psn);
+		qp_attr->rq_psn = cm_id_priv->rq_psn;
 		if (cm_id_priv->qp_type == IB_QPT_RC ||
 		    cm_id_priv->qp_type == IB_QPT_XRC_TGT) {
 			*qp_attr_mask |= IB_QP_MAX_DEST_RD_ATOMIC |
@@ -4203,7 +4203,7 @@ static int cm_init_qp_rts_attr(struct cm_id_private *cm_id_priv,
 	case IB_CM_ESTABLISHED:
 		if (cm_id_priv->id.lap_state == IB_CM_LAP_UNINIT) {
 			*qp_attr_mask = IB_QP_STATE | IB_QP_SQ_PSN;
-			qp_attr->sq_psn = be32_to_cpu(cm_id_priv->sq_psn);
+			qp_attr->sq_psn = cm_id_priv->sq_psn;
 			switch (cm_id_priv->qp_type) {
 			case IB_QPT_RC:
 			case IB_QPT_XRC_INI:
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index fc05a42e125d..47f66c1793a7 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,18 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline __be32 cm_req_get_starting_psn(struct cm_req_msg *req_msg)
-{
-	return cpu_to_be32(be32_to_cpu(req_msg->offset44) >> 8);
-}
-
-static inline void cm_req_set_starting_psn(struct cm_req_msg *req_msg,
-					   __be32 starting_psn)
-{
-	req_msg->offset44 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) |
-			    (be32_to_cpu(req_msg->offset44) & 0x000000FF));
-}
-
 static inline u8 cm_req_get_local_resp_timeout(struct cm_req_msg *req_msg)
 {
 	return (u8) ((be32_to_cpu(req_msg->offset44) & 0xF8) >> 3);
@@ -379,18 +367,6 @@ struct cm_rep_msg {
 
 } __packed;
 
-static inline __be32 cm_rep_get_starting_psn(struct cm_rep_msg *rep_msg)
-{
-	return cpu_to_be32(be32_to_cpu(rep_msg->offset20) >> 8);
-}
-
-static inline void cm_rep_set_starting_psn(struct cm_rep_msg *rep_msg,
-					   __be32 starting_psn)
-{
-	rep_msg->offset20 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) |
-			    (be32_to_cpu(rep_msg->offset20) & 0x000000FF));
-}
-
 static inline u8 cm_rep_get_target_ack_delay(struct cm_rep_msg *rep_msg)
 {
 	return (u8) (rep_msg->offset26 >> 3);
-- 
2.20.1



* [PATCH rdma-rc v2 23/48] RDMA/cm: Update REQ local response timeout
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (21 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 22/48] RDMA/cm: Convert starting PSN to be u32 variable Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 24/48] RDMA/cm: Convert REQ retry count to use new scheme Leon Romanovsky
                   ` (26 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Use the newly introduced IBA_GET/IBA_SET macros to access the REQ local
CM response timeout field.
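
For background, here is a rough sketch of the idea behind these accessors
(names and layout are purely illustrative, not the in-tree IBA_GET()
implementation): describe each IBTA field once by its bit offset and width,
and let one generic helper derive the big-endian load and the shift/mask,
instead of hand-writing a get/set pair per field as cm_msgs.h used to do:

  #include <linux/types.h>

  /*
   * Illustrative helper only; assumes the field fits inside the four
   * bytes loaded and that width < 32.
   */
  static inline u32 ex_iba_get(const u8 *msg, unsigned int bit_off,
                               unsigned int width)
  {
          unsigned int byte = bit_off / 8;
          /* big-endian 32-bit load starting at the field's first byte */
          u32 word = ((u32)msg[byte] << 24) | ((u32)msg[byte + 1] << 16) |
                     ((u32)msg[byte + 2] << 8) | msg[byte + 3];
          unsigned int shift = 32 - (bit_off % 8) - width;

          return (word >> shift) & ((1U << width) - 1);
  }

  /* e.g. a hypothetical 5-bit field starting at bit 3 of byte N:
   *         u32 timeout = ex_iba_get((const u8 *)req_msg, N * 8 + 3, 5);
   */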

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  8 ++++----
 drivers/infiniband/core/cm_msgs.h | 12 ------------
 2 files changed, 4 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index e9123f3b8f43..062579d43c56 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1286,8 +1286,8 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	cm_req_set_qp_type(req_msg, param->qp_type);
 	IBA_SET(CM_REQ_END_TO_END_FLOW_CONTROL, req_msg, param->flow_control);
 	IBA_SET(CM_REQ_STARTING_PSN, req_msg, param->starting_psn);
-	cm_req_set_local_resp_timeout(req_msg,
-				      param->local_cm_response_timeout);
+	IBA_SET(CM_REQ_LOCAL_CM_RESPONSE_TIMEOUT, req_msg,
+		param->local_cm_response_timeout);
 	req_msg->pkey = param->primary_path->pkey;
 	cm_req_set_path_mtu(req_msg, param->primary_path->mtu);
 	cm_req_set_max_cm_retries(req_msg, param->max_cm_retries);
@@ -1701,7 +1701,7 @@ static void cm_format_req_event(struct cm_work *work,
 		IBA_GET(CM_REQ_REMOTE_CM_RESPONSE_TIMEOUT, req_msg);
 	param->flow_control = IBA_GET(CM_REQ_END_TO_END_FLOW_CONTROL, req_msg);
 	param->remote_cm_response_timeout =
-					cm_req_get_local_resp_timeout(req_msg);
+		IBA_GET(CM_REQ_LOCAL_CM_RESPONSE_TIMEOUT, req_msg);
 	param->retry_count = cm_req_get_retry_count(req_msg);
 	param->rnr_retry_count = cm_req_get_rnr_retry_count(req_msg);
 	param->srq = cm_req_get_srq(req_msg);
@@ -2027,7 +2027,7 @@ static int cm_req_handler(struct cm_work *work)
 	}
 	cm_id_priv->tid = req_msg->hdr.tid;
 	cm_id_priv->timeout_ms = cm_convert_to_ms(
-					cm_req_get_local_resp_timeout(req_msg));
+		IBA_GET(CM_REQ_LOCAL_CM_RESPONSE_TIMEOUT, req_msg));
 	cm_id_priv->max_cm_retries = cm_req_get_max_cm_retries(req_msg);
 	cm_id_priv->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
 	cm_id_priv->initiator_depth = IBA_GET(CM_REQ_RESPONDED_RESOURCES, req_msg);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 47f66c1793a7..56832e9a0692 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,18 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_local_resp_timeout(struct cm_req_msg *req_msg)
-{
-	return (u8) ((be32_to_cpu(req_msg->offset44) & 0xF8) >> 3);
-}
-
-static inline void cm_req_set_local_resp_timeout(struct cm_req_msg *req_msg,
-						 u8 resp_timeout)
-{
-	req_msg->offset44 = cpu_to_be32((resp_timeout << 3) |
-			    (be32_to_cpu(req_msg->offset44) & 0xFFFFFF07));
-}
-
 static inline u8 cm_req_get_retry_count(struct cm_req_msg *req_msg)
 {
 	return (u8) (be32_to_cpu(req_msg->offset44) & 0x7);
-- 
2.20.1



* [PATCH rdma-rc v2 24/48] RDMA/cm: Convert REQ retry count to use new scheme
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (22 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 23/48] RDMA/cm: Update REQ local response timeout Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 25/48] RDMA/cm: Update REQ path MTU field Leon Romanovsky
                   ` (25 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert the REQ retry count field to the new IBA_GET/IBA_SET macros.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  6 +++---
 drivers/infiniband/core/cm_msgs.h | 12 ------------
 2 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 062579d43c56..3e5b06c98808 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1295,7 +1295,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	if (param->qp_type != IB_QPT_XRC_INI) {
 		IBA_SET(CM_REQ_RESPONDED_RESOURCES, req_msg,
 		       param->responder_resources);
-		cm_req_set_retry_count(req_msg, param->retry_count);
+		IBA_SET(CM_REQ_RETRY_COUNT, req_msg, param->retry_count);
 		cm_req_set_rnr_retry_count(req_msg, param->rnr_retry_count);
 		cm_req_set_srq(req_msg, param->srq);
 	}
@@ -1702,7 +1702,7 @@ static void cm_format_req_event(struct cm_work *work,
 	param->flow_control = IBA_GET(CM_REQ_END_TO_END_FLOW_CONTROL, req_msg);
 	param->remote_cm_response_timeout =
 		IBA_GET(CM_REQ_LOCAL_CM_RESPONSE_TIMEOUT, req_msg);
-	param->retry_count = cm_req_get_retry_count(req_msg);
+	param->retry_count = IBA_GET(CM_REQ_RETRY_COUNT, req_msg);
 	param->rnr_retry_count = cm_req_get_rnr_retry_count(req_msg);
 	param->srq = cm_req_get_srq(req_msg);
 	param->ppath_sgid_attr = cm_id_priv->av.ah_attr.grh.sgid_attr;
@@ -2035,7 +2035,7 @@ static int cm_req_handler(struct cm_work *work)
 	cm_id_priv->path_mtu = cm_req_get_path_mtu(req_msg);
 	cm_id_priv->pkey = req_msg->pkey;
 	cm_id_priv->sq_psn = IBA_GET(CM_REQ_STARTING_PSN, req_msg);
-	cm_id_priv->retry_count = cm_req_get_retry_count(req_msg);
+	cm_id_priv->retry_count = IBA_GET(CM_REQ_RETRY_COUNT, req_msg);
 	cm_id_priv->rnr_retry_count = cm_req_get_rnr_retry_count(req_msg);
 	cm_id_priv->qp_type = cm_req_get_qp_type(req_msg);
 
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 56832e9a0692..d3f1caf07db5 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,18 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_retry_count(struct cm_req_msg *req_msg)
-{
-	return (u8) (be32_to_cpu(req_msg->offset44) & 0x7);
-}
-
-static inline void cm_req_set_retry_count(struct cm_req_msg *req_msg,
-					  u8 retry_count)
-{
-	req_msg->offset44 = cpu_to_be32((retry_count & 0x7) |
-			    (be32_to_cpu(req_msg->offset44) & 0xFFFFFFF8));
-}
-
 static inline u8 cm_req_get_path_mtu(struct cm_req_msg *req_msg)
 {
 	return req_msg->offset50 >> 4;
-- 
2.20.1



* [PATCH rdma-rc v2 25/48] RDMA/cm: Update REQ path MTU field
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (23 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 24/48] RDMA/cm: Convert REQ retry count to use new scheme Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 26/48] RDMA/cm: Convert REQ RNR retry timeout counter Leon Romanovsky
                   ` (24 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert REQ path MTU field to use IBA_GET/IBA_SET macros.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  8 ++++----
 drivers/infiniband/core/cm_msgs.h | 10 ----------
 2 files changed, 4 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 3e5b06c98808..d7f0b929147b 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1289,7 +1289,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	IBA_SET(CM_REQ_LOCAL_CM_RESPONSE_TIMEOUT, req_msg,
 		param->local_cm_response_timeout);
 	req_msg->pkey = param->primary_path->pkey;
-	cm_req_set_path_mtu(req_msg, param->primary_path->mtu);
+	IBA_SET(CM_REQ_PATH_PACKET_PAYLOAD_MTU, req_msg, param->primary_path->mtu);
 	cm_req_set_max_cm_retries(req_msg, param->max_cm_retries);
 
 	if (param->qp_type != IB_QPT_XRC_INI) {
@@ -1576,7 +1576,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
 	primary_path->pkey = req_msg->pkey;
 	primary_path->sl = cm_req_get_primary_sl(req_msg);
 	primary_path->mtu_selector = IB_SA_EQ;
-	primary_path->mtu = cm_req_get_path_mtu(req_msg);
+	primary_path->mtu = IBA_GET(CM_REQ_PATH_PACKET_PAYLOAD_MTU, req_msg);
 	primary_path->rate_selector = IB_SA_EQ;
 	primary_path->rate = cm_req_get_primary_packet_rate(req_msg);
 	primary_path->packet_life_time_selector = IB_SA_EQ;
@@ -1597,7 +1597,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
 		alt_path->pkey = req_msg->pkey;
 		alt_path->sl = cm_req_get_alt_sl(req_msg);
 		alt_path->mtu_selector = IB_SA_EQ;
-		alt_path->mtu = cm_req_get_path_mtu(req_msg);
+		alt_path->mtu = IBA_GET(CM_REQ_PATH_PACKET_PAYLOAD_MTU, req_msg);
 		alt_path->rate_selector = IB_SA_EQ;
 		alt_path->rate = cm_req_get_alt_packet_rate(req_msg);
 		alt_path->packet_life_time_selector = IB_SA_EQ;
@@ -2032,7 +2032,7 @@ static int cm_req_handler(struct cm_work *work)
 	cm_id_priv->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
 	cm_id_priv->initiator_depth = IBA_GET(CM_REQ_RESPONDED_RESOURCES, req_msg);
 	cm_id_priv->responder_resources = IBA_GET(CM_REQ_INITIATOR_DEPTH, req_msg);
-	cm_id_priv->path_mtu = cm_req_get_path_mtu(req_msg);
+	cm_id_priv->path_mtu = IBA_GET(CM_REQ_PATH_PACKET_PAYLOAD_MTU, req_msg);
 	cm_id_priv->pkey = req_msg->pkey;
 	cm_id_priv->sq_psn = IBA_GET(CM_REQ_STARTING_PSN, req_msg);
 	cm_id_priv->retry_count = IBA_GET(CM_REQ_RETRY_COUNT, req_msg);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index d3f1caf07db5..dbc5acdd7a71 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,16 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_path_mtu(struct cm_req_msg *req_msg)
-{
-	return req_msg->offset50 >> 4;
-}
-
-static inline void cm_req_set_path_mtu(struct cm_req_msg *req_msg, u8 path_mtu)
-{
-	req_msg->offset50 = (u8) ((req_msg->offset50 & 0xF) | (path_mtu << 4));
-}
-
 static inline u8 cm_req_get_rnr_retry_count(struct cm_req_msg *req_msg)
 {
 	return req_msg->offset50 & 0x7;
-- 
2.20.1



* [PATCH rdma-rc v2 26/48] RDMA/cm: Convert REQ RNR retry timeout counter
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (24 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 25/48] RDMA/cm: Update REQ path MTU field Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 27/48] RDMA/cm: Convert REQ MAX CM retries Leon Romanovsky
                   ` (23 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert the REQ RNR retry count to the new IBA_GET/IBA_SET scheme.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  6 +++---
 drivers/infiniband/core/cm_msgs.h | 12 ------------
 2 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index d7f0b929147b..549ea886f0de 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1296,7 +1296,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 		IBA_SET(CM_REQ_RESPONDED_RESOURCES, req_msg,
 		       param->responder_resources);
 		IBA_SET(CM_REQ_RETRY_COUNT, req_msg, param->retry_count);
-		cm_req_set_rnr_retry_count(req_msg, param->rnr_retry_count);
+		IBA_SET(CM_REQ_RNR_RETRY_COUNT, req_msg, param->rnr_retry_count);
 		cm_req_set_srq(req_msg, param->srq);
 	}
 
@@ -1703,7 +1703,7 @@ static void cm_format_req_event(struct cm_work *work,
 	param->remote_cm_response_timeout =
 		IBA_GET(CM_REQ_LOCAL_CM_RESPONSE_TIMEOUT, req_msg);
 	param->retry_count = IBA_GET(CM_REQ_RETRY_COUNT, req_msg);
-	param->rnr_retry_count = cm_req_get_rnr_retry_count(req_msg);
+	param->rnr_retry_count = IBA_GET(CM_REQ_RNR_RETRY_COUNT, req_msg);
 	param->srq = cm_req_get_srq(req_msg);
 	param->ppath_sgid_attr = cm_id_priv->av.ah_attr.grh.sgid_attr;
 	work->cm_event.private_data = &req_msg->private_data;
@@ -2036,7 +2036,7 @@ static int cm_req_handler(struct cm_work *work)
 	cm_id_priv->pkey = req_msg->pkey;
 	cm_id_priv->sq_psn = IBA_GET(CM_REQ_STARTING_PSN, req_msg);
 	cm_id_priv->retry_count = IBA_GET(CM_REQ_RETRY_COUNT, req_msg);
-	cm_id_priv->rnr_retry_count = cm_req_get_rnr_retry_count(req_msg);
+	cm_id_priv->rnr_retry_count = IBA_GET(CM_REQ_RNR_RETRY_COUNT, req_msg);
 	cm_id_priv->qp_type = cm_req_get_qp_type(req_msg);
 
 	cm_format_req_event(work, cm_id_priv, &listen_cm_id_priv->id);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index dbc5acdd7a71..a754c7fa4fc0 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,18 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_rnr_retry_count(struct cm_req_msg *req_msg)
-{
-	return req_msg->offset50 & 0x7;
-}
-
-static inline void cm_req_set_rnr_retry_count(struct cm_req_msg *req_msg,
-					      u8 rnr_retry_count)
-{
-	req_msg->offset50 = (u8) ((req_msg->offset50 & 0xF8) |
-				  (rnr_retry_count & 0x7));
-}
-
 static inline u8 cm_req_get_max_cm_retries(struct cm_req_msg *req_msg)
 {
 	return req_msg->offset51 >> 4;
-- 
2.20.1



* [PATCH rdma-rc v2 27/48] RDMA/cm: Convert REQ MAX CM retries
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (25 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 26/48] RDMA/cm: Convert REQ RNR retry timeout counter Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 28/48] RDMA/cm: Convert REQ SRQ field Leon Romanovsky
                   ` (22 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert the REQ Max CM Retries field to the new IBA_GET/IBA_SET scheme.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  4 ++--
 drivers/infiniband/core/cm_msgs.h | 11 -----------
 2 files changed, 2 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 549ea886f0de..1b0cdaea035e 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1290,7 +1290,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 		param->local_cm_response_timeout);
 	req_msg->pkey = param->primary_path->pkey;
 	IBA_SET(CM_REQ_PATH_PACKET_PAYLOAD_MTU, req_msg, param->primary_path->mtu);
-	cm_req_set_max_cm_retries(req_msg, param->max_cm_retries);
+	IBA_SET(CM_REQ_MAX_CM_RETRIES, req_msg, param->max_cm_retries);
 
 	if (param->qp_type != IB_QPT_XRC_INI) {
 		IBA_SET(CM_REQ_RESPONDED_RESOURCES, req_msg,
@@ -2028,7 +2028,7 @@ static int cm_req_handler(struct cm_work *work)
 	cm_id_priv->tid = req_msg->hdr.tid;
 	cm_id_priv->timeout_ms = cm_convert_to_ms(
 		IBA_GET(CM_REQ_LOCAL_CM_RESPONSE_TIMEOUT, req_msg));
-	cm_id_priv->max_cm_retries = cm_req_get_max_cm_retries(req_msg);
+	cm_id_priv->max_cm_retries = IBA_GET(CM_REQ_MAX_CM_RETRIES, req_msg);
 	cm_id_priv->remote_qpn = IBA_GET(CM_REQ_LOCAL_QPN, req_msg);
 	cm_id_priv->initiator_depth = IBA_GET(CM_REQ_RESPONDED_RESOURCES, req_msg);
 	cm_id_priv->responder_resources = IBA_GET(CM_REQ_INITIATOR_DEPTH, req_msg);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index a754c7fa4fc0..54573280652a 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,17 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_max_cm_retries(struct cm_req_msg *req_msg)
-{
-	return req_msg->offset51 >> 4;
-}
-
-static inline void cm_req_set_max_cm_retries(struct cm_req_msg *req_msg,
-					     u8 retries)
-{
-	req_msg->offset51 = (u8) ((req_msg->offset51 & 0xF) | (retries << 4));
-}
-
 static inline u8 cm_req_get_srq(struct cm_req_msg *req_msg)
 {
 	return (req_msg->offset51 & 0x8) >> 3;
-- 
2.20.1



* [PATCH rdma-rc v2 28/48] RDMA/cm: Convert REQ SRQ field
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (26 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 27/48] RDMA/cm: Convert REQ MAX CM retries Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 29/48] RDMA/cm: Convert REQ flow label field Leon Romanovsky
                   ` (21 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert the REQ SRQ field to the new IBA_GET/IBA_SET scheme.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  4 ++--
 drivers/infiniband/core/cm_msgs.h | 11 -----------
 2 files changed, 2 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 1b0cdaea035e..673ff1da05bd 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1297,7 +1297,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 		       param->responder_resources);
 		IBA_SET(CM_REQ_RETRY_COUNT, req_msg, param->retry_count);
 		IBA_SET(CM_REQ_RNR_RETRY_COUNT, req_msg, param->rnr_retry_count);
-		cm_req_set_srq(req_msg, param->srq);
+		IBA_SET(CM_REQ_SRQ, req_msg, param->srq);
 	}
 
 	req_msg->primary_local_gid = pri_path->sgid;
@@ -1704,7 +1704,7 @@ static void cm_format_req_event(struct cm_work *work,
 		IBA_GET(CM_REQ_LOCAL_CM_RESPONSE_TIMEOUT, req_msg);
 	param->retry_count = IBA_GET(CM_REQ_RETRY_COUNT, req_msg);
 	param->rnr_retry_count = IBA_GET(CM_REQ_RNR_RETRY_COUNT, req_msg);
-	param->srq = cm_req_get_srq(req_msg);
+	param->srq = IBA_GET(CM_REQ_SRQ, req_msg);
 	param->ppath_sgid_attr = cm_id_priv->av.ah_attr.grh.sgid_attr;
 	work->cm_event.private_data = &req_msg->private_data;
 	work->cm_event.private_data_len = CM_REQ_PRIVATE_DATA_SIZE;
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 54573280652a..23a48211d15e 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,17 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_srq(struct cm_req_msg *req_msg)
-{
-	return (req_msg->offset51 & 0x8) >> 3;
-}
-
-static inline void cm_req_set_srq(struct cm_req_msg *req_msg, u8 srq)
-{
-	req_msg->offset51 = (u8) ((req_msg->offset51 & 0xF7) |
-				  ((srq & 0x1) << 3));
-}
-
 static inline __be32 cm_req_get_primary_flow_label(struct cm_req_msg *req_msg)
 {
 	return cpu_to_be32(be32_to_cpu(req_msg->primary_offset88) >> 12);
-- 
2.20.1



* [PATCH rdma-rc v2 29/48] RDMA/cm: Convert REQ flow label field
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (27 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 28/48] RDMA/cm: Convert REQ SRQ field Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 30/48] RDMA/cm: Convert REQ packet rate Leon Romanovsky
                   ` (20 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert the REQ primary and alternate flow label fields to the new
IBA_GET/IBA_SET macros.
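
One wrinkle visible in the hunks below: struct sa_path_rec keeps the flow
label in network byte order (__be32), while IBA_GET/IBA_SET take and return
CPU-order values, so the byte-order conversion now sits at the call site
instead of inside a per-field helper:

  /* from the diff below: host-order value in, big-endian path record out */
  IBA_SET(CM_REQ_PRIMARY_FLOW_LABEL, req_msg,
          be32_to_cpu(pri_path->flow_label));

  primary_path->flow_label =
          cpu_to_be32(IBA_GET(CM_REQ_PRIMARY_FLOW_LABEL, req_msg));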

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 13 ++++++++-----
 drivers/infiniband/core/cm_msgs.h | 28 ----------------------------
 2 files changed, 8 insertions(+), 33 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 673ff1da05bd..eebb98b48ec4 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1318,7 +1318,8 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 		req_msg->primary_local_lid = IB_LID_PERMISSIVE;
 		req_msg->primary_remote_lid = IB_LID_PERMISSIVE;
 	}
-	cm_req_set_primary_flow_label(req_msg, pri_path->flow_label);
+	IBA_SET(CM_REQ_PRIMARY_FLOW_LABEL, req_msg,
+	       be32_to_cpu(pri_path->flow_label));
 	cm_req_set_primary_packet_rate(req_msg, pri_path->rate);
 	req_msg->primary_traffic_class = pri_path->traffic_class;
 	req_msg->primary_hop_limit = pri_path->hop_limit;
@@ -1352,8 +1353,8 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 			req_msg->alt_local_lid = IB_LID_PERMISSIVE;
 			req_msg->alt_remote_lid = IB_LID_PERMISSIVE;
 		}
-		cm_req_set_alt_flow_label(req_msg,
-					  alt_path->flow_label);
+		IBA_SET(CM_REQ_ALTERNATE_FLOW_LABEL, req_msg,
+		       be32_to_cpu(alt_path->flow_label));
 		cm_req_set_alt_packet_rate(req_msg, alt_path->rate);
 		req_msg->alt_traffic_class = alt_path->traffic_class;
 		req_msg->alt_hop_limit = alt_path->hop_limit;
@@ -1569,7 +1570,8 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
 {
 	primary_path->dgid = req_msg->primary_local_gid;
 	primary_path->sgid = req_msg->primary_remote_gid;
-	primary_path->flow_label = cm_req_get_primary_flow_label(req_msg);
+	primary_path->flow_label =
+		cpu_to_be32(IBA_GET(CM_REQ_PRIMARY_FLOW_LABEL, req_msg));
 	primary_path->hop_limit = req_msg->primary_hop_limit;
 	primary_path->traffic_class = req_msg->primary_traffic_class;
 	primary_path->reversible = 1;
@@ -1590,7 +1592,8 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
 	if (cm_req_has_alt_path(req_msg)) {
 		alt_path->dgid = req_msg->alt_local_gid;
 		alt_path->sgid = req_msg->alt_remote_gid;
-		alt_path->flow_label = cm_req_get_alt_flow_label(req_msg);
+		alt_path->flow_label = cpu_to_be32(
+			IBA_GET(CM_REQ_ALTERNATE_FLOW_LABEL, req_msg));
 		alt_path->hop_limit = req_msg->alt_hop_limit;
 		alt_path->traffic_class = req_msg->alt_traffic_class;
 		alt_path->reversible = 1;
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 23a48211d15e..09c6a393b6a1 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,20 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline __be32 cm_req_get_primary_flow_label(struct cm_req_msg *req_msg)
-{
-	return cpu_to_be32(be32_to_cpu(req_msg->primary_offset88) >> 12);
-}
-
-static inline void cm_req_set_primary_flow_label(struct cm_req_msg *req_msg,
-						 __be32 flow_label)
-{
-	req_msg->primary_offset88 = cpu_to_be32(
-				    (be32_to_cpu(req_msg->primary_offset88) &
-				     0x00000FFF) |
-				     (be32_to_cpu(flow_label) << 12));
-}
-
 static inline u8 cm_req_get_primary_packet_rate(struct cm_req_msg *req_msg)
 {
 	return (u8) (be32_to_cpu(req_msg->primary_offset88) & 0x3F);
@@ -132,20 +118,6 @@ static inline void cm_req_set_primary_local_ack_timeout(struct cm_req_msg *req_m
 					  (local_ack_timeout << 3));
 }
 
-static inline __be32 cm_req_get_alt_flow_label(struct cm_req_msg *req_msg)
-{
-	return cpu_to_be32(be32_to_cpu(req_msg->alt_offset132) >> 12);
-}
-
-static inline void cm_req_set_alt_flow_label(struct cm_req_msg *req_msg,
-					     __be32 flow_label)
-{
-	req_msg->alt_offset132 = cpu_to_be32(
-				 (be32_to_cpu(req_msg->alt_offset132) &
-				  0x00000FFF) |
-				  (be32_to_cpu(flow_label) << 12));
-}
-
 static inline u8 cm_req_get_alt_packet_rate(struct cm_req_msg *req_msg)
 {
 	return (u8) (be32_to_cpu(req_msg->alt_offset132) & 0x3F);
-- 
2.20.1



* [PATCH rdma-rc v2 30/48] RDMA/cm: Convert REQ packet rate
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (28 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 29/48] RDMA/cm: Convert REQ flow label field Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 31/48] RDMA/cm: Convert REQ SL fields Leon Romanovsky
                   ` (19 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Change the primary and alternate packet rate fields to use the newly
introduced IBA_GET/IBA_SET macros.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  8 ++++----
 drivers/infiniband/core/cm_msgs.h | 26 --------------------------
 2 files changed, 4 insertions(+), 30 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index eebb98b48ec4..a8b98dc0d50c 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1320,7 +1320,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	}
 	IBA_SET(CM_REQ_PRIMARY_FLOW_LABEL, req_msg,
 	       be32_to_cpu(pri_path->flow_label));
-	cm_req_set_primary_packet_rate(req_msg, pri_path->rate);
+	IBA_SET(CM_REQ_PRIMARY_PACKET_RATE, req_msg, pri_path->rate);
 	req_msg->primary_traffic_class = pri_path->traffic_class;
 	req_msg->primary_hop_limit = pri_path->hop_limit;
 	cm_req_set_primary_sl(req_msg, pri_path->sl);
@@ -1355,7 +1355,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 		}
 		IBA_SET(CM_REQ_ALTERNATE_FLOW_LABEL, req_msg,
 		       be32_to_cpu(alt_path->flow_label));
-		cm_req_set_alt_packet_rate(req_msg, alt_path->rate);
+		IBA_SET(CM_REQ_ALTERNATE_PACKET_RATE, req_msg, alt_path->rate);
 		req_msg->alt_traffic_class = alt_path->traffic_class;
 		req_msg->alt_hop_limit = alt_path->hop_limit;
 		cm_req_set_alt_sl(req_msg, alt_path->sl);
@@ -1580,7 +1580,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
 	primary_path->mtu_selector = IB_SA_EQ;
 	primary_path->mtu = IBA_GET(CM_REQ_PATH_PACKET_PAYLOAD_MTU, req_msg);
 	primary_path->rate_selector = IB_SA_EQ;
-	primary_path->rate = cm_req_get_primary_packet_rate(req_msg);
+	primary_path->rate = IBA_GET(CM_REQ_PRIMARY_PACKET_RATE, req_msg);
 	primary_path->packet_life_time_selector = IB_SA_EQ;
 	primary_path->packet_life_time =
 		cm_req_get_primary_local_ack_timeout(req_msg);
@@ -1602,7 +1602,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
 		alt_path->mtu_selector = IB_SA_EQ;
 		alt_path->mtu = IBA_GET(CM_REQ_PATH_PACKET_PAYLOAD_MTU, req_msg);
 		alt_path->rate_selector = IB_SA_EQ;
-		alt_path->rate = cm_req_get_alt_packet_rate(req_msg);
+		alt_path->rate = IBA_GET(CM_REQ_ALTERNATE_PACKET_RATE, req_msg);
 		alt_path->packet_life_time_selector = IB_SA_EQ;
 		alt_path->packet_life_time =
 			cm_req_get_alt_local_ack_timeout(req_msg);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 09c6a393b6a1..620d344651ae 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,19 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_primary_packet_rate(struct cm_req_msg *req_msg)
-{
-	return (u8) (be32_to_cpu(req_msg->primary_offset88) & 0x3F);
-}
-
-static inline void cm_req_set_primary_packet_rate(struct cm_req_msg *req_msg,
-						  u8 rate)
-{
-	req_msg->primary_offset88 = cpu_to_be32(
-				    (be32_to_cpu(req_msg->primary_offset88) &
-				     0xFFFFFFC0) | (rate & 0x3F));
-}
-
 static inline u8 cm_req_get_primary_sl(struct cm_req_msg *req_msg)
 {
 	return (u8) (req_msg->primary_offset94 >> 4);
@@ -118,19 +105,6 @@ static inline void cm_req_set_primary_local_ack_timeout(struct cm_req_msg *req_m
 					  (local_ack_timeout << 3));
 }
 
-static inline u8 cm_req_get_alt_packet_rate(struct cm_req_msg *req_msg)
-{
-	return (u8) (be32_to_cpu(req_msg->alt_offset132) & 0x3F);
-}
-
-static inline void cm_req_set_alt_packet_rate(struct cm_req_msg *req_msg,
-					      u8 rate)
-{
-	req_msg->alt_offset132 = cpu_to_be32(
-				 (be32_to_cpu(req_msg->alt_offset132) &
-				  0xFFFFFFC0) | (rate & 0x3F));
-}
-
 static inline u8 cm_req_get_alt_sl(struct cm_req_msg *req_msg)
 {
 	return (u8) (req_msg->alt_offset138 >> 4);
-- 
2.20.1



* [PATCH rdma-rc v2 31/48] RDMA/cm: Convert REQ SL fields
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (29 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 30/48] RDMA/cm: Convert REQ packet rate Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 32/48] RDMA/cm: Convert REQ subnet local fields Leon Romanovsky
                   ` (18 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert REQ SL fields.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 12 ++++++------
 drivers/infiniband/core/cm_msgs.h | 22 ----------------------
 2 files changed, 6 insertions(+), 28 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index a8b98dc0d50c..b6da28d43b46 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1323,7 +1323,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	IBA_SET(CM_REQ_PRIMARY_PACKET_RATE, req_msg, pri_path->rate);
 	req_msg->primary_traffic_class = pri_path->traffic_class;
 	req_msg->primary_hop_limit = pri_path->hop_limit;
-	cm_req_set_primary_sl(req_msg, pri_path->sl);
+	IBA_SET(CM_REQ_PRIMARY_SL, req_msg, pri_path->sl);
 	cm_req_set_primary_subnet_local(req_msg, (pri_path->hop_limit <= 1));
 	cm_req_set_primary_local_ack_timeout(req_msg,
 		cm_ack_timeout(cm_id_priv->av.port->cm_dev->ack_delay,
@@ -1358,7 +1358,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 		IBA_SET(CM_REQ_ALTERNATE_PACKET_RATE, req_msg, alt_path->rate);
 		req_msg->alt_traffic_class = alt_path->traffic_class;
 		req_msg->alt_hop_limit = alt_path->hop_limit;
-		cm_req_set_alt_sl(req_msg, alt_path->sl);
+		IBA_SET(CM_REQ_ALTERNATE_SL, req_msg, alt_path->sl);
 		cm_req_set_alt_subnet_local(req_msg, (alt_path->hop_limit <= 1));
 		cm_req_set_alt_local_ack_timeout(req_msg,
 			cm_ack_timeout(cm_id_priv->av.port->cm_dev->ack_delay,
@@ -1576,7 +1576,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
 	primary_path->traffic_class = req_msg->primary_traffic_class;
 	primary_path->reversible = 1;
 	primary_path->pkey = req_msg->pkey;
-	primary_path->sl = cm_req_get_primary_sl(req_msg);
+	primary_path->sl = IBA_GET(CM_REQ_PRIMARY_SL, req_msg);
 	primary_path->mtu_selector = IB_SA_EQ;
 	primary_path->mtu = IBA_GET(CM_REQ_PATH_PACKET_PAYLOAD_MTU, req_msg);
 	primary_path->rate_selector = IB_SA_EQ;
@@ -1598,7 +1598,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
 		alt_path->traffic_class = req_msg->alt_traffic_class;
 		alt_path->reversible = 1;
 		alt_path->pkey = req_msg->pkey;
-		alt_path->sl = cm_req_get_alt_sl(req_msg);
+		alt_path->sl = IBA_GET(CM_REQ_ALTERNATE_SL, req_msg);
 		alt_path->mtu_selector = IB_SA_EQ;
 		alt_path->mtu = IBA_GET(CM_REQ_PATH_PACKET_PAYLOAD_MTU, req_msg);
 		alt_path->rate_selector = IB_SA_EQ;
@@ -1910,7 +1910,7 @@ static void cm_process_routed_req(struct cm_req_msg *req_msg, struct ib_wc *wc)
 	if (!cm_req_get_primary_subnet_local(req_msg)) {
 		if (req_msg->primary_local_lid == IB_LID_PERMISSIVE) {
 			req_msg->primary_local_lid = ib_lid_be16(wc->slid);
-			cm_req_set_primary_sl(req_msg, wc->sl);
+			IBA_SET(CM_REQ_PRIMARY_SL, req_msg, wc->sl);
 		}
 
 		if (req_msg->primary_remote_lid == IB_LID_PERMISSIVE)
@@ -1920,7 +1920,7 @@ static void cm_process_routed_req(struct cm_req_msg *req_msg, struct ib_wc *wc)
 	if (!cm_req_get_alt_subnet_local(req_msg)) {
 		if (req_msg->alt_local_lid == IB_LID_PERMISSIVE) {
 			req_msg->alt_local_lid = ib_lid_be16(wc->slid);
-			cm_req_set_alt_sl(req_msg, wc->sl);
+			IBA_SET(CM_REQ_ALTERNATE_SL, req_msg, wc->sl);
 		}
 
 		if (req_msg->alt_remote_lid == IB_LID_PERMISSIVE)
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 620d344651ae..b82eb10e22f6 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,17 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_primary_sl(struct cm_req_msg *req_msg)
-{
-	return (u8) (req_msg->primary_offset94 >> 4);
-}
-
-static inline void cm_req_set_primary_sl(struct cm_req_msg *req_msg, u8 sl)
-{
-	req_msg->primary_offset94 = (u8) ((req_msg->primary_offset94 & 0x0F) |
-					  (sl << 4));
-}
-
 static inline u8 cm_req_get_primary_subnet_local(struct cm_req_msg *req_msg)
 {
 	return (u8) ((req_msg->primary_offset94 & 0x08) >> 3);
@@ -105,17 +94,6 @@ static inline void cm_req_set_primary_local_ack_timeout(struct cm_req_msg *req_m
 					  (local_ack_timeout << 3));
 }
 
-static inline u8 cm_req_get_alt_sl(struct cm_req_msg *req_msg)
-{
-	return (u8) (req_msg->alt_offset138 >> 4);
-}
-
-static inline void cm_req_set_alt_sl(struct cm_req_msg *req_msg, u8 sl)
-{
-	req_msg->alt_offset138 = (u8) ((req_msg->alt_offset138 & 0x0F) |
-				       (sl << 4));
-}
-
 static inline u8 cm_req_get_alt_subnet_local(struct cm_req_msg *req_msg)
 {
 	return (u8) ((req_msg->alt_offset138 & 0x08) >> 3);
-- 
2.20.1



* [PATCH rdma-rc v2 32/48] RDMA/cm: Convert REQ subnet local fields
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (30 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 31/48] RDMA/cm: Convert REQ SL fields Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 33/48] RDMA/cm: Convert REQ local ack timeout Leon Romanovsky
                   ` (17 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert REQ subnet local fields.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  9 +++++----
 drivers/infiniband/core/cm_msgs.h | 24 ------------------------
 2 files changed, 5 insertions(+), 28 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index b6da28d43b46..1a5d5d401c72 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1324,7 +1324,7 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	req_msg->primary_traffic_class = pri_path->traffic_class;
 	req_msg->primary_hop_limit = pri_path->hop_limit;
 	IBA_SET(CM_REQ_PRIMARY_SL, req_msg, pri_path->sl);
-	cm_req_set_primary_subnet_local(req_msg, (pri_path->hop_limit <= 1));
+	IBA_SET(CM_REQ_PRIMARY_SUBNET_LOCAL, req_msg, (pri_path->hop_limit <= 1));
 	cm_req_set_primary_local_ack_timeout(req_msg,
 		cm_ack_timeout(cm_id_priv->av.port->cm_dev->ack_delay,
 			       pri_path->packet_life_time));
@@ -1359,7 +1359,8 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 		req_msg->alt_traffic_class = alt_path->traffic_class;
 		req_msg->alt_hop_limit = alt_path->hop_limit;
 		IBA_SET(CM_REQ_ALTERNATE_SL, req_msg, alt_path->sl);
-		cm_req_set_alt_subnet_local(req_msg, (alt_path->hop_limit <= 1));
+		IBA_SET(CM_REQ_ALTERNATE_SUBNET_LOCAL, req_msg,
+			(alt_path->hop_limit <= 1));
 		cm_req_set_alt_local_ack_timeout(req_msg,
 			cm_ack_timeout(cm_id_priv->av.port->cm_dev->ack_delay,
 				       alt_path->packet_life_time));
@@ -1907,7 +1908,7 @@ static struct cm_id_private * cm_match_req(struct cm_work *work,
  */
 static void cm_process_routed_req(struct cm_req_msg *req_msg, struct ib_wc *wc)
 {
-	if (!cm_req_get_primary_subnet_local(req_msg)) {
+	if (!IBA_GET(CM_REQ_PRIMARY_SUBNET_LOCAL, req_msg)) {
 		if (req_msg->primary_local_lid == IB_LID_PERMISSIVE) {
 			req_msg->primary_local_lid = ib_lid_be16(wc->slid);
 			IBA_SET(CM_REQ_PRIMARY_SL, req_msg, wc->sl);
@@ -1917,7 +1918,7 @@ static void cm_process_routed_req(struct cm_req_msg *req_msg, struct ib_wc *wc)
 			req_msg->primary_remote_lid = cpu_to_be16(wc->dlid_path_bits);
 	}
 
-	if (!cm_req_get_alt_subnet_local(req_msg)) {
+	if (!IBA_GET(CM_REQ_ALTERNATE_SUBNET_LOCAL, req_msg)) {
 		if (req_msg->alt_local_lid == IB_LID_PERMISSIVE) {
 			req_msg->alt_local_lid = ib_lid_be16(wc->slid);
 			IBA_SET(CM_REQ_ALTERNATE_SL, req_msg, wc->sl);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index b82eb10e22f6..3933c29b569b 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,18 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_primary_subnet_local(struct cm_req_msg *req_msg)
-{
-	return (u8) ((req_msg->primary_offset94 & 0x08) >> 3);
-}
-
-static inline void cm_req_set_primary_subnet_local(struct cm_req_msg *req_msg,
-						   u8 subnet_local)
-{
-	req_msg->primary_offset94 = (u8) ((req_msg->primary_offset94 & 0xF7) |
-					  ((subnet_local & 0x1) << 3));
-}
-
 static inline u8 cm_req_get_primary_local_ack_timeout(struct cm_req_msg *req_msg)
 {
 	return (u8) (req_msg->primary_offset95 >> 3);
@@ -94,18 +82,6 @@ static inline void cm_req_set_primary_local_ack_timeout(struct cm_req_msg *req_m
 					  (local_ack_timeout << 3));
 }
 
-static inline u8 cm_req_get_alt_subnet_local(struct cm_req_msg *req_msg)
-{
-	return (u8) ((req_msg->alt_offset138 & 0x08) >> 3);
-}
-
-static inline void cm_req_set_alt_subnet_local(struct cm_req_msg *req_msg,
-					       u8 subnet_local)
-{
-	req_msg->alt_offset138 = (u8) ((req_msg->alt_offset138 & 0xF7) |
-				       ((subnet_local & 0x1) << 3));
-}
-
 static inline u8 cm_req_get_alt_local_ack_timeout(struct cm_req_msg *req_msg)
 {
 	return (u8) (req_msg->alt_offset139 >> 3);
-- 
2.20.1



* [PATCH rdma-rc v2 33/48] RDMA/cm: Convert REQ local ack timeout
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (31 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 32/48] RDMA/cm: Convert REQ subnet local fields Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 34/48] RDMA/cm: Convert MRA MRAed field Leon Romanovsky
                   ` (16 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert REQ local ack timeout fields.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 18 +++++++++---------
 drivers/infiniband/core/cm_msgs.h | 24 ------------------------
 2 files changed, 9 insertions(+), 33 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 1a5d5d401c72..e25629a910f0 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1325,9 +1325,9 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 	req_msg->primary_hop_limit = pri_path->hop_limit;
 	IBA_SET(CM_REQ_PRIMARY_SL, req_msg, pri_path->sl);
 	IBA_SET(CM_REQ_PRIMARY_SUBNET_LOCAL, req_msg, (pri_path->hop_limit <= 1));
-	cm_req_set_primary_local_ack_timeout(req_msg,
-		cm_ack_timeout(cm_id_priv->av.port->cm_dev->ack_delay,
-			       pri_path->packet_life_time));
+	IBA_SET(CM_REQ_PRIMARY_LOCAL_ACK_TIMEOUT, req_msg,
+	       cm_ack_timeout(cm_id_priv->av.port->cm_dev->ack_delay,
+			      pri_path->packet_life_time));
 
 	if (alt_path) {
 		bool alt_ext = false;
@@ -1360,10 +1360,10 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 		req_msg->alt_hop_limit = alt_path->hop_limit;
 		IBA_SET(CM_REQ_ALTERNATE_SL, req_msg, alt_path->sl);
 		IBA_SET(CM_REQ_ALTERNATE_SUBNET_LOCAL, req_msg,
-			(alt_path->hop_limit <= 1));
-		cm_req_set_alt_local_ack_timeout(req_msg,
-			cm_ack_timeout(cm_id_priv->av.port->cm_dev->ack_delay,
-				       alt_path->packet_life_time));
+		       (alt_path->hop_limit <= 1));
+		IBA_SET(CM_REQ_ALTERNATE_LOCAL_ACK_TIMEOUT, req_msg,
+		       cm_ack_timeout(cm_id_priv->av.port->cm_dev->ack_delay,
+				      alt_path->packet_life_time));
 	}
 
 	if (param->private_data && param->private_data_len)
@@ -1584,7 +1584,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
 	primary_path->rate = IBA_GET(CM_REQ_PRIMARY_PACKET_RATE, req_msg);
 	primary_path->packet_life_time_selector = IB_SA_EQ;
 	primary_path->packet_life_time =
-		cm_req_get_primary_local_ack_timeout(req_msg);
+		IBA_GET(CM_REQ_PRIMARY_LOCAL_ACK_TIMEOUT, req_msg);
 	primary_path->packet_life_time -= (primary_path->packet_life_time > 0);
 	primary_path->service_id = req_msg->service_id;
 	if (sa_path_is_roce(primary_path))
@@ -1606,7 +1606,7 @@ static void cm_format_paths_from_req(struct cm_req_msg *req_msg,
 		alt_path->rate = IBA_GET(CM_REQ_ALTERNATE_PACKET_RATE, req_msg);
 		alt_path->packet_life_time_selector = IB_SA_EQ;
 		alt_path->packet_life_time =
-			cm_req_get_alt_local_ack_timeout(req_msg);
+			IBA_GET(CM_REQ_ALTERNATE_LOCAL_ACK_TIMEOUT, req_msg);
 		alt_path->packet_life_time -= (alt_path->packet_life_time > 0);
 		alt_path->service_id = req_msg->service_id;
 
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 3933c29b569b..6f52a8f0bee3 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -70,30 +70,6 @@ struct cm_req_msg {
 
 } __packed;
 
-static inline u8 cm_req_get_primary_local_ack_timeout(struct cm_req_msg *req_msg)
-{
-	return (u8) (req_msg->primary_offset95 >> 3);
-}
-
-static inline void cm_req_set_primary_local_ack_timeout(struct cm_req_msg *req_msg,
-							u8 local_ack_timeout)
-{
-	req_msg->primary_offset95 = (u8) ((req_msg->primary_offset95 & 0x07) |
-					  (local_ack_timeout << 3));
-}
-
-static inline u8 cm_req_get_alt_local_ack_timeout(struct cm_req_msg *req_msg)
-{
-	return (u8) (req_msg->alt_offset139 >> 3);
-}
-
-static inline void cm_req_set_alt_local_ack_timeout(struct cm_req_msg *req_msg,
-						    u8 local_ack_timeout)
-{
-	req_msg->alt_offset139 = (u8) ((req_msg->alt_offset139 & 0x07) |
-				       (local_ack_timeout << 3));
-}
-
 /* Message REJected or MRAed */
 enum cm_msg_response {
 	CM_MSG_RESPONSE_REQ = 0x0,
-- 
2.20.1



* [PATCH rdma-rc v2 34/48] RDMA/cm: Convert MRA MRAed field
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (32 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 33/48] RDMA/cm: Convert REQ local ack timeout Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 35/48] RDMA/cm: Convert MRA service timeout Leon Romanovsky
                   ` (15 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert MRA MRAed field.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 11 ++++++-----
 drivers/infiniband/core/cm_msgs.h | 10 ----------
 2 files changed, 6 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index e25629a910f0..080c4411ae16 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1745,7 +1745,7 @@ static void cm_format_mra(struct cm_mra_msg *mra_msg,
 			  const void *private_data, u8 private_data_len)
 {
 	cm_format_mad_hdr(&mra_msg->hdr, CM_MRA_ATTR_ID, cm_id_priv->tid);
-	cm_mra_set_msg_mraed(mra_msg, msg_mraed);
+	IBA_SET(CM_MRA_MESSAGE_MRAED, mra_msg, msg_mraed);
 	mra_msg->local_comm_id = cm_id_priv->id.local_id;
 	mra_msg->remote_comm_id = cm_id_priv->id.remote_id;
 	cm_mra_set_service_timeout(mra_msg, service_timeout);
@@ -3010,7 +3010,7 @@ EXPORT_SYMBOL(ib_send_cm_mra);
 
 static struct cm_id_private * cm_acquire_mraed_id(struct cm_mra_msg *mra_msg)
 {
-	switch (cm_mra_get_msg_mraed(mra_msg)) {
+	switch (IBA_GET(CM_MRA_MESSAGE_MRAED, mra_msg)) {
 	case CM_MSG_RESPONSE_REQ:
 		return cm_acquire_id(mra_msg->remote_comm_id, 0);
 	case CM_MSG_RESPONSE_REP:
@@ -3043,21 +3043,22 @@ static int cm_mra_handler(struct cm_work *work)
 	spin_lock_irq(&cm_id_priv->lock);
 	switch (cm_id_priv->id.state) {
 	case IB_CM_REQ_SENT:
-		if (cm_mra_get_msg_mraed(mra_msg) != CM_MSG_RESPONSE_REQ ||
+		if (IBA_GET(CM_MRA_MESSAGE_MRAED, mra_msg) != CM_MSG_RESPONSE_REQ ||
 		    ib_modify_mad(cm_id_priv->av.port->mad_agent,
 				  cm_id_priv->msg, timeout))
 			goto out;
 		cm_id_priv->id.state = IB_CM_MRA_REQ_RCVD;
 		break;
 	case IB_CM_REP_SENT:
-		if (cm_mra_get_msg_mraed(mra_msg) != CM_MSG_RESPONSE_REP ||
+		if (IBA_GET(CM_MRA_MESSAGE_MRAED, mra_msg) != CM_MSG_RESPONSE_REP ||
 		    ib_modify_mad(cm_id_priv->av.port->mad_agent,
 				  cm_id_priv->msg, timeout))
 			goto out;
 		cm_id_priv->id.state = IB_CM_MRA_REP_RCVD;
 		break;
 	case IB_CM_ESTABLISHED:
-		if (cm_mra_get_msg_mraed(mra_msg) != CM_MSG_RESPONSE_OTHER ||
+		if (IBA_GET(CM_MRA_MESSAGE_MRAED, mra_msg) !=
+			    CM_MSG_RESPONSE_OTHER ||
 		    cm_id_priv->id.lap_state != IB_CM_LAP_SENT ||
 		    ib_modify_mad(cm_id_priv->av.port->mad_agent,
 				  cm_id_priv->msg, timeout)) {
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 6f52a8f0bee3..e096b1f572bc 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -91,16 +91,6 @@ enum cm_msg_response {
 
 } __packed;
 
-static inline u8 cm_mra_get_msg_mraed(struct cm_mra_msg *mra_msg)
-{
-	return (u8) (mra_msg->offset8 >> 6);
-}
-
-static inline void cm_mra_set_msg_mraed(struct cm_mra_msg *mra_msg, u8 msg)
-{
-	mra_msg->offset8 = (u8) ((mra_msg->offset8 & 0x3F) | (msg << 6));
-}
-
 static inline u8 cm_mra_get_service_timeout(struct cm_mra_msg *mra_msg)
 {
 	return (u8) (mra_msg->offset9 >> 3);
-- 
2.20.1



* [PATCH rdma-rc v2 35/48] RDMA/cm: Convert MRA service timeout
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (33 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 34/48] RDMA/cm: Convert MRA MRAed field Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 36/48] RDMA/cm: Update REJ struct to use new scheme Leon Romanovsky
                   ` (14 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert MRA service timeout field.
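
For reference, the service timeout is an exponent rather than a time in
milliseconds. A minimal sketch of converting it, assuming the usual IBTA
encoding of roughly 4.096 us * 2^timeout (the helper name here is made up
and is not the kernel's cm_convert_to_ms()):

  /*
   * Illustrative only: approximate milliseconds for an IBTA timeout
   * exponent t, using 4.096 us * 2^t ~= 2^(t - 8) ms.
   */
  static inline unsigned int ex_iba_timeout_to_ms(unsigned int t)
  {
          return t > 8 ? 1U << (t - 8) : 1;
  }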

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  6 +++---
 drivers/infiniband/core/cm_msgs.h | 12 ------------
 2 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 080c4411ae16..a5fc328d6d23 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1748,7 +1748,7 @@ static void cm_format_mra(struct cm_mra_msg *mra_msg,
 	IBA_SET(CM_MRA_MESSAGE_MRAED, mra_msg, msg_mraed);
 	mra_msg->local_comm_id = cm_id_priv->id.local_id;
 	mra_msg->remote_comm_id = cm_id_priv->id.remote_id;
-	cm_mra_set_service_timeout(mra_msg, service_timeout);
+	IBA_SET(CM_MRA_SERVICE_TIMEOUT, mra_msg, service_timeout);
 
 	if (private_data && private_data_len)
 		memcpy(mra_msg->private_data, private_data, private_data_len);
@@ -3036,8 +3036,8 @@ static int cm_mra_handler(struct cm_work *work)
 	work->cm_event.private_data = &mra_msg->private_data;
 	work->cm_event.private_data_len = CM_MRA_PRIVATE_DATA_SIZE;
 	work->cm_event.param.mra_rcvd.service_timeout =
-					cm_mra_get_service_timeout(mra_msg);
-	timeout = cm_convert_to_ms(cm_mra_get_service_timeout(mra_msg)) +
+		IBA_GET(CM_MRA_SERVICE_TIMEOUT, mra_msg);
+	timeout = cm_convert_to_ms(IBA_GET(CM_MRA_SERVICE_TIMEOUT, mra_msg)) +
 		  cm_convert_to_ms(cm_id_priv->av.timeout);
 
 	spin_lock_irq(&cm_id_priv->lock);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index e096b1f572bc..601b0fd2c86c 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -91,18 +91,6 @@ enum cm_msg_response {
 
 } __packed;
 
-static inline u8 cm_mra_get_service_timeout(struct cm_mra_msg *mra_msg)
-{
-	return (u8) (mra_msg->offset9 >> 3);
-}
-
-static inline void cm_mra_set_service_timeout(struct cm_mra_msg *mra_msg,
-					      u8 service_timeout)
-{
-	mra_msg->offset9 = (u8) ((mra_msg->offset9 & 0x07) |
-				 (service_timeout << 3));
-}
-
 struct cm_rej_msg {
 	struct ib_mad_hdr hdr;
 
-- 
2.20.1



* [PATCH rdma-rc v2 36/48] RDMA/cm: Update REJ struct to use new scheme
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (34 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 35/48] RDMA/cm: Convert MRA service timeout Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 37/48] RDMA/cm: Convert REP target ack delay field Leon Romanovsky
                   ` (13 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert both the REJ message rejected and rejected info length fields to
the new IBA_GET/IBA_SET scheme.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 18 +++++++++---------
 drivers/infiniband/core/cm_msgs.h | 21 ---------------------
 2 files changed, 9 insertions(+), 30 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index a5fc328d6d23..b69611c088b8 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1501,11 +1501,11 @@ static int cm_issue_rej(struct cm_port *port,
 	cm_format_mad_hdr(&rej_msg->hdr, CM_REJ_ATTR_ID, rcv_msg->hdr.tid);
 	rej_msg->remote_comm_id = rcv_msg->local_comm_id;
 	rej_msg->local_comm_id = rcv_msg->remote_comm_id;
-	cm_rej_set_msg_rejected(rej_msg, msg_rejected);
+	IBA_SET(CM_REJ_MESSAGE_REJECTED, rej_msg, msg_rejected);
 	rej_msg->reason = cpu_to_be16(reason);
 
 	if (ari && ari_length) {
-		cm_rej_set_reject_info_len(rej_msg, ari_length);
+		IBA_SET(CM_REJ_REJECTED_INFO_LENGTH, rej_msg, ari_length);
 		memcpy(rej_msg->ari, ari, ari_length);
 	}
 
@@ -1768,26 +1768,26 @@ static void cm_format_rej(struct cm_rej_msg *rej_msg,
 	switch(cm_id_priv->id.state) {
 	case IB_CM_REQ_RCVD:
 		rej_msg->local_comm_id = 0;
-		cm_rej_set_msg_rejected(rej_msg, CM_MSG_RESPONSE_REQ);
+		IBA_SET(CM_REJ_MESSAGE_REJECTED, rej_msg, CM_MSG_RESPONSE_REQ);
 		break;
 	case IB_CM_MRA_REQ_SENT:
 		rej_msg->local_comm_id = cm_id_priv->id.local_id;
-		cm_rej_set_msg_rejected(rej_msg, CM_MSG_RESPONSE_REQ);
+		IBA_SET(CM_REJ_MESSAGE_REJECTED, rej_msg, CM_MSG_RESPONSE_REQ);
 		break;
 	case IB_CM_REP_RCVD:
 	case IB_CM_MRA_REP_SENT:
 		rej_msg->local_comm_id = cm_id_priv->id.local_id;
-		cm_rej_set_msg_rejected(rej_msg, CM_MSG_RESPONSE_REP);
+		IBA_SET(CM_REJ_MESSAGE_REJECTED, rej_msg, CM_MSG_RESPONSE_REP);
 		break;
 	default:
 		rej_msg->local_comm_id = cm_id_priv->id.local_id;
-		cm_rej_set_msg_rejected(rej_msg, CM_MSG_RESPONSE_OTHER);
+		IBA_SET(CM_REJ_MESSAGE_REJECTED, rej_msg, CM_MSG_RESPONSE_OTHER);
 		break;
 	}
 
 	rej_msg->reason = cpu_to_be16(reason);
 	if (ari && ari_length) {
-		cm_rej_set_reject_info_len(rej_msg, ari_length);
+		IBA_SET(CM_REJ_REJECTED_INFO_LENGTH, rej_msg, ari_length);
 		memcpy(rej_msg->ari, ari, ari_length);
 	}
 
@@ -2818,7 +2818,7 @@ static void cm_format_rej_event(struct cm_work *work)
 	rej_msg = (struct cm_rej_msg *)work->mad_recv_wc->recv_buf.mad;
 	param = &work->cm_event.param.rej_rcvd;
 	param->ari = rej_msg->ari;
-	param->ari_length = cm_rej_get_reject_info_len(rej_msg);
+	param->ari_length = IBA_GET(CM_REJ_REJECTED_INFO_LENGTH, rej_msg);
 	param->reason = __be16_to_cpu(rej_msg->reason);
 	work->cm_event.private_data = &rej_msg->private_data;
 	work->cm_event.private_data_len = CM_REJ_PRIVATE_DATA_SIZE;
@@ -2849,7 +2849,7 @@ static struct cm_id_private * cm_acquire_rejected_id(struct cm_rej_msg *rej_msg)
 				cm_id_priv = NULL;
 		}
 		spin_unlock_irq(&cm.lock);
-	} else if (cm_rej_get_msg_rejected(rej_msg) == CM_MSG_RESPONSE_REQ)
+	} else if (IBA_GET(CM_REJ_MESSAGE_REJECTED, rej_msg) == CM_MSG_RESPONSE_REQ)
 		cm_id_priv = cm_acquire_id(rej_msg->remote_comm_id, 0);
 	else
 		cm_id_priv = cm_acquire_id(rej_msg->remote_comm_id, remote_id);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 601b0fd2c86c..5a76b63dde12 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -107,27 +107,6 @@ struct cm_rej_msg {
 
 } __packed;
 
-static inline u8 cm_rej_get_msg_rejected(struct cm_rej_msg *rej_msg)
-{
-	return (u8) (rej_msg->offset8 >> 6);
-}
-
-static inline void cm_rej_set_msg_rejected(struct cm_rej_msg *rej_msg, u8 msg)
-{
-	rej_msg->offset8 = (u8) ((rej_msg->offset8 & 0x3F) | (msg << 6));
-}
-
-static inline u8 cm_rej_get_reject_info_len(struct cm_rej_msg *rej_msg)
-{
-	return (u8) (rej_msg->offset9 >> 1);
-}
-
-static inline void cm_rej_set_reject_info_len(struct cm_rej_msg *rej_msg,
-					      u8 len)
-{
-	rej_msg->offset9 = (u8) ((rej_msg->offset9 & 0x1) | (len << 1));
-}
-
 struct cm_rep_msg {
 	struct ib_mad_hdr hdr;
 
-- 
2.20.1



* [PATCH rdma-rc v2 37/48] RDMA/cm: Convert REP target ack delay field
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (35 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 36/48] RDMA/cm: Update REJ struct to use new scheme Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 38/48] RDMA/cm: Convert REP failover accepted field Leon Romanovsky
                   ` (12 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert REP target ack delay field.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  8 ++++----
 drivers/infiniband/core/cm_msgs.h | 12 ------------
 2 files changed, 4 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index b69611c088b8..ca2d50a3c7da 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2067,8 +2067,8 @@ static void cm_format_rep(struct cm_rep_msg *rep_msg,
 	rep_msg->remote_comm_id = cm_id_priv->id.remote_id;
 	IBA_SET(CM_REP_STARTING_PSN, rep_msg, param->starting_psn);
 	rep_msg->resp_resources = param->responder_resources;
-	cm_rep_set_target_ack_delay(rep_msg,
-				    cm_id_priv->av.port->cm_dev->ack_delay);
+	IBA_SET(CM_REP_TARGET_ACK_DELAY, rep_msg,
+	       cm_id_priv->av.port->cm_dev->ack_delay);
 	cm_rep_set_failover(rep_msg, param->failover_accepted);
 	cm_rep_set_rnr_retry_count(rep_msg, param->rnr_retry_count);
 	rep_msg->local_ca_guid = cm_id_priv->id.device->node_guid;
@@ -2225,7 +2225,7 @@ static void cm_format_rep_event(struct cm_work *work, enum ib_qp_type qp_type)
 	param->starting_psn = IBA_GET(CM_REP_STARTING_PSN, rep_msg);
 	param->responder_resources = rep_msg->initiator_depth;
 	param->initiator_depth = rep_msg->resp_resources;
-	param->target_ack_delay = cm_rep_get_target_ack_delay(rep_msg);
+	param->target_ack_delay = IBA_GET(CM_REP_TARGET_ACK_DELAY, rep_msg);
 	param->failover_accepted = cm_rep_get_failover(rep_msg);
 	param->flow_control = cm_rep_get_flow_ctrl(rep_msg);
 	param->rnr_retry_count = cm_rep_get_rnr_retry_count(rep_msg);
@@ -2371,7 +2371,7 @@ static int cm_rep_handler(struct cm_work *work)
 	cm_id_priv->responder_resources = rep_msg->initiator_depth;
 	cm_id_priv->sq_psn = IBA_GET(CM_REP_STARTING_PSN, rep_msg);
 	cm_id_priv->rnr_retry_count = cm_rep_get_rnr_retry_count(rep_msg);
-	cm_id_priv->target_ack_delay = cm_rep_get_target_ack_delay(rep_msg);
+	cm_id_priv->target_ack_delay = IBA_GET(CM_REP_TARGET_ACK_DELAY, rep_msg);
 	cm_id_priv->av.timeout =
 			cm_ack_timeout(cm_id_priv->target_ack_delay,
 				       cm_id_priv->av.timeout - 1);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 5a76b63dde12..0536f827fd8e 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -132,18 +132,6 @@ struct cm_rep_msg {
 
 } __packed;
 
-static inline u8 cm_rep_get_target_ack_delay(struct cm_rep_msg *rep_msg)
-{
-	return (u8) (rep_msg->offset26 >> 3);
-}
-
-static inline void cm_rep_set_target_ack_delay(struct cm_rep_msg *rep_msg,
-					       u8 target_ack_delay)
-{
-	rep_msg->offset26 = (u8) ((rep_msg->offset26 & 0x07) |
-				  (target_ack_delay << 3));
-}
-
 static inline u8 cm_rep_get_failover(struct cm_rep_msg *rep_msg)
 {
 	return (u8) ((rep_msg->offset26 & 0x06) >> 1);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 38/48] RDMA/cm: Convert REP failover accepted field
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (36 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 37/48] RDMA/cm: Convert REP target ack delay field Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 39/48] RDMA/cm: Convert REP flow control field Leon Romanovsky
                   ` (11 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Update REP failover accepted field to the new scheme.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  4 ++--
 drivers/infiniband/core/cm_msgs.h | 11 -----------
 2 files changed, 2 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index ca2d50a3c7da..23e2c54d51dd 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2069,7 +2069,7 @@ static void cm_format_rep(struct cm_rep_msg *rep_msg,
 	rep_msg->resp_resources = param->responder_resources;
 	IBA_SET(CM_REP_TARGET_ACK_DELAY, rep_msg,
 	       cm_id_priv->av.port->cm_dev->ack_delay);
-	cm_rep_set_failover(rep_msg, param->failover_accepted);
+	IBA_SET(CM_REP_FAILOVER_ACCEPTED, rep_msg, param->failover_accepted);
 	cm_rep_set_rnr_retry_count(rep_msg, param->rnr_retry_count);
 	rep_msg->local_ca_guid = cm_id_priv->id.device->node_guid;
 
@@ -2226,7 +2226,7 @@ static void cm_format_rep_event(struct cm_work *work, enum ib_qp_type qp_type)
 	param->responder_resources = rep_msg->initiator_depth;
 	param->initiator_depth = rep_msg->resp_resources;
 	param->target_ack_delay = IBA_GET(CM_REP_TARGET_ACK_DELAY, rep_msg);
-	param->failover_accepted = cm_rep_get_failover(rep_msg);
+	param->failover_accepted = IBA_GET(CM_REP_FAILOVER_ACCEPTED, rep_msg);
 	param->flow_control = cm_rep_get_flow_ctrl(rep_msg);
 	param->rnr_retry_count = cm_rep_get_rnr_retry_count(rep_msg);
 	param->srq = cm_rep_get_srq(rep_msg);
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 0536f827fd8e..566bc868e120 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -132,17 +132,6 @@ struct cm_rep_msg {
 
 } __packed;
 
-static inline u8 cm_rep_get_failover(struct cm_rep_msg *rep_msg)
-{
-	return (u8) ((rep_msg->offset26 & 0x06) >> 1);
-}
-
-static inline void cm_rep_set_failover(struct cm_rep_msg *rep_msg, u8 failover)
-{
-	rep_msg->offset26 = (u8) ((rep_msg->offset26 & 0xF9) |
-				  ((failover & 0x3) << 1));
-}
-
 static inline u8 cm_rep_get_flow_ctrl(struct cm_rep_msg *rep_msg)
 {
 	return (u8) (rep_msg->offset26 & 0x01);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 39/48] RDMA/cm: Convert REP flow control field
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (37 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 38/48] RDMA/cm: Convert REP failover accepted field Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 40/48] RDMA/cm: Convert REP RNR retry count field Leon Romanovsky
                   ` (10 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert REP flow control field.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  5 +++--
 drivers/infiniband/core/cm_msgs.h | 12 ------------
 2 files changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 23e2c54d51dd..4c27465df6a1 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2075,7 +2075,8 @@ static void cm_format_rep(struct cm_rep_msg *rep_msg,
 
 	if (cm_id_priv->qp_type != IB_QPT_XRC_TGT) {
 		rep_msg->initiator_depth = param->initiator_depth;
-		cm_rep_set_flow_ctrl(rep_msg, param->flow_control);
+		IBA_SET(CM_REP_END_TO_END_FLOW_CONTROL, rep_msg,
+		       param->flow_control);
 		cm_rep_set_srq(rep_msg, param->srq);
 		IBA_SET(CM_REP_LOCAL_QPN, rep_msg, param->qp_num);
 	} else {
@@ -2227,7 +2228,7 @@ static void cm_format_rep_event(struct cm_work *work, enum ib_qp_type qp_type)
 	param->initiator_depth = rep_msg->resp_resources;
 	param->target_ack_delay = IBA_GET(CM_REP_TARGET_ACK_DELAY, rep_msg);
 	param->failover_accepted = IBA_GET(CM_REP_FAILOVER_ACCEPTED, rep_msg);
-	param->flow_control = cm_rep_get_flow_ctrl(rep_msg);
+	param->flow_control = IBA_GET(CM_REP_END_TO_END_FLOW_CONTROL, rep_msg);
 	param->rnr_retry_count = cm_rep_get_rnr_retry_count(rep_msg);
 	param->srq = cm_rep_get_srq(rep_msg);
 	work->cm_event.private_data = &rep_msg->private_data;
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 566bc868e120..953f6a9f868b 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -132,18 +132,6 @@ struct cm_rep_msg {
 
 } __packed;
 
-static inline u8 cm_rep_get_flow_ctrl(struct cm_rep_msg *rep_msg)
-{
-	return (u8) (rep_msg->offset26 & 0x01);
-}
-
-static inline void cm_rep_set_flow_ctrl(struct cm_rep_msg *rep_msg,
-					    u8 flow_ctrl)
-{
-	rep_msg->offset26 = (u8) ((rep_msg->offset26 & 0xFE) |
-				  (flow_ctrl & 0x1));
-}
-
 static inline u8 cm_rep_get_rnr_retry_count(struct cm_rep_msg *rep_msg)
 {
 	return (u8) (rep_msg->offset27 >> 5);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 40/48] RDMA/cm: Convert REP RNR retry count field
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (38 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 39/48] RDMA/cm: Convert REP flow control field Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 41/48] RDMA/cm: Convert REP SRQ field Leon Romanovsky
                   ` (9 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert REP RNR retry count field.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  6 +++---
 drivers/infiniband/core/cm_msgs.h | 12 ------------
 2 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 4c27465df6a1..77199078d276 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2070,7 +2070,7 @@ static void cm_format_rep(struct cm_rep_msg *rep_msg,
 	IBA_SET(CM_REP_TARGET_ACK_DELAY, rep_msg,
 	       cm_id_priv->av.port->cm_dev->ack_delay);
 	IBA_SET(CM_REP_FAILOVER_ACCEPTED, rep_msg, param->failover_accepted);
-	cm_rep_set_rnr_retry_count(rep_msg, param->rnr_retry_count);
+	IBA_SET(CM_REP_RNR_RETRY_COUNT, rep_msg, param->rnr_retry_count);
 	rep_msg->local_ca_guid = cm_id_priv->id.device->node_guid;
 
 	if (cm_id_priv->qp_type != IB_QPT_XRC_TGT) {
@@ -2229,7 +2229,7 @@ static void cm_format_rep_event(struct cm_work *work, enum ib_qp_type qp_type)
 	param->target_ack_delay = IBA_GET(CM_REP_TARGET_ACK_DELAY, rep_msg);
 	param->failover_accepted = IBA_GET(CM_REP_FAILOVER_ACCEPTED, rep_msg);
 	param->flow_control = IBA_GET(CM_REP_END_TO_END_FLOW_CONTROL, rep_msg);
-	param->rnr_retry_count = cm_rep_get_rnr_retry_count(rep_msg);
+	param->rnr_retry_count = IBA_GET(CM_REP_RNR_RETRY_COUNT, rep_msg);
 	param->srq = cm_rep_get_srq(rep_msg);
 	work->cm_event.private_data = &rep_msg->private_data;
 	work->cm_event.private_data_len = CM_REP_PRIVATE_DATA_SIZE;
@@ -2371,7 +2371,7 @@ static int cm_rep_handler(struct cm_work *work)
 	cm_id_priv->initiator_depth = rep_msg->resp_resources;
 	cm_id_priv->responder_resources = rep_msg->initiator_depth;
 	cm_id_priv->sq_psn = IBA_GET(CM_REP_STARTING_PSN, rep_msg);
-	cm_id_priv->rnr_retry_count = cm_rep_get_rnr_retry_count(rep_msg);
+	cm_id_priv->rnr_retry_count = IBA_GET(CM_REP_RNR_RETRY_COUNT, rep_msg);
 	cm_id_priv->target_ack_delay = IBA_GET(CM_REP_TARGET_ACK_DELAY, rep_msg);
 	cm_id_priv->av.timeout =
 			cm_ack_timeout(cm_id_priv->target_ack_delay,
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 953f6a9f868b..209e19197693 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -132,18 +132,6 @@ struct cm_rep_msg {
 
 } __packed;
 
-static inline u8 cm_rep_get_rnr_retry_count(struct cm_rep_msg *rep_msg)
-{
-	return (u8) (rep_msg->offset27 >> 5);
-}
-
-static inline void cm_rep_set_rnr_retry_count(struct cm_rep_msg *rep_msg,
-					      u8 rnr_retry_count)
-{
-	rep_msg->offset27 = (u8) ((rep_msg->offset27 & 0x1F) |
-				  (rnr_retry_count << 5));
-}
-
 static inline u8 cm_rep_get_srq(struct cm_rep_msg *rep_msg)
 {
 	return (u8) ((rep_msg->offset27 >> 4) & 0x1);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 41/48] RDMA/cm: Convert REP SRQ field
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (39 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 40/48] RDMA/cm: Convert REP RNR retry count field Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 42/48] RDMA/cm: Delete unused CM LAP functions Leon Romanovsky
                   ` (8 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert REP SRQ field.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  6 +++---
 drivers/infiniband/core/cm_msgs.h | 11 -----------
 2 files changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 77199078d276..454650f3ec7d 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -2077,10 +2077,10 @@ static void cm_format_rep(struct cm_rep_msg *rep_msg,
 		rep_msg->initiator_depth = param->initiator_depth;
 		IBA_SET(CM_REP_END_TO_END_FLOW_CONTROL, rep_msg,
 		       param->flow_control);
-		cm_rep_set_srq(rep_msg, param->srq);
+		IBA_SET(CM_REP_SRQ, rep_msg, param->srq);
 		IBA_SET(CM_REP_LOCAL_QPN, rep_msg, param->qp_num);
 	} else {
-		cm_rep_set_srq(rep_msg, 1);
+		IBA_SET(CM_REP_SRQ, rep_msg, 1);
 		IBA_SET(CM_REP_LOCAL_EE_CONTEXT_NUMBER, rep_msg, param->qp_num);
 	}
 
@@ -2230,7 +2230,7 @@ static void cm_format_rep_event(struct cm_work *work, enum ib_qp_type qp_type)
 	param->failover_accepted = IBA_GET(CM_REP_FAILOVER_ACCEPTED, rep_msg);
 	param->flow_control = IBA_GET(CM_REP_END_TO_END_FLOW_CONTROL, rep_msg);
 	param->rnr_retry_count = IBA_GET(CM_REP_RNR_RETRY_COUNT, rep_msg);
-	param->srq = cm_rep_get_srq(rep_msg);
+	param->srq = IBA_GET(CM_REP_SRQ, rep_msg);
 	work->cm_event.private_data = &rep_msg->private_data;
 	work->cm_event.private_data_len = CM_REP_PRIVATE_DATA_SIZE;
 }
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 209e19197693..cdd7e96e6355 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -132,17 +132,6 @@ struct cm_rep_msg {
 
 } __packed;
 
-static inline u8 cm_rep_get_srq(struct cm_rep_msg *rep_msg)
-{
-	return (u8) ((rep_msg->offset27 >> 4) & 0x1);
-}
-
-static inline void cm_rep_set_srq(struct cm_rep_msg *rep_msg, u8 srq)
-{
-	rep_msg->offset27 = (u8) ((rep_msg->offset27 & 0xEF) |
-				  ((srq & 0x1) << 4));
-}
-
 struct cm_rtu_msg {
 	struct ib_mad_hdr hdr;
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 42/48] RDMA/cm: Delete unused CM LAP functions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (40 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 41/48] RDMA/cm: Convert REP SRQ field Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 43/48] RDMA/cm: Convert LAP flow label field Leon Romanovsky
                   ` (7 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Clean the code by deleting LAP functions, which are not called anyway.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 98 -------------------------------
 drivers/infiniband/core/cm_msgs.h | 58 ------------------
 2 files changed, 156 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 454650f3ec7d..ec2206b9dd14 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -3101,104 +3101,6 @@ static int cm_mra_handler(struct cm_work *work)
 	return -EINVAL;
 }
 
-static void cm_format_lap(struct cm_lap_msg *lap_msg,
-			  struct cm_id_private *cm_id_priv,
-			  struct sa_path_rec *alternate_path,
-			  const void *private_data,
-			  u8 private_data_len)
-{
-	bool alt_ext = false;
-
-	if (alternate_path->rec_type == SA_PATH_REC_TYPE_OPA)
-		alt_ext = opa_is_extended_lid(alternate_path->opa.dlid,
-					      alternate_path->opa.slid);
-	cm_format_mad_hdr(&lap_msg->hdr, CM_LAP_ATTR_ID,
-			  cm_form_tid(cm_id_priv));
-	lap_msg->local_comm_id = cm_id_priv->id.local_id;
-	lap_msg->remote_comm_id = cm_id_priv->id.remote_id;
-	IBA_SET(CM_LAP_REMOTE_QPN_EECN, lap_msg, cm_id_priv->remote_qpn);
-	/* todo: need remote CM response timeout */
-	cm_lap_set_remote_resp_timeout(lap_msg, 0x1F);
-	lap_msg->alt_local_lid =
-		htons(ntohl(sa_path_get_slid(alternate_path)));
-	lap_msg->alt_remote_lid =
-		htons(ntohl(sa_path_get_dlid(alternate_path)));
-	lap_msg->alt_local_gid = alternate_path->sgid;
-	lap_msg->alt_remote_gid = alternate_path->dgid;
-	if (alt_ext) {
-		lap_msg->alt_local_gid.global.interface_id
-			= OPA_MAKE_ID(be32_to_cpu(alternate_path->opa.slid));
-		lap_msg->alt_remote_gid.global.interface_id
-			= OPA_MAKE_ID(be32_to_cpu(alternate_path->opa.dlid));
-	}
-	cm_lap_set_flow_label(lap_msg, alternate_path->flow_label);
-	cm_lap_set_traffic_class(lap_msg, alternate_path->traffic_class);
-	lap_msg->alt_hop_limit = alternate_path->hop_limit;
-	cm_lap_set_packet_rate(lap_msg, alternate_path->rate);
-	cm_lap_set_sl(lap_msg, alternate_path->sl);
-	cm_lap_set_subnet_local(lap_msg, 1); /* local only... */
-	cm_lap_set_local_ack_timeout(lap_msg,
-		cm_ack_timeout(cm_id_priv->av.port->cm_dev->ack_delay,
-			       alternate_path->packet_life_time));
-
-	if (private_data && private_data_len)
-		memcpy(lap_msg->private_data, private_data, private_data_len);
-}
-
-int ib_send_cm_lap(struct ib_cm_id *cm_id,
-		   struct sa_path_rec *alternate_path,
-		   const void *private_data,
-		   u8 private_data_len)
-{
-	struct cm_id_private *cm_id_priv;
-	struct ib_mad_send_buf *msg;
-	unsigned long flags;
-	int ret;
-
-	if (private_data && private_data_len > CM_LAP_PRIVATE_DATA_SIZE)
-		return -EINVAL;
-
-	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
-	spin_lock_irqsave(&cm_id_priv->lock, flags);
-	if (cm_id->state != IB_CM_ESTABLISHED ||
-	    (cm_id->lap_state != IB_CM_LAP_UNINIT &&
-	     cm_id->lap_state != IB_CM_LAP_IDLE)) {
-		ret = -EINVAL;
-		goto out;
-	}
-
-	ret = cm_init_av_by_path(alternate_path, NULL, &cm_id_priv->alt_av,
-				 cm_id_priv);
-	if (ret)
-		goto out;
-	cm_id_priv->alt_av.timeout =
-			cm_ack_timeout(cm_id_priv->target_ack_delay,
-				       cm_id_priv->alt_av.timeout - 1);
-
-	ret = cm_alloc_msg(cm_id_priv, &msg);
-	if (ret)
-		goto out;
-
-	cm_format_lap((struct cm_lap_msg *) msg->mad, cm_id_priv,
-		      alternate_path, private_data, private_data_len);
-	msg->timeout_ms = cm_id_priv->timeout_ms;
-	msg->context[1] = (void *) (unsigned long) IB_CM_ESTABLISHED;
-
-	ret = ib_post_send_mad(msg, NULL);
-	if (ret) {
-		spin_unlock_irqrestore(&cm_id_priv->lock, flags);
-		cm_free_msg(msg);
-		return ret;
-	}
-
-	cm_id->lap_state = IB_CM_LAP_SENT;
-	cm_id_priv->msg = msg;
-
-out:	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
-	return ret;
-}
-EXPORT_SYMBOL(ib_send_cm_lap);
-
 static void cm_format_path_lid_from_lap(struct cm_lap_msg *lap_msg,
 					struct sa_path_rec *path)
 {
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index cdd7e96e6355..2172a5c53fbd 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -192,89 +192,31 @@ struct cm_lap_msg {
 	u8 private_data[CM_LAP_PRIVATE_DATA_SIZE];
 } __packed;
 
-static inline u8 cm_lap_get_remote_resp_timeout(struct cm_lap_msg *lap_msg)
-{
-	return (u8) ((be32_to_cpu(lap_msg->offset12) & 0xF8) >> 3);
-}
-
-static inline void cm_lap_set_remote_resp_timeout(struct cm_lap_msg *lap_msg,
-						  u8 resp_timeout)
-{
-	lap_msg->offset12 = cpu_to_be32((resp_timeout << 3) |
-					 (be32_to_cpu(lap_msg->offset12) &
-					  0xFFFFFF07));
-}
-
 static inline __be32 cm_lap_get_flow_label(struct cm_lap_msg *lap_msg)
 {
 	return cpu_to_be32(be32_to_cpu(lap_msg->offset56) >> 12);
 }
 
-static inline void cm_lap_set_flow_label(struct cm_lap_msg *lap_msg,
-					 __be32 flow_label)
-{
-	lap_msg->offset56 = cpu_to_be32(
-				 (be32_to_cpu(lap_msg->offset56) & 0x00000FFF) |
-				 (be32_to_cpu(flow_label) << 12));
-}
-
 static inline u8 cm_lap_get_traffic_class(struct cm_lap_msg *lap_msg)
 {
 	return (u8) be32_to_cpu(lap_msg->offset56);
 }
 
-static inline void cm_lap_set_traffic_class(struct cm_lap_msg *lap_msg,
-					    u8 traffic_class)
-{
-	lap_msg->offset56 = cpu_to_be32(traffic_class |
-					 (be32_to_cpu(lap_msg->offset56) &
-					  0xFFFFFF00));
-}
-
 static inline u8 cm_lap_get_packet_rate(struct cm_lap_msg *lap_msg)
 {
 	return lap_msg->offset61 & 0x3F;
 }
 
-static inline void cm_lap_set_packet_rate(struct cm_lap_msg *lap_msg,
-					  u8 packet_rate)
-{
-	lap_msg->offset61 = (packet_rate & 0x3F) | (lap_msg->offset61 & 0xC0);
-}
-
 static inline u8 cm_lap_get_sl(struct cm_lap_msg *lap_msg)
 {
 	return lap_msg->offset62 >> 4;
 }
 
-static inline void cm_lap_set_sl(struct cm_lap_msg *lap_msg, u8 sl)
-{
-	lap_msg->offset62 = (sl << 4) | (lap_msg->offset62 & 0x0F);
-}
-
-static inline u8 cm_lap_get_subnet_local(struct cm_lap_msg *lap_msg)
-{
-	return (lap_msg->offset62 >> 3) & 0x1;
-}
-
-static inline void cm_lap_set_subnet_local(struct cm_lap_msg *lap_msg,
-					   u8 subnet_local)
-{
-	lap_msg->offset62 = ((subnet_local & 0x1) << 3) |
-			     (lap_msg->offset61 & 0xF7);
-}
 static inline u8 cm_lap_get_local_ack_timeout(struct cm_lap_msg *lap_msg)
 {
 	return lap_msg->offset63 >> 3;
 }
 
-static inline void cm_lap_set_local_ack_timeout(struct cm_lap_msg *lap_msg,
-						u8 local_ack_timeout)
-{
-	lap_msg->offset63 = (local_ack_timeout << 3) |
-			    (lap_msg->offset63 & 0x07);
-}
-
 struct cm_apr_msg {
 	struct ib_mad_hdr hdr;
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 43/48] RDMA/cm: Convert LAP flow label field
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (41 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 42/48] RDMA/cm: Delete unused CM LAP functions Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 44/48] RDMA/cm: Convert LAP fields Leon Romanovsky
                   ` (6 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert LAP flow label field.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 3 ++-
 drivers/infiniband/core/cm_msgs.h | 5 -----
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index ec2206b9dd14..b41ee97cda86 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -3124,7 +3124,8 @@ static void cm_format_path_from_lap(struct cm_id_private *cm_id_priv,
 {
 	path->dgid = lap_msg->alt_local_gid;
 	path->sgid = lap_msg->alt_remote_gid;
-	path->flow_label = cm_lap_get_flow_label(lap_msg);
+	path->flow_label =
+		cpu_to_be32(IBA_GET(CM_LAP_ALTERNATE_FLOW_LABEL, lap_msg));
 	path->hop_limit = lap_msg->alt_hop_limit;
 	path->traffic_class = cm_lap_get_traffic_class(lap_msg);
 	path->reversible = 1;
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 2172a5c53fbd..978eff812ce1 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -192,11 +192,6 @@ struct cm_lap_msg {
 	u8 private_data[CM_LAP_PRIVATE_DATA_SIZE];
 } __packed;
 
-static inline __be32 cm_lap_get_flow_label(struct cm_lap_msg *lap_msg)
-{
-	return cpu_to_be32(be32_to_cpu(lap_msg->offset56) >> 12);
-}
-
 static inline u8 cm_lap_get_traffic_class(struct cm_lap_msg *lap_msg)
 {
 	return (u8) be32_to_cpu(lap_msg->offset56);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 44/48] RDMA/cm: Convert LAP fields
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (42 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 43/48] RDMA/cm: Convert LAP flow label field Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 45/48] RDMA/cm: Delete unused CM ARP functions Leon Romanovsky
                   ` (5 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Convert LAP fields to the new scheme.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      |  9 +++++----
 drivers/infiniband/core/cm_msgs.h | 20 --------------------
 2 files changed, 5 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index b41ee97cda86..0db15799969f 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -3127,16 +3127,17 @@ static void cm_format_path_from_lap(struct cm_id_private *cm_id_priv,
 	path->flow_label =
 		cpu_to_be32(IBA_GET(CM_LAP_ALTERNATE_FLOW_LABEL, lap_msg));
 	path->hop_limit = lap_msg->alt_hop_limit;
-	path->traffic_class = cm_lap_get_traffic_class(lap_msg);
+	path->traffic_class = IBA_GET(CM_LAP_ALTERNATE_TRAFFIC_CLASS, lap_msg);
 	path->reversible = 1;
 	path->pkey = cm_id_priv->pkey;
-	path->sl = cm_lap_get_sl(lap_msg);
+	path->sl = IBA_GET(CM_LAP_ALTERNATE_SL, lap_msg);
 	path->mtu_selector = IB_SA_EQ;
 	path->mtu = cm_id_priv->path_mtu;
 	path->rate_selector = IB_SA_EQ;
-	path->rate = cm_lap_get_packet_rate(lap_msg);
+	path->rate = IBA_GET(CM_LAP_ALTERNATE_PACKET_RATE, lap_msg);
 	path->packet_life_time_selector = IB_SA_EQ;
-	path->packet_life_time = cm_lap_get_local_ack_timeout(lap_msg);
+	path->packet_life_time =
+		IBA_GET(CM_LAP_ALTERNATE_LOCAL_ACK_TIMEOUT, lap_msg);
 	path->packet_life_time -= (path->packet_life_time > 0);
 	cm_format_path_lid_from_lap(lap_msg, path);
 }
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 978eff812ce1..0f3f9f3cd1cb 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -192,26 +192,6 @@ struct cm_lap_msg {
 	u8 private_data[CM_LAP_PRIVATE_DATA_SIZE];
 } __packed;
 
-static inline u8 cm_lap_get_traffic_class(struct cm_lap_msg *lap_msg)
-{
-	return (u8) be32_to_cpu(lap_msg->offset56);
-}
-
-static inline u8 cm_lap_get_packet_rate(struct cm_lap_msg *lap_msg)
-{
-	return lap_msg->offset61 & 0x3F;
-}
-
-static inline u8 cm_lap_get_sl(struct cm_lap_msg *lap_msg)
-{
-	return lap_msg->offset62 >> 4;
-}
-
-static inline u8 cm_lap_get_local_ack_timeout(struct cm_lap_msg *lap_msg)
-{
-	return lap_msg->offset63 >> 3;
-}
-
 struct cm_apr_msg {
 	struct ib_mad_hdr hdr;
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 45/48] RDMA/cm: Delete unused CM ARP functions
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (43 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 44/48] RDMA/cm: Convert LAP fields Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 46/48] RDMA/cm: Convert SIDR_REP to new scheme Leon Romanovsky
                   ` (4 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Clean the code by deleting the APR (alternate path response) functions, which are not called anyway.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c | 66 ------------------------------------
 include/rdma/ib_cm.h         | 34 -------------------
 2 files changed, 100 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 0db15799969f..41422bf13279 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -3238,72 +3238,6 @@ deref:	cm_deref_id(cm_id_priv);
 	return -EINVAL;
 }
 
-static void cm_format_apr(struct cm_apr_msg *apr_msg,
-			  struct cm_id_private *cm_id_priv,
-			  enum ib_cm_apr_status status,
-			  void *info,
-			  u8 info_length,
-			  const void *private_data,
-			  u8 private_data_len)
-{
-	cm_format_mad_hdr(&apr_msg->hdr, CM_APR_ATTR_ID, cm_id_priv->tid);
-	apr_msg->local_comm_id = cm_id_priv->id.local_id;
-	apr_msg->remote_comm_id = cm_id_priv->id.remote_id;
-	apr_msg->ap_status = (u8) status;
-
-	if (info && info_length) {
-		apr_msg->info_length = info_length;
-		memcpy(apr_msg->info, info, info_length);
-	}
-
-	if (private_data && private_data_len)
-		memcpy(apr_msg->private_data, private_data, private_data_len);
-}
-
-int ib_send_cm_apr(struct ib_cm_id *cm_id,
-		   enum ib_cm_apr_status status,
-		   void *info,
-		   u8 info_length,
-		   const void *private_data,
-		   u8 private_data_len)
-{
-	struct cm_id_private *cm_id_priv;
-	struct ib_mad_send_buf *msg;
-	unsigned long flags;
-	int ret;
-
-	if ((private_data && private_data_len > CM_APR_PRIVATE_DATA_SIZE) ||
-	    (info && info_length > CM_APR_ADDITIONAL_INFORMATION_SIZE))
-		return -EINVAL;
-
-	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
-	spin_lock_irqsave(&cm_id_priv->lock, flags);
-	if (cm_id->state != IB_CM_ESTABLISHED ||
-	    (cm_id->lap_state != IB_CM_LAP_RCVD &&
-	     cm_id->lap_state != IB_CM_MRA_LAP_SENT)) {
-		ret = -EINVAL;
-		goto out;
-	}
-
-	ret = cm_alloc_msg(cm_id_priv, &msg);
-	if (ret)
-		goto out;
-
-	cm_format_apr((struct cm_apr_msg *) msg->mad, cm_id_priv, status,
-		      info, info_length, private_data, private_data_len);
-	ret = ib_post_send_mad(msg, NULL);
-	if (ret) {
-		spin_unlock_irqrestore(&cm_id_priv->lock, flags);
-		cm_free_msg(msg);
-		return ret;
-	}
-
-	cm_id->lap_state = IB_CM_LAP_IDLE;
-out:	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
-	return ret;
-}
-EXPORT_SYMBOL(ib_send_cm_apr);
-
 static int cm_apr_handler(struct cm_work *work)
 {
 	struct cm_id_private *cm_id_priv;
diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index 6237c369dbd6..adccdc12b8e3 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -483,21 +483,6 @@ int ib_send_cm_mra(struct ib_cm_id *cm_id,
 		   const void *private_data,
 		   u8 private_data_len);
 
-/**
- * ib_send_cm_lap - Sends a load alternate path request.
- * @cm_id: Connection identifier associated with the load alternate path
- *   message.
- * @alternate_path: A path record that identifies the alternate path to
- *   load.
- * @private_data: Optional user-defined private data sent with the
- *   load alternate path message.
- * @private_data_len: Size of the private data buffer, in bytes.
- */
-int ib_send_cm_lap(struct ib_cm_id *cm_id,
-		   struct sa_path_rec *alternate_path,
-		   const void *private_data,
-		   u8 private_data_len);
-
 /**
  * ib_cm_init_qp_attr - Initializes the QP attributes for use in transitioning
  *   to a specified QP state.
@@ -518,25 +503,6 @@ int ib_cm_init_qp_attr(struct ib_cm_id *cm_id,
 		       struct ib_qp_attr *qp_attr,
 		       int *qp_attr_mask);
 
-/**
- * ib_send_cm_apr - Sends an alternate path response message in response to
- *   a load alternate path request.
- * @cm_id: Connection identifier associated with the alternate path response.
- * @status: Reply status sent with the alternate path response.
- * @info: Optional additional information sent with the alternate path
- *   response.
- * @info_length: Size of the additional information, in bytes.
- * @private_data: Optional user-defined private data sent with the
- *   alternate path response message.
- * @private_data_len: Size of the private data buffer, in bytes.
- */
-int ib_send_cm_apr(struct ib_cm_id *cm_id,
-		   enum ib_cm_apr_status status,
-		   void *info,
-		   u8 info_length,
-		   const void *private_data,
-		   u8 private_data_len);
-
 struct ib_cm_sidr_req_param {
 	struct sa_path_rec	*path;
 	const struct ib_gid_attr *sgid_attr;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 46/48] RDMA/cm: Convert SIDR_REP to new scheme
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (44 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 45/48] RDMA/cm: Delete unused CM ARP functions Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 47/48] RDMA/cm: Add Enhanced Connection Establishment (ECE) bits Leon Romanovsky
                   ` (3 subsequent siblings)
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Use new scheme to access SIDR_REP fields.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c      | 15 ++++++++-------
 drivers/infiniband/core/cm_msgs.h | 14 --------------
 2 files changed, 8 insertions(+), 21 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 41422bf13279..f197f9740362 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -3483,10 +3483,10 @@ static void cm_format_sidr_rep(struct cm_sidr_rep_msg *sidr_rep_msg,
 	cm_format_mad_hdr(&sidr_rep_msg->hdr, CM_SIDR_REP_ATTR_ID,
 			  cm_id_priv->tid);
 	sidr_rep_msg->request_id = cm_id_priv->id.remote_id;
-	sidr_rep_msg->status = param->status;
-	cm_sidr_rep_set_qpn(sidr_rep_msg, cpu_to_be32(param->qp_num));
+	IBA_SET(CM_SIDR_REP_STATUS, sidr_rep_msg, param->status);
+	IBA_SET(CM_SIDR_REP_QPN, sidr_rep_msg, param->qp_num);
 	sidr_rep_msg->service_id = cm_id_priv->id.service_id;
-	sidr_rep_msg->qkey = cpu_to_be32(param->qkey);
+	IBA_SET(CM_SIDR_REP_Q_KEY, sidr_rep_msg, param->qkey);
 
 	if (param->info && param->info_length)
 		memcpy(sidr_rep_msg->info, param->info, param->info_length);
@@ -3554,11 +3554,12 @@ static void cm_format_sidr_rep_event(struct cm_work *work,
 	sidr_rep_msg = (struct cm_sidr_rep_msg *)
 				work->mad_recv_wc->recv_buf.mad;
 	param = &work->cm_event.param.sidr_rep_rcvd;
-	param->status = sidr_rep_msg->status;
-	param->qkey = be32_to_cpu(sidr_rep_msg->qkey);
-	param->qpn = be32_to_cpu(cm_sidr_rep_get_qpn(sidr_rep_msg));
+	param->status = IBA_GET(CM_SIDR_REP_STATUS, sidr_rep_msg);
+	param->qkey = IBA_GET(CM_SIDR_REP_Q_KEY, sidr_rep_msg);
+	param->qpn = IBA_GET(CM_SIDR_REP_QPN, sidr_rep_msg);
 	param->info = &sidr_rep_msg->info;
-	param->info_len = sidr_rep_msg->info_length;
+	param->info_len =
+		IBA_GET(CM_SIDR_REP_ADDITIONAL_INFORMATION_LENGTH, sidr_rep_msg);
 	param->sgid_attr = cm_id_priv->av.ah_attr.grh.sgid_attr;
 	work->cm_event.private_data = &sidr_rep_msg->private_data;
 	work->cm_event.private_data_len = CM_SIDR_REP_PRIVATE_DATA_SIZE;
diff --git a/drivers/infiniband/core/cm_msgs.h b/drivers/infiniband/core/cm_msgs.h
index 0f3f9f3cd1cb..ee3bd6f7dc47 100644
--- a/drivers/infiniband/core/cm_msgs.h
+++ b/drivers/infiniband/core/cm_msgs.h
@@ -232,18 +232,4 @@ struct cm_sidr_rep_msg {
 
 	u8 private_data[CM_SIDR_REP_PRIVATE_DATA_SIZE];
 } __packed;
-
-static inline __be32 cm_sidr_rep_get_qpn(struct cm_sidr_rep_msg *sidr_rep_msg)
-{
-	return cpu_to_be32(be32_to_cpu(sidr_rep_msg->offset8) >> 8);
-}
-
-static inline void cm_sidr_rep_set_qpn(struct cm_sidr_rep_msg *sidr_rep_msg,
-				       __be32 qpn)
-{
-	sidr_rep_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
-					(be32_to_cpu(sidr_rep_msg->offset8) &
-					 0x000000FF));
-}
-
 #endif /* CM_MSGS_H */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 47/48] RDMA/cm: Add Enhanced Connection Establishment (ECE) bits
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (45 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 46/48] RDMA/cm: Convert SIDR_REP to new scheme Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-17 12:33   ` Leon Romanovsky
  2019-12-12  9:38 ` [PATCH rdma-rc v2 48/48] RDMA/cm: Convert private_date access Leon Romanovsky
                   ` (2 subsequent siblings)
  49 siblings, 1 reply; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Extend the REQ (request for communication), REP (reply to request
for communication), reject reason and SIDR_REP (service ID
resolution response) structures with hardware vendor ID bits
according to the approved IBA Comment #9434.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 include/rdma/ib_cm.h         | 3 ++-
 include/rdma/ibta_vol1_c12.h | 5 +++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/include/rdma/ib_cm.h b/include/rdma/ib_cm.h
index adccdc12b8e3..72348475eee8 100644
--- a/include/rdma/ib_cm.h
+++ b/include/rdma/ib_cm.h
@@ -147,7 +147,8 @@ enum ib_cm_rej_reason {
 	IB_CM_REJ_DUPLICATE_LOCAL_COMM_ID	= 30,
 	IB_CM_REJ_INVALID_CLASS_VERSION		= 31,
 	IB_CM_REJ_INVALID_FLOW_LABEL		= 32,
-	IB_CM_REJ_INVALID_ALT_FLOW_LABEL	= 33
+	IB_CM_REJ_INVALID_ALT_FLOW_LABEL	= 33,
+	IB_CM_REJ_VENDOR_OPTION_NOT_SUPPORTED	= 35
 };
 
 struct ib_cm_rej_event_param {
diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index f937865fe6b5..6fc4f1b89ca6 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -112,8 +112,11 @@
 #define CM_REP_REMOTE_COMM_ID CM_FIELD32_LOC(struct cm_rep_msg, 4, 32)
 #define CM_REP_LOCAL_Q_KEY CM_FIELD32_LOC(struct cm_rep_msg, 8, 32)
 #define CM_REP_LOCAL_QPN CM_FIELD32_LOC(struct cm_rep_msg, 12, 24)
+#define CM_REP_VENDORID_H CM_FIELD8_LOC(struct cm_rep_msg, 15, 8)
 #define CM_REP_LOCAL_EE_CONTEXT_NUMBER CM_FIELD32_LOC(struct cm_rep_msg, 16, 24)
+#define CM_REP_VENDORID_M CM_FIELD8_LOC(struct cm_rep_msg, 19, 8)
 #define CM_REP_STARTING_PSN CM_FIELD32_LOC(struct cm_rep_msg, 20, 24)
+#define CM_REP_VENDORID_L CM_FIELD8_LOC(struct cm_rep_msg, 23, 8)
 #define CM_REP_RESPONDER_RESOURCES CM_FIELD8_LOC(struct cm_rep_msg, 24, 8)
 #define CM_REP_INITIATOR_DEPTH CM_FIELD8_LOC(struct cm_rep_msg, 25, 8)
 #define CM_REP_TARGET_ACK_DELAY CM_FIELD8_LOC(struct cm_rep_msg, 26, 5)
@@ -194,7 +197,9 @@
 #define CM_SIDR_REP_STATUS CM_FIELD8_LOC(struct cm_sidr_rep_msg, 4, 8)
 #define CM_SIDR_REP_ADDITIONAL_INFORMATION_LENGTH                              \
 	CM_FIELD8_LOC(struct cm_sidr_rep_msg, 5, 8)
+#define CM_SIDR_REP_VENDORID_H CM_FIELD16_LOC(struct cm_sidr_rep_msg, 6, 16)
 #define CM_SIDR_REP_QPN CM_FIELD32_LOC(struct cm_sidr_rep_msg, 8, 24)
+#define CM_SIDR_REP_VENDORID_L CM_FIELD8_LOC(struct cm_sidr_rep_msg, 11, 8)
 #define CM_SIDR_REP_SERVICEID CM_FIELD64_LOC(struct cm_sidr_rep_msg, 12, 64)
 #define CM_SIDR_REP_Q_KEY CM_FIELD32_LOC(struct cm_sidr_rep_msg, 20, 32)
 #define CM_SIDR_REP_ADDITIONAL_INFORMATION                                     \
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH rdma-rc v2 48/48] RDMA/cm: Convert private_date access
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (46 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 47/48] RDMA/cm: Add Enhanced Connection Establishment (ECE) bits Leon Romanovsky
@ 2019-12-12  9:38 ` Leon Romanovsky
  2019-12-12 12:06 ` [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
  2020-01-07 18:40 ` Jason Gunthorpe
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12  9:38 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Bart Van Assche, Sean Hefty

From: Leon Romanovsky <leonro@mellanox.com>

Reuse existing IBA accessors to set private data.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/core/cm.c | 43 +++++++++++++++---------------------
 1 file changed, 18 insertions(+), 25 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index f197f9740362..d42a3887057b 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1366,9 +1366,8 @@ static void cm_format_req(struct cm_req_msg *req_msg,
 				      alt_path->packet_life_time));
 	}
 
-	if (param->private_data && param->private_data_len)
-		memcpy(req_msg->private_data, param->private_data,
-		       param->private_data_len);
+	IBA_SET_MEM(CM_REQ_PRIVATE_DATA, req_msg, param->private_data,
+		    param->private_data_len);
 }
 
 static int cm_validate_req_param(struct ib_cm_req_param *param)
@@ -1750,8 +1749,8 @@ static void cm_format_mra(struct cm_mra_msg *mra_msg,
 	mra_msg->remote_comm_id = cm_id_priv->id.remote_id;
 	IBA_SET(CM_MRA_SERVICE_TIMEOUT, mra_msg, service_timeout);
 
-	if (private_data && private_data_len)
-		memcpy(mra_msg->private_data, private_data, private_data_len);
+	IBA_SET_MEM(CM_MRA_PRIVATE_DATA, mra_msg, private_data,
+		    private_data_len);
 }
 
 static void cm_format_rej(struct cm_rej_msg *rej_msg,
@@ -1791,8 +1790,8 @@ static void cm_format_rej(struct cm_rej_msg *rej_msg,
 		memcpy(rej_msg->ari, ari, ari_length);
 	}
 
-	if (private_data && private_data_len)
-		memcpy(rej_msg->private_data, private_data, private_data_len);
+	IBA_SET_MEM(CM_REJ_PRIVATE_DATA, rej_msg, private_data,
+		    private_data_len);
 }
 
 static void cm_dup_req_handler(struct cm_work *work,
@@ -2084,9 +2083,8 @@ static void cm_format_rep(struct cm_rep_msg *rep_msg,
 		IBA_SET(CM_REP_LOCAL_EE_CONTEXT_NUMBER, rep_msg, param->qp_num);
 	}
 
-	if (param->private_data && param->private_data_len)
-		memcpy(rep_msg->private_data, param->private_data,
-		       param->private_data_len);
+	IBA_SET_MEM(CM_REP_PRIVATE_DATA, rep_msg, param->private_data,
+		    param->private_data_len);
 }
 
 int ib_send_cm_rep(struct ib_cm_id *cm_id,
@@ -2152,8 +2150,8 @@ static void cm_format_rtu(struct cm_rtu_msg *rtu_msg,
 	rtu_msg->local_comm_id = cm_id_priv->id.local_id;
 	rtu_msg->remote_comm_id = cm_id_priv->id.remote_id;
 
-	if (private_data && private_data_len)
-		memcpy(rtu_msg->private_data, private_data, private_data_len);
+	IBA_SET_MEM(CM_RTU_PRIVATE_DATA, rtu_msg, private_data,
+		    private_data_len);
 }
 
 int ib_send_cm_rtu(struct ib_cm_id *cm_id,
@@ -2482,9 +2480,8 @@ static void cm_format_dreq(struct cm_dreq_msg *dreq_msg,
 	dreq_msg->local_comm_id = cm_id_priv->id.local_id;
 	dreq_msg->remote_comm_id = cm_id_priv->id.remote_id;
 	IBA_SET(CM_DREQ_REMOTE_QPN_EECN, dreq_msg, cm_id_priv->remote_qpn);
-
-	if (private_data && private_data_len)
-		memcpy(dreq_msg->private_data, private_data, private_data_len);
+	IBA_SET_MEM(CM_DREQ_PRIVATE_DATA, dreq_msg, private_data,
+		    private_data_len);
 }
 
 int ib_send_cm_dreq(struct ib_cm_id *cm_id,
@@ -2546,9 +2543,8 @@ static void cm_format_drep(struct cm_drep_msg *drep_msg,
 	cm_format_mad_hdr(&drep_msg->hdr, CM_DREP_ATTR_ID, cm_id_priv->tid);
 	drep_msg->local_comm_id = cm_id_priv->id.local_id;
 	drep_msg->remote_comm_id = cm_id_priv->id.remote_id;
-
-	if (private_data && private_data_len)
-		memcpy(drep_msg->private_data, private_data, private_data_len);
+	IBA_SET_MEM(CM_DREP_PRIVATE_DATA, drep_msg, private_data,
+		    private_data_len);
 }
 
 int ib_send_cm_drep(struct ib_cm_id *cm_id,
@@ -3487,13 +3483,10 @@ static void cm_format_sidr_rep(struct cm_sidr_rep_msg *sidr_rep_msg,
 	IBA_SET(CM_SIDR_REP_QPN, sidr_rep_msg, param->qp_num);
 	sidr_rep_msg->service_id = cm_id_priv->id.service_id;
 	IBA_SET(CM_SIDR_REP_Q_KEY, sidr_rep_msg, param->qkey);
-
-	if (param->info && param->info_length)
-		memcpy(sidr_rep_msg->info, param->info, param->info_length);
-
-	if (param->private_data && param->private_data_len)
-		memcpy(sidr_rep_msg->private_data, param->private_data,
-		       param->private_data_len);
+	IBA_SET_MEM(CM_SIDR_REP_ADDITIONAL_INFORMATION, sidr_rep_msg,
+		    param->info, param->info_length);
+	IBA_SET_MEM(CM_SIDR_REP_PRIVATE_DATA, sidr_rep_msg, param->private_data,
+		    param->private_data_len);
 }
 
 int ib_send_cm_sidr_rep(struct ib_cm_id *cm_id,
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (47 preceding siblings ...)
  2019-12-12  9:38 ` [PATCH rdma-rc v2 48/48] RDMA/cm: Convert private_date access Leon Romanovsky
@ 2019-12-12 12:06 ` Leon Romanovsky
  2020-01-07 18:40 ` Jason Gunthorpe
  49 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-12 12:06 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: RDMA mailing list, Bart Van Assche, Sean Hefty

Of course, this series is for -next.
I had a bug in my submission scripts.

Thanks

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH rdma-rc v2 47/48] RDMA/cm: Add Enhanced Connection Establishment (ECE) bits
  2019-12-12  9:38 ` [PATCH rdma-rc v2 47/48] RDMA/cm: Add Enhanced Connection Establishment (ECE) bits Leon Romanovsky
@ 2019-12-17 12:33   ` Leon Romanovsky
  0 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2019-12-17 12:33 UTC (permalink / raw)
  To: Doug Ledford, Jason Gunthorpe
  Cc: RDMA mailing list, Bart Van Assche, Sean Hefty

On Thu, Dec 12, 2019 at 11:38:29AM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@mellanox.com>
>
> Extend REQ (request for communications), REP (reply to request
> for communication), rejected reason and SIDR_REP (service ID
> resolution response) structures with hardware vendor ID bits
> according to approved IBA Comment #9434.
>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> ---
>  include/rdma/ib_cm.h         | 3 ++-
>  include/rdma/ibta_vol1_c12.h | 5 +++++
>  2 files changed, 7 insertions(+), 1 deletion(-)
>


commit 0238bd12e7601ed09725feba43ca2b228ba4d5a3 (HEAD -> rdma-next)
Author: Leon Romanovsky <leonro@mellanox.com>
Date:   Tue Dec 17 14:30:22 2019 +0200

    fixup! RDMA/cm: Add Enhanced Connection Establishment (ECE) bits

    Signed-off-by: Leon Romanovsky <leonro@mellanox.com>

diff --git a/include/rdma/ibta_vol1_c12.h b/include/rdma/ibta_vol1_c12.h
index 6fc4f1b89ca6..d642a040be18 100644
--- a/include/rdma/ibta_vol1_c12.h
+++ b/include/rdma/ibta_vol1_c12.h
@@ -32,6 +32,7 @@

 /* Table 106 REQ Message Contents */
 #define CM_REQ_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_req_msg, 0, 32)
+#define CM_REQ_VENDORID CM_FIELD32_LOC(struct cm_req_msg, 5, 24)
 #define CM_REQ_SERVICE_ID CM_FIELD64_LOC(struct cm_req_msg, 8, 64)
 #define CM_REQ_LOCAL_CA_GUID CM_FIELD64_LOC(struct cm_req_msg, 16, 64)
 #define CM_REQ_LOCAL_Q_KEY CM_FIELD32_LOC(struct cm_req_msg, 28, 32)


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH rdma-rc v2 04/48] RDMA/cm: Add SET/GET implementations to hide IBA wire format
  2019-12-12  9:37 ` [PATCH rdma-rc v2 04/48] RDMA/cm: Add SET/GET implementations to hide IBA wire format Leon Romanovsky
@ 2020-01-04  2:15   ` Jason Gunthorpe
  0 siblings, 0 replies; 57+ messages in thread
From: Jason Gunthorpe @ 2020-01-04  2:15 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, Leon Romanovsky, RDMA mailing list,
	Bart Van Assche, Sean Hefty

On Thu, Dec 12, 2019 at 11:37:46AM +0200, Leon Romanovsky wrote:

> +#define IBA_FIELD64_LOC(field_struct, byte_offset, num_bits)                   \
> +	field_struct, (byte_offset)&0xFFF8,                                    \
> +		GENMASK_ULL(63 - (((byte_offset) % 8) * 8),                    \
> +			    63 - (((byte_offset) % 8) * 8) - (num_bits - 1)),  \
> +		64

This doesn't quite work out right: the 64-bit fields are not naturally
aligned, and we never extract anything other than 64 bits from them. So
as written this one:

#define CM_SIDR_REP_SERVICEID CM_FIELD64_LOC(struct cm_sidr_rep_msg, 12, 64)

Gives a compilation failure.

Should be 

#define IBA_FIELD64_LOC(field_struct, byte_offset)                             \
	field_struct, (byte_offset)&0xFFF8, GENMASK_ULL(63, 0), 64

As we rely on the get_unaligned() to safely retrieve the 64 bits.
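
For illustration, a minimal sketch of what the 64-bit getter then boils
down to (the helper name is made up here, it is not the macro from the
series; it only shows the get_unaligned_be64() reliance):

static inline u64 iba_get64_sketch(const void *msg, unsigned int byte_offset)
{
	/* Misaligned load straight off the wire, big endian to CPU order */
	return get_unaligned_be64((const u8 *)msg + byte_offset);
}

e.g. iba_get64_sketch(sidr_rep_msg, 12) for the ServiceID field quoted above.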

Jason

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH rdma-rc v2 05/48] RDMA/cm: Request For Communication (REQ) message definitions
  2019-12-12  9:37 ` [PATCH rdma-rc v2 05/48] RDMA/cm: Request For Communication (REQ) message definitions Leon Romanovsky
@ 2020-01-07  1:17   ` Jason Gunthorpe
  0 siblings, 0 replies; 57+ messages in thread
From: Jason Gunthorpe @ 2020-01-07  1:17 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, Leon Romanovsky, RDMA mailing list,
	Bart Van Assche, Sean Hefty

On Thu, Dec 12, 2019 at 11:37:47AM +0200, Leon Romanovsky wrote:

> +/* Table 106 REQ Message Contents */
> +#define CM_REQ_LOCAL_COMM_ID CM_FIELD32_LOC(struct cm_req_msg, 0, 32)
> +#define CM_REQ_SERVICE_ID CM_FIELD64_LOC(struct cm_req_msg, 8, 64)
> +#define CM_REQ_LOCAL_CA_GUID CM_FIELD64_LOC(struct cm_req_msg, 16, 64)
> +#define CM_REQ_LOCAL_Q_KEY CM_FIELD32_LOC(struct cm_req_msg, 28, 32)
> +#define CM_REQ_LOCAL_QPN CM_FIELD32_LOC(struct cm_req_msg, 32, 24)
> +#define CM_REQ_RESPONDED_RESOURCES CM_FIELD8_LOC(struct cm_req_msg, 35, 8)

RESPONDER not RESPONDED

> +#define CM_REQ_PRIMARY_LOCAL_PORT_LID CM_FIELD16_LOC(struct cm_req_msg, 52, 16)
> +#define CM_REQ_PRIMARY_REMOTE_PORT_LID CM_FIELD16_LOC(struct cm_req_msg, 54, 16)
> +#define CM_REQ_PRIMARY_LOCAL_PORT_GID CM_FIELD_MLOC(struct cm_req_msg, 56, 128)
> +#define CM_REQ_PRIMARY_REMOTE_PORT_GID CM_FIELD_MLOC(struct cm_req_msg, 72, 128)
> +#define CM_REQ_PRIMARY_FLOW_LABEL CM_FIELD32_LOC(struct cm_req_msg, 88, 20)
> +#define CM_REQ_PRIMARY_PACKET_RATE CM_FIELD_BLOC(struct cm_req_msg, 91, 2, 2)

This field is 6 bits wide, not two. This is the only mistake in the
field layouts.
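
Presumably the fixed definition then ends up as something like this
(assuming the same CM_FIELD_BLOC(type, byte_offset, bits_offset,
num_bits) argument order used above):

#define CM_REQ_PRIMARY_PACKET_RATE CM_FIELD_BLOC(struct cm_req_msg, 91, 2, 6)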

Jason

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout
  2019-12-12  9:37 [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
                   ` (48 preceding siblings ...)
  2019-12-12 12:06 ` [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout Leon Romanovsky
@ 2020-01-07 18:40 ` Jason Gunthorpe
  2020-01-16  7:32   ` Leon Romanovsky
  49 siblings, 1 reply; 57+ messages in thread
From: Jason Gunthorpe @ 2020-01-07 18:40 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, Leon Romanovsky, RDMA mailing list,
	Bart Van Assche, Sean Hefty

On Thu, Dec 12, 2019 at 11:37:42AM +0200, Leon Romanovsky wrote:
>   RDMA/cm: Delete unused CM LAP functions
>   RDMA/cm: Delete unused CM ARP functions

These are applied to for-next

> Leon Romanovsky (48):
>   RDMA/cm: Provide private data size to CM users
>   RDMA/srpt: Use private_data_len instead of hardcoded value
>   RDMA/ucma: Mask QPN to be 24 bits according to IBTA
>   RDMA/cm: Add SET/GET implementations to hide IBA wire format
>   RDMA/cm: Request For Communication (REQ) message definitions
>   RDMA/cm: Message Receipt Acknowledgment (MRA) message definitions
>   RDMA/cm: Reject (REJ) message definitions
>   RDMA/cm: Reply To Request for communication (REP) definitions
>   RDMA/cm: Ready To Use (RTU) definitions
>   RDMA/cm: Request For Communication Release (DREQ) definitions
>   RDMA/cm: Reply To Request For Communication Release (DREP) definitions
>   RDMA/cm: Load Alternate Path (LAP) definitions
>   RDMA/cm: Alternate Path Response (APR) message definitions
>   RDMA/cm: Service ID Resolution Request (SIDR_REQ) definitions
>   RDMA/cm: Service ID Resolution Response (SIDR_REP) definitions
>   RDMA/cm: Convert QPN and EECN to be u32 variables
>   RDMA/cm: Convert REQ responded resources to the new scheme
>   RDMA/cm: Convert REQ initiator depth to the new scheme
>   RDMA/cm: Convert REQ remote response timeout
>   RDMA/cm: Simplify QP type to wire protocol translation
>   RDMA/cm: Convert REQ flow control
>   RDMA/cm: Convert starting PSN to be u32 variable
>   RDMA/cm: Update REQ local response timeout
>   RDMA/cm: Convert REQ retry count to use new scheme
>   RDMA/cm: Update REQ path MTU field
>   RDMA/cm: Convert REQ RNR retry timeout counter
>   RDMA/cm: Convert REQ MAX CM retries
>   RDMA/cm: Convert REQ SRQ field
>   RDMA/cm: Convert REQ flow label field
>   RDMA/cm: Convert REQ packet rate
>   RDMA/cm: Convert REQ SL fields
>   RDMA/cm: Convert REQ subnet local fields
>   RDMA/cm: Convert REQ local ack timeout
>   RDMA/cm: Convert MRA MRAed field
>   RDMA/cm: Convert MRA service timeout
>   RDMA/cm: Update REJ struct to use new scheme
>   RDMA/cm: Convert REP target ack delay field
>   RDMA/cm: Convert REP failover accepted field
>   RDMA/cm: Convert REP flow control field
>   RDMA/cm: Convert REP RNR retry count field
>   RDMA/cm: Convert REP SRQ field
>   RDMA/cm: Convert LAP flow label field
>   RDMA/cm: Convert LAP fields
>   RDMA/cm: Convert SIDR_REP to new scheme
>   RDMA/cm: Add Enhanced Connection Establishment (ECE) bits
>   RDMA/cm: Convert private_date access

I spent a long, long time looking at this. Far too long.

The series is too big, and the patches make too many changes all at
once. There are also many problems with the IBA_GET/etc macros I gave
you. Finally, I didn't like that it only did half the job and still
left the old structs around.

I fixed it all up, and put it here:

https://github.com/jgunthorpe/linux/commits/cm_rework

I originally started by just writing out the IBA_CHECK things in the
first patch. This showed that the IBA_ accessors were not working right
(I fixed all those too). At the end of that exercise I had full
confidence that the new macros and the field descriptors were OK.

When I started to look at the actual conversion patches and to do the
missing ones, I realized this whole thing could be done trivially via
spatch. So I made a script that took the proven mapping of new names
to old names and had it code-generate a spatch script, which was then
applied.
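
For example, each generated rule mechanically rewrites an old accessor
call such as

	cm_rep_get_srq(rep_msg)

into

	IBA_GET(CM_REP_SRQ, rep_msg)

which is the same pairing the hand-written conversions earlier in this
series did; the script just emits such a rule for every (old, new) pair
in the mapping.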

I split the spatch rules into 4 patches based on the 'kind of thing'
being converted.

The first two can be diffed against your series. I didn't observe any
problems, so the conversion was probably good. However, it was hard to
tell, as there were lots of functional changes mixed into your series,
like dropping more BE's and whatnot.

The last two complete the work and convert all the loose structure
members. The final one deletes most of cm_msgs.h.

I have a pretty high confidence in the spatch process and the input
markup. But I didn't run sparse or test it.

While this does not do everything your series did, it gobbles up all
the high-LOC stuff, and the remaining things, like dropping more of the
be's, are best done as smaller followup patches which can be applied
right away.

The full diffstat is ridiculous:
 5 files changed, 852 insertions(+), 1253 deletions(-)

Please check the revised series and let me know.

Thanks,
Jason


* Re: [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout
  2020-01-07 18:40 ` Jason Gunthorpe
@ 2020-01-16  7:32   ` Leon Romanovsky
  2020-01-16 19:24     ` Jason Gunthorpe
  0 siblings, 1 reply; 57+ messages in thread
From: Leon Romanovsky @ 2020-01-16  7:32 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Doug Ledford, RDMA mailing list, Bart Van Assche, Sean Hefty

On Tue, Jan 07, 2020 at 02:40:19PM -0400, Jason Gunthorpe wrote:
> On Thu, Dec 12, 2019 at 11:37:42AM +0200, Leon Romanovsky wrote:
> >   RDMA/cm: Delete unused CM LAP functions
> >   RDMA/cm: Delete unused CM ARP functions
>
> These are applied to for-next
>
> > Leon Romanovsky (48):
> >   RDMA/cm: Provide private data size to CM users
> >   RDMA/srpt: Use private_data_len instead of hardcoded value
> >   RDMA/ucma: Mask QPN to be 24 bits according to IBTA
> >   RDMA/cm: Add SET/GET implementations to hide IBA wire format
> >   RDMA/cm: Request For Communication (REQ) message definitions
> >   RDMA/cm: Message Receipt Acknowledgment (MRA) message definitions
> >   RDMA/cm: Reject (REJ) message definitions
> >   RDMA/cm: Reply To Request for communication (REP) definitions
> >   RDMA/cm: Ready To Use (RTU) definitions
> >   RDMA/cm: Request For Communication Release (DREQ) definitions
> >   RDMA/cm: Reply To Request For Communication Release (DREP) definitions
> >   RDMA/cm: Load Alternate Path (LAP) definitions
> >   RDMA/cm: Alternate Path Response (APR) message definitions
> >   RDMA/cm: Service ID Resolution Request (SIDR_REQ) definitions
> >   RDMA/cm: Service ID Resolution Response (SIDR_REP) definitions
> >   RDMA/cm: Convert QPN and EECN to be u32 variables
> >   RDMA/cm: Convert REQ responded resources to the new scheme
> >   RDMA/cm: Convert REQ initiator depth to the new scheme
> >   RDMA/cm: Convert REQ remote response timeout
> >   RDMA/cm: Simplify QP type to wire protocol translation
> >   RDMA/cm: Convert REQ flow control
> >   RDMA/cm: Convert starting PSN to be u32 variable
> >   RDMA/cm: Update REQ local response timeout
> >   RDMA/cm: Convert REQ retry count to use new scheme
> >   RDMA/cm: Update REQ path MTU field
> >   RDMA/cm: Convert REQ RNR retry timeout counter
> >   RDMA/cm: Convert REQ MAX CM retries
> >   RDMA/cm: Convert REQ SRQ field
> >   RDMA/cm: Convert REQ flow label field
> >   RDMA/cm: Convert REQ packet rate
> >   RDMA/cm: Convert REQ SL fields
> >   RDMA/cm: Convert REQ subnet local fields
> >   RDMA/cm: Convert REQ local ack timeout
> >   RDMA/cm: Convert MRA MRAed field
> >   RDMA/cm: Convert MRA service timeout
> >   RDMA/cm: Update REJ struct to use new scheme
> >   RDMA/cm: Convert REP target ack delay field
> >   RDMA/cm: Convert REP failover accepted field
> >   RDMA/cm: Convert REP flow control field
> >   RDMA/cm: Convert REP RNR retry count field
> >   RDMA/cm: Convert REP SRQ field
> >   RDMA/cm: Convert LAP flow label field
> >   RDMA/cm: Convert LAP fields
> >   RDMA/cm: Convert SIDR_REP to new scheme
> >   RDMA/cm: Add Enhanced Connection Establishment (ECE) bits
> >   RDMA/cm: Convert private_date access
>
> I spent a long, long time looking at this. Far too long.
>
> The series is too big, and the patches make too many changes all at
> once. There are also many problems with the IBA_GET/etc macros I gave
> you. Finally, I didn't like that it only did half the job and still
> left the old structs around.
>
> I fixed it all up, and put it here:
>
> https://github.com/jgunthorpe/linux/commits/cm_rework
>
> I originally started by just writing out the IBA_CHECK things in the
> first patch. This showed that the IBA_ accessors were not working right
> (I fixed all those too). At the end of that exercise I had full
> confidence that the new macros and the field descriptors were OK.
>
> When I started to look at the actual conversion patches and at writing
> the missing ones, I realized this whole thing could be done trivially
> via spatch. So I made a script that took the proven mapping of new
> names to old names and had it generate a spatch script, which was then
> applied.
>
> I split the spatch rules into 4 patches based on the 'kind of thing'
> being converted.
>
> The first two can be diffed against your series. I didn't observe any
> problems, so the conversion was probably good. However, it was hard to
> tell as there were lots of functional changes mixed into your series,
> like dropping more BE's and whatnot.
>
> The last two complete the work and convert all the loose structure
> members. The final one deletes most of cm_msgs.h.
>
> I have a pretty high confidence in the spatch process and the input
> markup. But I didn't run sparse or test it.
>
> While this does not do everything your series did, it gobbles up all
> the high-LOC stuff, and the remaining things, like dropping more of
> the be's, are best done as smaller follow-up patches which can be
> applied right away.
>
> The full diffstat is ridiculous:
>  5 files changed, 852 insertions(+), 1253 deletions(-)
>
> Please check the revised series and let me know.

Hi Jason,

We tested the series and I reviewed it on GitHub; everything looks
amazing, and I have only three nitpicks.

1. "exta" -> "extra"
2. IMHO, you don't need to include your selftest in the final patches,
because the whole series is going to be accepted and that code will be
added and deleted at the same time. Especially the printk part.
3. The copyright year needs to be 2020.

Thanks,
Tested-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>


* Re: [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout
  2020-01-16  7:32   ` Leon Romanovsky
@ 2020-01-16 19:24     ` Jason Gunthorpe
  2020-01-16 19:31       ` Leon Romanovsky
  0 siblings, 1 reply; 57+ messages in thread
From: Jason Gunthorpe @ 2020-01-16 19:24 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, RDMA mailing list, Bart Van Assche, Sean Hefty

On Thu, Jan 16, 2020 at 09:32:08AM +0200, Leon Romanovsky wrote:

> 2. IMHO, you don't need to include your selftest in the final patches,
> because the whole series is going to be accepted and that code will be
> added and deleted at the same time. Especially the printk part.

I like seeing the tests. For a patch like this, which is so tedious to
review, it makes the review a check of the tests, a check of the
spatch and some spot checks of the transformations.

Since it is a small number of lines, and it is much easier than
sending the tests separately, it felt reasonable to leave them in the
history.

Will you be able to send the _be removal conversions you had done on
top of this?

I didn't show it, but all the private_data_len, etc. should go through
some generic IBA_NUM_BYTES()-style accessor, like get/set, instead of
more #defines.
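
Something like the sketch below is the idea - just to show the shape;
the names and layout here are made up, not a final proposal:

  #include <linux/types.h>

  /* Describe a byte-array field (e.g. private data) by offset + length */
  struct iba_mem_field {
          unsigned int byte_off;   /* start of the array in the message */
          unsigned int num_bytes;  /* spec-defined length */
  };

  /* Pointer to the array inside the MAD, for memcpy in either direction */
  static inline void *iba_get_mem(void *msg, struct iba_mem_field f)
  {
          return (u8 *)msg + f.byte_off;
  }

  /* Spec-defined size, replacing per-message private data size #defines */
  static inline unsigned int iba_num_bytes(struct iba_mem_field f)
  {
          return f.num_bytes;
  }

Then copying private data in and out goes through
iba_get_mem()/iba_num_bytes() and the hard-coded offsets and sizes can
go away.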

Jason


* Re: [PATCH rdma-rc v2 00/48] Organize code according to IBTA layout
  2020-01-16 19:24     ` Jason Gunthorpe
@ 2020-01-16 19:31       ` Leon Romanovsky
  0 siblings, 0 replies; 57+ messages in thread
From: Leon Romanovsky @ 2020-01-16 19:31 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Doug Ledford, RDMA mailing list, Bart Van Assche, Sean Hefty

On Thu, Jan 16, 2020 at 03:24:04PM -0400, Jason Gunthorpe wrote:
> On Thu, Jan 16, 2020 at 09:32:08AM +0200, Leon Romanovsky wrote:
>
> > 2. IMHO, you don't need to include your selftest in the final patches,
> > because the whole series is going to be accepted and that code will be
> > added and deleted at the same time. Especially the printk part.
>
> I like seeing the tests. For a patch like this, which is so tedious to
> review, it makes the review a check of the tests, a check of the
> spatch and some spot checks of the transformations.
>
> Since it is a small number of lines, and it is much easier than
> sending the tests separately, it felt reasonable to leave them in the
> history.
>
> Will you be able to send the _be removal conversions you had done on
> top of this?

It will take some time to rebase, but I'll do it.

>
> I didn't show it, but all the private_data_len, etc. should go through
> some generic IBA_NUM_BYTES()-style accessor, like get/set, instead of
> more #defines.
>
> Jason


