* [PATCH V1 for-next 0/3] Enable LSO and Checksum for IPoIB over mlx5
@ 2016-02-17  7:48 Erez Shitrit
       [not found] ` <1455695339-10389-1-git-send-email-erezsh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 5+ messages in thread
From: Erez Shitrit @ 2016-02-17  7:48 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA
  Cc: ogerlitz-VPRAkNaXOzVWk0Htik3J/w,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Erez Shitrit

This patch set exposes the UD QP's HW capabilities for CSUM and LSO offloads.
When these offloads are enabled, IPoIB over mlx5 (when it uses a UD QP) can
benefit from them without additional changes in the IPoIB code.

The first patch in the series, "IB/mlx5: Define mlx5 interface bits for IPoIB
offloads", depends on the patch "[PATCH net V2 1/3] net/mlx5: Use offset
based reserved field names in the IFC header file" that was sent to the "net"
tree.
(see http://www.spinics.net/lists/netdev/msg362410.html)

Changes from v0:
	Fixed a variable type from u16 to u64.
	Restored a removed empty line.
	Synced a capability bit from the new FW.
	Changed "likely" to "unlikely".

Erez Shitrit (3):
  IB/mlx5: Define mlx5 interface bits for IPoIB offloads
  IB/mlx5: Implement UD QP offloads for IPoIB in the TX flow
  IB/mlx5: Add support for CSUM in RX flow

 drivers/infiniband/hw/mlx5/cq.c      |   3 +
 drivers/infiniband/hw/mlx5/main.c    |   5 ++
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  11 ++--
 drivers/infiniband/hw/mlx5/qp.c      | 123 ++++++++++++++++++++++++++++++++---
 include/linux/mlx5/mlx5_ifc.h        |   6 +-
 5 files changed, 133 insertions(+), 15 deletions(-)

-- 
1.7.11.3

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* [PATCH V1 for-next 1/3] IB/mlx5: Define mlx5 interface bits for IPoIB offloads
       [not found] ` <1455695339-10389-1-git-send-email-erezsh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2016-02-17  7:48   ` Erez Shitrit
       [not found]     ` <1455695339-10389-2-git-send-email-erezsh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2016-02-17  7:48   ` [PATCH V1 for-next 2/3] IB/mlx5: Implement UD QP offloads for IPoIB in the TX flow Erez Shitrit
  2016-02-17  7:48   ` [PATCH V1 for-next 3/3] IB/mlx5: Add support for CSUM in RX flow Erez Shitrit
  2 siblings, 1 reply; 5+ messages in thread
From: Erez Shitrit @ 2016-02-17  7:48 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA
  Cc: ogerlitz-VPRAkNaXOzVWk0Htik3J/w,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Erez Shitrit


The HW can supply several offloads for UD QPs. This patch adds offloads
for checksumming on both TX and RX, and for LSO on TX.

Two new bits are added in order to expose and enable these offloads:
1. An HCA capability bit that declares support for IPoIB basic offloads.
2. A QPC bit, used in the QP creation flow, that enables these
abilities in the QP.

Signed-off-by: Erez Shitrit <erezsh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 include/linux/mlx5/mlx5_ifc.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 51f1e54..5f61afe 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -736,7 +736,11 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8         cqe_version[0x4];
 
 	u8         compact_address_vector[0x1];
-	u8         reserved_at_200[0xe];
+	u8         reserved_at_200[0x1];
+	u8         ipoib_basic_offloads_deprecated[0x1];
+	u8         ipoib_enhanced_offloads[0x1];
+	u8         ipoib_basic_offloads[0x1];
+	u8         reserved_at_204[0xa];
 	u8         drain_sigerr[0x1];
 	u8         cmdif_checksum[0x2];
 	u8         sigerr_cqe[0x1];
@@ -1780,7 +1784,7 @@ struct mlx5_ifc_qpc_bits {
 	u8         log_sq_size[0x4];
 	u8         reserved_at_55[0x6];
 	u8         rlky[0x1];
-	u8         reserved_at_5c[0x4];
+	u8         ulp_stateless_offload_mode[0x4];
 
 	u8         counter_set_id[0x8];
 	u8         uar_page[0x18];
-- 
1.7.11.3



* [PATCH V1 for-next 2/3] IB/mlx5: Implement UD QP offloads for IPoIB in the TX flow
       [not found] ` <1455695339-10389-1-git-send-email-erezsh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2016-02-17  7:48   ` [PATCH V1 for-next 1/3] IB/mlx5: Define mlx5 interface bits for IPoIB offloads Erez Shitrit
@ 2016-02-17  7:48   ` Erez Shitrit
  2016-02-17  7:48   ` [PATCH V1 for-next 3/3] IB/mlx5: Add support for CSUM in RX flow Erez Shitrit
  2 siblings, 0 replies; 5+ messages in thread
From: Erez Shitrit @ 2016-02-17  7:48 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA
  Cc: ogerlitz-VPRAkNaXOzVWk0Htik3J/w,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Erez Shitrit


In order to support LSO and CSUM in the TX flow, the driver does the
following:
- Adds an LSO bit to enum mlx5_ib_qp_flags, indicating a QP that
  supports LSO offloads.
- Enables the offload when the QP is created, and handles the dedicated
  work request opcode (IB_WR_LSO) when it arrives.
- Calculates the size of the WQE according to the new WQE format that
  supports these offloads.
- Builds the new WQE format on transmit: sets the relevant fields and
  copies the needed data.

Signed-off-by: Erez Shitrit <erezsh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/hw/mlx5/main.c    |   5 ++
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  11 ++--
 drivers/infiniband/hw/mlx5/qp.c      | 122 +++++++++++++++++++++++++++++++++--
 3 files changed, 126 insertions(+), 12 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index a55bf05..5172658 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -501,6 +501,11 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
 	    (MLX5_CAP_ETH(dev->mdev, csum_cap)))
 			props->device_cap_flags |= IB_DEVICE_RAW_IP_CSUM;
 
+	if (MLX5_CAP_GEN(mdev, ipoib_basic_offloads)) {
+		props->device_cap_flags |= IB_DEVICE_UD_IP_CSUM;
+		props->device_cap_flags |= IB_DEVICE_UD_TSO;
+	}
+
 	props->vendor_part_id	   = mdev->pdev->device;
 	props->hw_ver		   = mdev->pdev->revision;
 
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index d475f83..2a9ed1d 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -295,11 +295,12 @@ struct mlx5_ib_cq_buf {
 };
 
 enum mlx5_ib_qp_flags {
-	MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK     = 1 << 0,
-	MLX5_IB_QP_SIGNATURE_HANDLING           = 1 << 1,
-	MLX5_IB_QP_CROSS_CHANNEL		= 1 << 2,
-	MLX5_IB_QP_MANAGED_SEND			= 1 << 3,
-	MLX5_IB_QP_MANAGED_RECV			= 1 << 4,
+	MLX5_IB_QP_LSO                          = IB_QP_CREATE_IPOIB_UD_LSO,
+	MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK     = IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK,
+	MLX5_IB_QP_CROSS_CHANNEL            = IB_QP_CREATE_CROSS_CHANNEL,
+	MLX5_IB_QP_MANAGED_SEND             = IB_QP_CREATE_MANAGED_SEND,
+	MLX5_IB_QP_MANAGED_RECV             = IB_QP_CREATE_MANAGED_RECV,
+	MLX5_IB_QP_SIGNATURE_HANDLING           = 1 << 5,
 };
 
 struct mlx5_umr_wr {
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 8fb9c27..ac41f9b 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -58,6 +58,7 @@ enum {
 
 static const u32 mlx5_ib_opcode[] = {
 	[IB_WR_SEND]				= MLX5_OPCODE_SEND,
+	[IB_WR_LSO]				= MLX5_OPCODE_LSO,
 	[IB_WR_SEND_WITH_IMM]			= MLX5_OPCODE_SEND_IMM,
 	[IB_WR_RDMA_WRITE]			= MLX5_OPCODE_RDMA_WRITE,
 	[IB_WR_RDMA_WRITE_WITH_IMM]		= MLX5_OPCODE_RDMA_WRITE_IMM,
@@ -72,6 +73,9 @@ static const u32 mlx5_ib_opcode[] = {
 	[MLX5_IB_WR_UMR]			= MLX5_OPCODE_UMR,
 };
 
+struct mlx5_wqe_eth_pad {
+	u8 rsvd0[16];
+};
 
 static int is_qp0(enum ib_qp_type qp_type)
 {
@@ -260,11 +264,11 @@ static int set_rq_size(struct mlx5_ib_dev *dev, struct ib_qp_cap *cap,
 	return 0;
 }
 
-static int sq_overhead(enum ib_qp_type qp_type)
+static int sq_overhead(struct ib_qp_init_attr *attr)
 {
 	int size = 0;
 
-	switch (qp_type) {
+	switch (attr->qp_type) {
 	case IB_QPT_XRC_INI:
 		size += sizeof(struct mlx5_wqe_xrc_seg);
 		/* fall through */
@@ -285,6 +289,10 @@ static int sq_overhead(enum ib_qp_type qp_type)
 		break;
 
 	case IB_QPT_UD:
+		if (attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO)
+			size += sizeof(struct mlx5_wqe_eth_pad) +
+				sizeof(struct mlx5_wqe_eth_seg);
+		/* fall through */
 	case IB_QPT_SMI:
 	case IB_QPT_GSI:
 		size += sizeof(struct mlx5_wqe_ctrl_seg) +
@@ -309,7 +317,7 @@ static int calc_send_wqe(struct ib_qp_init_attr *attr)
 	int inl_size = 0;
 	int size;
 
-	size = sq_overhead(attr->qp_type);
+	size = sq_overhead(attr);
 	if (size < 0)
 		return size;
 
@@ -346,8 +354,8 @@ static int calc_sq_size(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr,
 		return -EINVAL;
 	}
 
-	qp->max_inline_data = wqe_size - sq_overhead(attr->qp_type) -
-		sizeof(struct mlx5_wqe_inline_seg);
+	qp->max_inline_data = wqe_size - sq_overhead(attr) -
+			      sizeof(struct mlx5_wqe_inline_seg);
 	attr->cap.max_inline_data = qp->max_inline_data;
 
 	if (attr->create_flags & IB_QP_CREATE_SIGNATURE_EN)
@@ -768,6 +776,21 @@ static void destroy_qp_user(struct ib_pd *pd, struct mlx5_ib_qp *qp,
 	free_uuar(&context->uuari, qp->uuarn);
 }
 
+static bool verify_flags(struct mlx5_ib_dev *dev,
+			 struct ib_qp_init_attr *init_attr)
+{
+	struct mlx5_core_dev *mdev = dev->mdev;
+
+	if ((init_attr->create_flags & ~(IB_QP_CREATE_SIGNATURE_EN |
+					IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK |
+					IB_QP_CREATE_IPOIB_UD_LSO)) ||
+	    ((init_attr->qp_type == IB_QPT_UD) &&
+	     (init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO) &&
+	     !MLX5_CAP_GEN(mdev, ipoib_basic_offloads)))
+		return false;
+
+	return true;
+}
 static int create_kernel_qp(struct mlx5_ib_dev *dev,
 			    struct ib_qp_init_attr *init_attr,
 			    struct mlx5_ib_qp *qp,
@@ -781,7 +804,7 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev,
 	int err;
 
 	uuari = &dev->mdev->priv.uuari;
-	if (init_attr->create_flags & ~(IB_QP_CREATE_SIGNATURE_EN | IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK))
+	if (!verify_flags(dev, init_attr))
 		return -EINVAL;
 
 	if (init_attr->qp_type == MLX5_IB_QPT_REG_UMR)
@@ -1383,6 +1406,14 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		/* 0xffffff means we ask to work with cqe version 0 */
 		MLX5_SET(qpc, qpc, user_index, uidx);
 	}
+	/* we use IB_QP_CREATE_IPOIB_UD_LSO to indicate an ipoib qp */
+	if (init_attr->qp_type == IB_QPT_UD &&
+	    (init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO) &&
+	     MLX5_CAP_GEN(mdev, ipoib_basic_offloads)) {
+		qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
+		MLX5_SET(qpc, qpc, ulp_stateless_offload_mode, 1);
+		qp->flags |= MLX5_IB_QP_LSO;
+	}
 
 	if (init_attr->qp_type == IB_QPT_RAW_PACKET) {
 		qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd.sq_buf_addr;
@@ -2440,6 +2471,62 @@ static __always_inline void set_raddr_seg(struct mlx5_wqe_raddr_seg *rseg,
 	rseg->reserved = 0;
 }
 
+static void *set_eth_seg(struct mlx5_wqe_eth_seg *eseg,
+			 struct ib_send_wr *wr, void *qend,
+			 struct mlx5_ib_qp *qp, int *size)
+{
+	void *seg = eseg;
+
+	memset(eseg, 0, sizeof(struct mlx5_wqe_eth_seg));
+
+	if (wr->send_flags & IB_SEND_IP_CSUM)
+		eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM |
+				 MLX5_ETH_WQE_L4_CSUM;
+
+	seg += sizeof(struct mlx5_wqe_eth_seg);
+	*size += sizeof(struct mlx5_wqe_eth_seg) / 16;
+
+	if (wr->opcode == IB_WR_LSO) {
+		struct ib_ud_wr *ud_wr = container_of(wr, struct ib_ud_wr, wr);
+		int size_of_inl_hdr_start = sizeof(eseg->inline_hdr_start);
+		u64 left, leftlen, copysz;
+		void *pdata = ud_wr->header;
+
+		left = ud_wr->hlen;
+		eseg->mss = cpu_to_be16(ud_wr->mss);
+		eseg->inline_hdr_sz = cpu_to_be16(left);
+
+		/*
+		 * Check whether there is room up to the end of the queue:
+		 * if so, copy everything in one shot; otherwise copy up to
+		 * the end of the queue, wrap around, and then copy the rest.
+		 */
+		leftlen = qend - (void *)eseg->inline_hdr_start;
+		copysz = min_t(u64, leftlen, left);
+
+		memcpy(seg - size_of_inl_hdr_start, pdata, copysz);
+
+		if (unlikely(copysz < left)) /* the last wqe in the queue */
+			seg = mlx5_get_send_wqe(qp, 0);
+		else if (copysz > size_of_inl_hdr_start)
+			seg += ALIGN(copysz - size_of_inl_hdr_start, 16);
+
+		if (likely(copysz > size_of_inl_hdr_start))
+			*size += ALIGN(copysz - size_of_inl_hdr_start, 16) / 16;
+
+		left -= copysz;
+		pdata += copysz;
+
+		if (unlikely(left)) {
+			memcpy(seg, pdata, left);
+			seg += ALIGN(left, 16);
+			*size += ALIGN(left, 16) / 16;
+		}
+	}
+
+	return seg;
+}
+
 static void set_datagram_seg(struct mlx5_wqe_datagram_seg *dseg,
 			     struct ib_send_wr *wr)
 {
@@ -3371,7 +3458,6 @@ int mlx5_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 			}
 			break;
 
-		case IB_QPT_UD:
 		case IB_QPT_SMI:
 		case IB_QPT_GSI:
 			set_datagram_seg(seg, wr);
@@ -3380,7 +3466,29 @@ int mlx5_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 			if (unlikely((seg == qend)))
 				seg = mlx5_get_send_wqe(qp, 0);
 			break;
+		case IB_QPT_UD:
+			set_datagram_seg(seg, wr);
+			seg += sizeof(struct mlx5_wqe_datagram_seg);
+			size += sizeof(struct mlx5_wqe_datagram_seg) / 16;
+
+			if (unlikely((seg == qend)))
+				seg = mlx5_get_send_wqe(qp, 0);
+
+			/* handle qp that supports ud offload */
+			if (qp->flags & MLX5_IB_QP_LSO) {
+				struct mlx5_wqe_eth_pad *pad;
 
+				pad = seg;
+				memset(pad, 0, sizeof(struct mlx5_wqe_eth_pad));
+				seg += sizeof(struct mlx5_wqe_eth_pad);
+				size += sizeof(struct mlx5_wqe_eth_pad) / 16;
+
+				seg = set_eth_seg(seg, wr, qend, qp, &size);
+
+				if (unlikely((seg == qend)))
+					seg = mlx5_get_send_wqe(qp, 0);
+			}
+			break;
 		case MLX5_IB_QPT_REG_UMR:
 			if (wr->opcode != MLX5_IB_WR_UMR) {
 				err = -EINVAL;
-- 
1.7.11.3



* [PATCH V1 for-next 3/3] IB/mlx5: Add support for CSUM in RX flow
       [not found] ` <1455695339-10389-1-git-send-email-erezsh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2016-02-17  7:48   ` [PATCH V1 for-next 1/3] IB/mlx5: Define mlx5 interface bits for IPoIB offloads Erez Shitrit
  2016-02-17  7:48   ` [PATCH V1 for-next 2/3] IB/mlx5: Implement UD QP offloads for IPoIB in the TX flow Erez Shitrit
@ 2016-02-17  7:48   ` Erez Shitrit
  2 siblings, 0 replies; 5+ messages in thread
From: Erez Shitrit @ 2016-02-17  7:48 UTC (permalink / raw)
  To: dledford-H+wXaHxf7aLQT0dZR+AlfA
  Cc: ogerlitz-VPRAkNaXOzVWk0Htik3J/w,
	linux-rdma-u79uwXL29TY76Z2rM5mHXA, Erez Shitrit


The driver checks the checksum status reported by the HW when a
completion arrives and marks it in the wc->wc_flags field for the ULP
drivers. This is done for completions of type IB_WC_RECV only.

Signed-off-by: Erez Shitrit <erezsh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/hw/mlx5/cq.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index 7ddc790..b0bdc31 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -207,7 +207,10 @@ static void handle_responder(struct ib_wc *wc, struct mlx5_cqe64 *cqe,
 		break;
 	case MLX5_CQE_RESP_SEND:
 		wc->opcode   = IB_WC_RECV;
-		wc->wc_flags = 0;
+		wc->wc_flags = 0;
+		if (likely((cqe->hds_ip_ext & CQE_L3_OK) &&
+			   (cqe->hds_ip_ext & CQE_L4_OK)))
+			wc->wc_flags = IB_WC_IP_CSUM_OK;
 		break;
 	case MLX5_CQE_RESP_SEND_IMM:
 		wc->opcode	= IB_WC_RECV;
-- 
1.7.11.3



* Re: [PATCH V1 for-next 1/3] IB/mlx5: Define mlx5 interface bits for IPoIB offloads
       [not found]     ` <1455695339-10389-2-git-send-email-erezsh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2016-02-17  8:47       ` Or Gerlitz
  0 siblings, 0 replies; 5+ messages in thread
From: Or Gerlitz @ 2016-02-17  8:47 UTC (permalink / raw)
  To: Erez Shitrit
  Cc: dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 2/17/2016 9:48 AM, Erez Shitrit wrote:
> The HW can supply several offloads for UD QPs. This patch adds offloads
> for checksumming on both TX and RX, and for LSO on TX.

All the patches have 1-2 blank lines at the beginning of the change-log;
please remove them.
>
> Two new bits are added in order to expose and enable these offloads:
> 1. An HCA capability bit that declares support for IPoIB basic offloads.
> 2. A QPC bit, used in the QP creation flow, that enables these
> abilities in the QP.
>
> Signed-off-by: Erez Shitrit <erezsh-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> ---
>   include/linux/mlx5/mlx5_ifc.h | 8 ++++++--
>   1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
> index 51f1e54..5f61afe 100644
> --- a/include/linux/mlx5/mlx5_ifc.h
> +++ b/include/linux/mlx5/mlx5_ifc.h
> @@ -736,7 +736,11 @@ struct mlx5_ifc_cmd_hca_cap_bits {
>   	u8         cqe_version[0x4];
>   
>   	u8         compact_address_vector[0x1];
> -	u8         reserved_at_200[0xe];
> +	u8         reserved_at_200[0x1];
> +	u8         ipoib_basic_offloads_deprecated[0x1];
> +	u8         ipoib_enhanced_offloads[0x1];
> +	u8         ipoib_basic_offloads[0x1];

You don't use two of these new bits anywhere in the code


> +	u8         reserved_at_204[0xa];
>   	u8         drain_sigerr[0x1];
>   	u8         cmdif_checksum[0x2];
>   	u8         sigerr_cqe[0x1];
> @@ -1780,7 +1784,7 @@ struct mlx5_ifc_qpc_bits {
>   	u8         log_sq_size[0x4];
>   	u8         reserved_at_55[0x6];
>   	u8         rlky[0x1];
> -	u8         reserved_at_5c[0x4];
> +	u8         ulp_stateless_offload_mode[0x4];
>   
>   	u8         counter_set_id[0x8];
>   	u8         uar_page[0x18];



