* [PATCH rdma-next v1 0/4] Scatter to CQE
From: Leon Romanovsky @ 2018-10-09  9:05 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev

From: Leon Romanovsky <leonro@mellanox.com>

Changelog v0->v1:
 * Changed patch #3 to use the check_mask function from rdma-core instead of a define.

--------------------------------------------------------------------------
From Yonatan,

Scatter to CQE is a HW offload feature that saves PCI writes by
scattering the payload to the CQE.

The feature depends on the CQE size: if the CQE size is 64B, it works
for payloads smaller than 32 bytes; if the CQE size is 128B, it works
for payloads smaller than 64 bytes.
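
A minimal sketch of that size rule (the helper name is ours, not part
of the series): the payload has to fit in half of the CQE.

	static inline int scat_to_cqe_max_payload(int cqe_sz)
	{
		return cqe_sz / 2;	/* 64B CQE -> 32B, 128B CQE -> 64B */
	}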

The feature works for the responder and the requester:
1. For the responder, if the payload is small enough as described
above, the data becomes part of the CQE, saving the extra PCI
transaction to the receive buffers.
2. For the requester, it can be used to deliver the RDMA_READ or
RDMA_ATOMIC response in the CQE. This part is already supported
upstream.

As part of this series, we add support for the DC transport type and
the ability to force-enable the feature in the requester when the SQ
is not configured to signal all WRs.
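
As a sketch of how a provider library might opt in through the mlx5
uapi flags this series touches (the wrapper below is hypothetical;
rdma-core's real plumbing differs):

	#include <rdma/mlx5-abi.h>

	static void request_scatter_to_cqe(struct mlx5_ib_create_qp *ucmd,
					   int sig_all)
	{
		ucmd->flags |= MLX5_QP_FLAG_SCATTER_CQE;
		if (!sig_all)	/* force enable without IB_SIGNAL_ALL_WR (patch 4) */
			ucmd->flags |= MLX5_QP_FLAG_ALLOW_SCATTER_CQE;
	}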

Thanks

Yonatan Cohen (4):
  net/mlx5: Expose DC scatter to CQE capability bit
  IB/mlx5: Support scatter to CQE for DC transport type
  IB/mlx5: Verify that driver supports user flags
  IB/mlx5: Allow scatter to CQE without global signaled WRs

 drivers/infiniband/hw/mlx5/cq.c      |  2 +-
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  2 +-
 drivers/infiniband/hw/mlx5/qp.c      | 93 ++++++++++++++++++++++++++++--------
 include/linux/mlx5/mlx5_ifc.h        |  3 +-
 include/uapi/rdma/mlx5-abi.h         |  1 +
 5 files changed, 79 insertions(+), 22 deletions(-)


* [PATCH mlx5-next v1 1/4] net/mlx5: Expose DC scatter to CQE capability bit
From: Leon Romanovsky @ 2018-10-09  9:05 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev

From: Yonatan Cohen <yonatanc@mellanox.com>

The dc_req_scat_data_cqe capability bit reports whether requester
scatter to CQE is available for 64-byte CQEs over the DC transport
type.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
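Not part of the patch, just an illustration: a consumer would test the
new bit through the existing MLX5_CAP_GEN() accessor, as patch 2 does.
A minimal sketch, with an illustrative wrapper name:

	static bool dci_req_scat_cqe_supported(struct mlx5_core_dev *mdev)
	{
		/* device can scatter requester data to 64B CQEs on DC */
		return MLX5_CAP_GEN(mdev, dc_req_scat_data_cqe);
	}
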
 include/linux/mlx5/mlx5_ifc.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 68f4d5f9d929..0f460fb22c31 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1005,7 +1005,8 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8         umr_modify_atomic_disabled[0x1];
 	u8         umr_indirect_mkey_disabled[0x1];
 	u8         umr_fence[0x2];
-	u8         reserved_at_20c[0x3];
+	u8         dc_req_scat_data_cqe[0x1];
+	u8         reserved_at_20d[0x2];
 	u8         drain_sigerr[0x1];
 	u8         cmdif_checksum[0x2];
 	u8         sigerr_cqe[0x1];
-- 
2.14.4


* [PATCH rdma-next v1 2/4] IB/mlx5: Support scatter to CQE for DC transport type
From: Leon Romanovsky @ 2018-10-09  9:05 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev

From: Yonatan Cohen <yonatanc@mellanox.com>

Scatter to CQE is a HW offload that saves PCI writes by scattering the
payload to the CQE.
This patch extends the existing functionality to support the DC
transport type.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
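Not part of the patch, just an illustration: the requester-side
selection in the diff below reduces to the following sketch; the enum
and macro names are from the patch, the standalone helper is ours:

	/* Returns the cs_req value; 0 leaves scatter to CQE disabled. */
	static u32 req_scat_mode(int scqe_sz, bool is_dci, bool dc_cap)
	{
		if (scqe_sz == 128)
			return MLX5_REQ_SCAT_DATA64_CQE;
		if (!is_dci || dc_cap)
			return MLX5_REQ_SCAT_DATA32_CQE;
		return 0;	/* 64B CQE on DCI without dc_req_scat_data_cqe */
	}
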
 drivers/infiniband/hw/mlx5/cq.c      |  2 +-
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  2 +-
 drivers/infiniband/hw/mlx5/qp.c      | 71 ++++++++++++++++++++++++++----------
 3 files changed, 54 insertions(+), 21 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index dae30b6478bf..a41519dc8d3a 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -1460,7 +1460,7 @@ int mlx5_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
 	return err;
 }
 
-int mlx5_ib_get_cqe_size(struct mlx5_ib_dev *dev, struct ib_cq *ibcq)
+int mlx5_ib_get_cqe_size(struct ib_cq *ibcq)
 {
 	struct mlx5_ib_cq *cq;
 
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index e5ec3fdaa4d5..f52dfed1395d 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1127,7 +1127,7 @@ void __mlx5_ib_populate_pas(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 void mlx5_ib_populate_pas(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 			  int page_shift, __be64 *pas, int access_flags);
 void mlx5_ib_copy_pas(u64 *old, u64 *new, int step, int num);
-int mlx5_ib_get_cqe_size(struct mlx5_ib_dev *dev, struct ib_cq *ibcq);
+int mlx5_ib_get_cqe_size(struct ib_cq *ibcq);
 int mlx5_mr_cache_init(struct mlx5_ib_dev *dev);
 int mlx5_mr_cache_cleanup(struct mlx5_ib_dev *dev);
 
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index fa8e5dc65cb4..bae48bdf281c 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1053,7 +1053,8 @@ static u32 get_rx_type(struct mlx5_ib_qp *qp, struct ib_qp_init_attr *attr)
 
 static int is_connected(enum ib_qp_type qp_type)
 {
-	if (qp_type == IB_QPT_RC || qp_type == IB_QPT_UC)
+	if (qp_type == IB_QPT_RC || qp_type == IB_QPT_UC ||
+	    qp_type == MLX5_IB_QPT_DCI)
 		return 1;
 
 	return 0;
@@ -1684,6 +1685,49 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	return err;
 }
 
+static void configure_responder_scat_cqe(struct ib_qp_init_attr *init_attr,
+					 void *qpc)
+{
+	int rcqe_sz;
+
+	if (init_attr->qp_type == MLX5_IB_QPT_DCI)
+		return;
+
+	rcqe_sz = mlx5_ib_get_cqe_size(init_attr->recv_cq);
+
+	if (rcqe_sz == 128) {
+		MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
+		return;
+	}
+
+	if (init_attr->qp_type != MLX5_IB_QPT_DCT)
+		MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA32_CQE);
+}
+
+static void configure_requester_scat_cqe(struct mlx5_ib_dev *dev,
+					 struct ib_qp_init_attr *init_attr,
+					 void *qpc)
+{
+	enum ib_qp_type qpt = init_attr->qp_type;
+	int scqe_sz;
+
+	if (qpt == IB_QPT_UC || qpt == IB_QPT_UD)
+		return;
+
+	if (init_attr->sq_sig_type != IB_SIGNAL_ALL_WR)
+		return;
+
+	scqe_sz = mlx5_ib_get_cqe_size(init_attr->send_cq);
+	if (scqe_sz == 128) {
+		MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA64_CQE);
+		return;
+	}
+
+	if (init_attr->qp_type != MLX5_IB_QPT_DCI ||
+	    MLX5_CAP_GEN(dev->mdev, dc_req_scat_data_cqe))
+		MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
+}
+
 static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			    struct ib_qp_init_attr *init_attr,
 			    struct ib_udata *udata, struct mlx5_ib_qp *qp)
@@ -1787,7 +1831,8 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			return err;
 
 		qp->wq_sig = !!(ucmd.flags & MLX5_QP_FLAG_SIGNATURE);
-		qp->scat_cqe = !!(ucmd.flags & MLX5_QP_FLAG_SCATTER_CQE);
+		if (MLX5_CAP_GEN(dev->mdev, sctr_data_cqe))
+			qp->scat_cqe = !!(ucmd.flags & MLX5_QP_FLAG_SCATTER_CQE);
 		if (ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
 			if (init_attr->qp_type != IB_QPT_RAW_PACKET ||
 			    !tunnel_offload_supported(mdev)) {
@@ -1911,23 +1956,8 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		MLX5_SET(qpc, qpc, cd_slave_receive, 1);
 
 	if (qp->scat_cqe && is_connected(init_attr->qp_type)) {
-		int rcqe_sz;
-		int scqe_sz;
-
-		rcqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->recv_cq);
-		scqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->send_cq);
-
-		if (rcqe_sz == 128)
-			MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
-		else
-			MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA32_CQE);
-
-		if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) {
-			if (scqe_sz == 128)
-				MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA64_CQE);
-			else
-				MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
-		}
+		configure_responder_scat_cqe(init_attr, qpc);
+		configure_requester_scat_cqe(dev, init_attr, qpc);
 	}
 
 	if (qp->rq.wqe_cnt) {
@@ -2302,6 +2332,9 @@ static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd,
 	MLX5_SET64(dctc, dctc, dc_access_key, ucmd->access_key);
 	MLX5_SET(dctc, dctc, user_index, uidx);
 
+	if (ucmd->flags & MLX5_QP_FLAG_SCATTER_CQE)
+		configure_responder_scat_cqe(attr, dctc);
+
 	qp->state = IB_QPS_RESET;
 
 	return &qp->ibqp;
-- 
2.14.4


* [PATCH rdma-next v1 3/4] IB/mlx5: Verify that driver supports user flags
From: Leon Romanovsky @ 2018-10-09  9:05 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev

From: Yonatan Cohen <yonatanc@mellanox.com>

Flags sent down from user space might not be supported by the
running driver.
This can lead to unwanted bugs.
To solve this, add a helper that tests for unsupported flags and
rejects QP creation when any are set.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
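Not part of the patch, just an illustration: the check accepts a flag
set only when every bit falls inside the supported mask.  A small
stand-alone demo with illustrative values:

	#include <assert.h>
	#include <stdbool.h>
	#include <stdint.h>

	static inline bool check_flags_mask(uint64_t input, uint64_t supported)
	{
		return (input & ~supported) == 0;
	}

	int main(void)
	{
		const uint64_t supported = 0x3f;	/* six known flag bits */

		assert(check_flags_mask(0x01, supported));	  /* known flag: OK */
		assert(!check_flags_mask(1ULL << 8, supported));  /* unknown: -EINVAL */
		return 0;
	}
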
 drivers/infiniband/hw/mlx5/qp.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index bae48bdf281c..73a53a8da9b6 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1728,6 +1728,11 @@ static void configure_requester_scat_cqe(struct mlx5_ib_dev *dev,
 		MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
 }
 
+static inline bool check_flags_mask(uint64_t input, uint64_t supported)
+{
+	return (input & ~supported) == 0;
+}
+
 static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			    struct ib_qp_init_attr *init_attr,
 			    struct ib_udata *udata, struct mlx5_ib_qp *qp)
@@ -1825,6 +1830,15 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			return -EFAULT;
 		}
 
+		if (!check_flags_mask(ucmd.flags,
+				      MLX5_QP_FLAG_SIGNATURE |
+					      MLX5_QP_FLAG_SCATTER_CQE |
+					      MLX5_QP_FLAG_TUNNEL_OFFLOADS |
+					      MLX5_QP_FLAG_BFREG_INDEX |
+					      MLX5_QP_FLAG_TYPE_DCT |
+					      MLX5_QP_FLAG_TYPE_DCI))
+			return -EINVAL;
+
 		err = get_qp_user_index(to_mucontext(pd->uobject->context),
 					&ucmd, udata->inlen, &uidx);
 		if (err)
-- 
2.14.4


* [PATCH rdma-next v1 4/4] IB/mlx5: Allow scatter to CQE without global signaled WRs
From: Leon Romanovsky @ 2018-10-09  9:05 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev

From: Yonatan Cohen <yonatanc@mellanox.com>

Requester scatter to CQE is restricted to QPs configured to signal
all WRs.

This patch adds the ability to force-enable scatter to CQE in the
requester without sig_all, for users who do not want all WRs
signaled but only those whose data is scattered to the CQE.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
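Not part of the patch, just an illustration: from user space the
opt-in would travel via the new MLX5_QP_FLAG_ALLOW_SCATTER_CQE bit.
A sketch assuming the matching rdma-core support, where
MLX5DV_QP_CREATE_ALLOW_SCATTER_TO_CQE is the direct-verbs counterpart
(older rdma-core will lack it):

	#include <infiniband/mlx5dv.h>

	static struct ibv_qp *create_scat_qp(struct ibv_context *ctx,
					     struct ibv_qp_init_attr_ex *attr)
	{
		struct mlx5dv_qp_init_attr dv = {
			.comp_mask    = MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS,
			.create_flags = MLX5DV_QP_CREATE_ALLOW_SCATTER_TO_CQE,
		};

		attr->sq_sig_all = 0;	/* WRs not globally signaled... */
		return mlx5dv_create_qp(ctx, attr, &dv);  /* ...scatter stays on */
	}
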
 drivers/infiniband/hw/mlx5/qp.c | 14 +++++++++++---
 include/uapi/rdma/mlx5-abi.h    |  1 +
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 73a53a8da9b6..1a6f34ea2f5d 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1706,15 +1706,20 @@ static void configure_responder_scat_cqe(struct ib_qp_init_attr *init_attr,
 
 static void configure_requester_scat_cqe(struct mlx5_ib_dev *dev,
 					 struct ib_qp_init_attr *init_attr,
+					 struct mlx5_ib_create_qp *ucmd,
 					 void *qpc)
 {
 	enum ib_qp_type qpt = init_attr->qp_type;
 	int scqe_sz;
+	bool allow_scat_cqe = 0;
 
 	if (qpt == IB_QPT_UC || qpt == IB_QPT_UD)
 		return;
 
-	if (init_attr->sq_sig_type != IB_SIGNAL_ALL_WR)
+	if (ucmd)
+		allow_scat_cqe = ucmd->flags & MLX5_QP_FLAG_ALLOW_SCATTER_CQE;
+
+	if (!allow_scat_cqe && init_attr->sq_sig_type != IB_SIGNAL_ALL_WR)
 		return;
 
 	scqe_sz = mlx5_ib_get_cqe_size(init_attr->send_cq);
@@ -1836,7 +1841,8 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 					      MLX5_QP_FLAG_TUNNEL_OFFLOADS |
 					      MLX5_QP_FLAG_BFREG_INDEX |
 					      MLX5_QP_FLAG_TYPE_DCT |
-					      MLX5_QP_FLAG_TYPE_DCI))
+					      MLX5_QP_FLAG_TYPE_DCI |
+					      MLX5_QP_FLAG_ALLOW_SCATTER_CQE))
 			return -EINVAL;
 
 		err = get_qp_user_index(to_mucontext(pd->uobject->context),
@@ -1971,7 +1977,9 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 
 	if (qp->scat_cqe && is_connected(init_attr->qp_type)) {
 		configure_responder_scat_cqe(init_attr, qpc);
-		configure_requester_scat_cqe(dev, init_attr, qpc);
+		configure_requester_scat_cqe(dev, init_attr,
+					     (pd && pd->uobject) ? &ucmd : NULL,
+					     qpc);
 	}
 
 	if (qp->rq.wqe_cnt) {
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index 6056625237cf..8fa9f90e2bb1 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -47,6 +47,7 @@ enum {
 	MLX5_QP_FLAG_TYPE_DCI		= 1 << 5,
 	MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC = 1 << 6,
 	MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC = 1 << 7,
+	MLX5_QP_FLAG_ALLOW_SCATTER_CQE	= 1 << 8,
 };
 
 enum {
-- 
2.14.4


* Re: [PATCH rdma-next v1 0/4] Scatter to CQE
From: Doug Ledford @ 2018-10-16 18:39 UTC
  To: Leon Romanovsky, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev


On Tue, 2018-10-09 at 12:05 +0300, Leon Romanovsky wrote:
> [...]


Hi Leon,

This series looks fine.  Let me know when the net/mlx5 portion has been
committed.

-- 
Doug Ledford <dledford@redhat.com>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD



* Re: [PATCH rdma-next v1 0/4] Scatter to CQE
From: Leon Romanovsky @ 2018-10-16 19:00 UTC
  To: Doug Ledford
  Cc: Jason Gunthorpe, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev


On Tue, Oct 16, 2018 at 02:39:01PM -0400, Doug Ledford wrote:
> On Tue, 2018-10-09 at 12:05 +0300, Leon Romanovsky wrote:
> > [...]
>
> Hi Leon,
>
> This series looks fine.  Let me know when the net/mlx5 portion has been
> committed.

Thanks Doug,
I pushed the first patch to mlx5-next:
94a04d1d3d36 ("net/mlx5: Expose DC scatter to CQE capability bit")



* Re: [PATCH rdma-next v1 0/4] Scatter to CQE
From: Doug Ledford @ 2018-10-17 15:27 UTC
  To: Leon Romanovsky
  Cc: Jason Gunthorpe, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev


On Tue, 2018-10-16 at 22:00 +0300, Leon Romanovsky wrote:
> On Tue, Oct 16, 2018 at 02:39:01PM -0400, Doug Ledford wrote:
> > On Tue, 2018-10-09 at 12:05 +0300, Leon Romanovsky wrote:
> > > [...]
> > 
> > Hi Leon,
> > 
> > This series looks fine.  Let me know when the net/mlx5 portion has been
> > committed.
> 
> Thanks Doug,
> I pushed the first patch to mlx5-next:
> 94a04d1d3d36 ("net/mlx5: Expose DC scatter to CQE capability bit")

Thanks Leon, mlx5-next has been merged in, and the remainder of the
series applied to for-next.

-- 
Doug Ledford <dledford@redhat.com>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD


