* [PATCH rdma-next 0/4] Scatter to CQE
From: Leon Romanovsky @ 2018-10-07  9:03 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev

From: Leon Romanovsky <leonro@mellanox.com>

From Yonatan,

Scatter to CQE is a HW offload feature that saves PCI writes by
scattering the payload into the CQE itself.

The feature depends on the CQE size: if the CQE size is 64B, it works
for payloads smaller than 32 bytes; if the CQE size is 128B, it works
for payloads smaller than 64 bytes.
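
As an illustration, the selection the driver makes from the CQE size
(a sketch mirroring the cs_res logic added in patch 2, using the mlx5
driver's constant names; not the literal driver code):

	/* Pick the responder scatter mode from the CQE size:
	 * a 128B CQE can absorb payloads under 64 bytes, a 64B CQE
	 * payloads under 32 bytes. */
	static u32 select_rx_scatter_mode(int cqe_sz)
	{
		if (cqe_sz == 128)
			return MLX5_RES_SCAT_DATA64_CQE;
		return MLX5_RES_SCAT_DATA32_CQE;
	}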

The feature works for both the responder and the requester:
1. For the responder, if the payload is small enough as described
above, the data is delivered as part of the CQE, saving another PCI
transaction to the receive buffers.
2. For the requester, this can be used to get the RDMA_READ and
RDMA_ATOMIC responses in the CQE. This part is already supported
upstream.

As part of this series, we add support for the DC transport type and
the ability to force-enable the feature in the requester when the SQ
is not configured to signal all WRs.
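
For context, a userspace consumer would opt in through the mlx5
direct-verbs provider. A minimal sketch follows; the MLX5DV names are
taken from the matching rdma-core change and are assumptions here, not
part of this kernel series:

	#include <infiniband/mlx5dv.h>

	/* Create a DCI QP that force-enables requester scatter to CQE
	 * even though sig_all is off. */
	static struct ibv_qp *create_dci_scatter(struct ibv_context *ctx,
						 struct ibv_qp_init_attr_ex *attr_ex)
	{
		struct mlx5dv_qp_init_attr dv = {
			.comp_mask = MLX5DV_QP_INIT_ATTR_MASK_DC |
				     MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS,
			.create_flags = MLX5DV_QP_CREATE_ALLOW_SCATTER_TO_CQE,
			.dc_init_attr = { .dc_type = MLX5DV_DCTYPE_DCI },
		};

		attr_ex->qp_type = IBV_QPT_DRIVER;
		return mlx5dv_create_qp(ctx, attr_ex, &dv);
	}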

Thanks

Yonatan Cohen (4):
  net/mlx5: Expose DC scatter to CQE capability bit
  IB/mlx5: Support scatter to CQE for DC transport type
  IB/mlx5: Verify that driver supports user flags
  IB/mlx5: Allow scatter to CQE without global signaled WRs

 drivers/infiniband/hw/mlx5/cq.c      |  2 +-
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  2 +-
 drivers/infiniband/hw/mlx5/qp.c      | 91 ++++++++++++++++++++++++++++--------
 include/linux/mlx5/mlx5_ifc.h        |  3 +-
 include/uapi/rdma/mlx5-abi.h         |  1 +
 5 files changed, 77 insertions(+), 22 deletions(-)


* [PATCH mlx5-next 1/4] net/mlx5: Expose DC scatter to CQE capability bit
From: Leon Romanovsky @ 2018-10-07  9:03 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev

From: Yonatan Cohen <yonatanc@mellanox.com>

The dc_req_scat_data_cqe capability bit determines whether requester
scatter to CQE is available for 64-byte CQEs over the DC transport
type.
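
With the bit exposed, the driver can gate the DCI requester path on it,
roughly as patch 2 of this series does:

	if (MLX5_CAP_GEN(dev->mdev, dc_req_scat_data_cqe))
		MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);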

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 include/linux/mlx5/mlx5_ifc.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 68f4d5f9d929..0f460fb22c31 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1005,7 +1005,8 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8         umr_modify_atomic_disabled[0x1];
 	u8         umr_indirect_mkey_disabled[0x1];
 	u8         umr_fence[0x2];
-	u8         reserved_at_20c[0x3];
+	u8         dc_req_scat_data_cqe[0x1];
+	u8         reserved_at_20d[0x2];
 	u8         drain_sigerr[0x1];
 	u8         cmdif_checksum[0x2];
 	u8         sigerr_cqe[0x1];
-- 
2.14.4


* [PATCH rdma-next 2/4] IB/mlx5: Support scatter to CQE for DC transport type
From: Leon Romanovsky @ 2018-10-07  9:03 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev

From: Yonatan Cohen <yonatanc@mellanox.com>

Scatter to CQE is a HW offload that saves PCI writes by scattering the
payload into the CQE. This patch extends the existing functionality to
support the DC transport type.
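
On the userspace side, the DCT (DC responder) that this enables would
be created roughly as below; a hedged sketch, with the MLX5DV names
taken from rdma-core rather than from this patch:

	#include <infiniband/mlx5dv.h>

	/* A DCT is the DC responder; with a 128B recv CQ, payloads
	 * under 64 bytes can now be scattered into the CQE. */
	static struct ibv_qp *create_dct(struct ibv_context *ctx,
					 struct ibv_qp_init_attr_ex *attr_ex,
					 uint64_t dc_access_key)
	{
		struct mlx5dv_qp_init_attr dv = {
			.comp_mask = MLX5DV_QP_INIT_ATTR_MASK_DC,
			.dc_init_attr = {
				.dc_type = MLX5DV_DCTYPE_DCT,
				.dct_access_key = dc_access_key,
			},
		};

		attr_ex->qp_type = IBV_QPT_DRIVER;
		return mlx5dv_create_qp(ctx, attr_ex, &dv);
	}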

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/cq.c      |  2 +-
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  2 +-
 drivers/infiniband/hw/mlx5/qp.c      | 71 ++++++++++++++++++++++++++----------
 3 files changed, 54 insertions(+), 21 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index dae30b6478bf..a41519dc8d3a 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -1460,7 +1460,7 @@ int mlx5_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
 	return err;
 }
 
-int mlx5_ib_get_cqe_size(struct mlx5_ib_dev *dev, struct ib_cq *ibcq)
+int mlx5_ib_get_cqe_size(struct ib_cq *ibcq)
 {
 	struct mlx5_ib_cq *cq;
 
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index e5ec3fdaa4d5..f52dfed1395d 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1127,7 +1127,7 @@ void __mlx5_ib_populate_pas(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 void mlx5_ib_populate_pas(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 			  int page_shift, __be64 *pas, int access_flags);
 void mlx5_ib_copy_pas(u64 *old, u64 *new, int step, int num);
-int mlx5_ib_get_cqe_size(struct mlx5_ib_dev *dev, struct ib_cq *ibcq);
+int mlx5_ib_get_cqe_size(struct ib_cq *ibcq);
 int mlx5_mr_cache_init(struct mlx5_ib_dev *dev);
 int mlx5_mr_cache_cleanup(struct mlx5_ib_dev *dev);
 
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index fa8e5dc65cb4..bae48bdf281c 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1053,7 +1053,8 @@ static u32 get_rx_type(struct mlx5_ib_qp *qp, struct ib_qp_init_attr *attr)
 
 static int is_connected(enum ib_qp_type qp_type)
 {
-	if (qp_type == IB_QPT_RC || qp_type == IB_QPT_UC)
+	if (qp_type == IB_QPT_RC || qp_type == IB_QPT_UC ||
+	    qp_type == MLX5_IB_QPT_DCI)
 		return 1;
 
 	return 0;
@@ -1684,6 +1685,49 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
 	return err;
 }
 
+static void configure_responder_scat_cqe(struct ib_qp_init_attr *init_attr,
+					 void *qpc)
+{
+	int rcqe_sz;
+
+	if (init_attr->qp_type == MLX5_IB_QPT_DCI)
+		return;
+
+	rcqe_sz = mlx5_ib_get_cqe_size(init_attr->recv_cq);
+
+	if (rcqe_sz == 128) {
+		MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
+		return;
+	}
+
+	if (init_attr->qp_type != MLX5_IB_QPT_DCT)
+		MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA32_CQE);
+}
+
+static void configure_requester_scat_cqe(struct mlx5_ib_dev *dev,
+					 struct ib_qp_init_attr *init_attr,
+					 void *qpc)
+{
+	enum ib_qp_type qpt = init_attr->qp_type;
+	int scqe_sz;
+
+	if (qpt == IB_QPT_UC || qpt == IB_QPT_UD)
+		return;
+
+	if (init_attr->sq_sig_type != IB_SIGNAL_ALL_WR)
+		return;
+
+	scqe_sz = mlx5_ib_get_cqe_size(init_attr->send_cq);
+	if (scqe_sz == 128) {
+		MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA64_CQE);
+		return;
+	}
+
+	if (init_attr->qp_type != MLX5_IB_QPT_DCI ||
+	    MLX5_CAP_GEN(dev->mdev, dc_req_scat_data_cqe))
+		MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
+}
+
 static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			    struct ib_qp_init_attr *init_attr,
 			    struct ib_udata *udata, struct mlx5_ib_qp *qp)
@@ -1787,7 +1831,8 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			return err;
 
 		qp->wq_sig = !!(ucmd.flags & MLX5_QP_FLAG_SIGNATURE);
-		qp->scat_cqe = !!(ucmd.flags & MLX5_QP_FLAG_SCATTER_CQE);
+		if (MLX5_CAP_GEN(dev->mdev, sctr_data_cqe))
+			qp->scat_cqe = !!(ucmd.flags & MLX5_QP_FLAG_SCATTER_CQE);
 		if (ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
 			if (init_attr->qp_type != IB_QPT_RAW_PACKET ||
 			    !tunnel_offload_supported(mdev)) {
@@ -1911,23 +1956,8 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 		MLX5_SET(qpc, qpc, cd_slave_receive, 1);
 
 	if (qp->scat_cqe && is_connected(init_attr->qp_type)) {
-		int rcqe_sz;
-		int scqe_sz;
-
-		rcqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->recv_cq);
-		scqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->send_cq);
-
-		if (rcqe_sz == 128)
-			MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
-		else
-			MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA32_CQE);
-
-		if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) {
-			if (scqe_sz == 128)
-				MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA64_CQE);
-			else
-				MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
-		}
+		configure_responder_scat_cqe(init_attr, qpc);
+		configure_requester_scat_cqe(dev, init_attr, qpc);
 	}
 
 	if (qp->rq.wqe_cnt) {
@@ -2302,6 +2332,9 @@ static struct ib_qp *mlx5_ib_create_dct(struct ib_pd *pd,
 	MLX5_SET64(dctc, dctc, dc_access_key, ucmd->access_key);
 	MLX5_SET(dctc, dctc, user_index, uidx);
 
+	if (ucmd->flags & MLX5_QP_FLAG_SCATTER_CQE)
+		configure_responder_scat_cqe(attr, dctc);
+
 	qp->state = IB_QPS_RESET;
 
 	return &qp->ibqp;
-- 
2.14.4


* [PATCH rdma-next 3/4] IB/mlx5: Verify that driver supports user flags
From: Leon Romanovsky @ 2018-10-07  9:03 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev

From: Yonatan Cohen <yonatanc@mellanox.com>

Flags sent down from userspace might not be supported by the running
driver, which might lead to unwanted bugs. To solve this, add a macro
that tests for unsupported flags.

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index bae48bdf281c..17c4b6641933 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1728,6 +1728,15 @@ static void configure_requester_scat_cqe(struct mlx5_ib_dev *dev,
 		MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
 }
 
+#define MLX5_QP_CREATE_FLAGS_NOT_SUPPORTED(flags) \
+	 ((flags) & ~(                            \
+		MLX5_QP_FLAG_SIGNATURE		| \
+		MLX5_QP_FLAG_SCATTER_CQE	| \
+		MLX5_QP_FLAG_TUNNEL_OFFLOADS	| \
+		MLX5_QP_FLAG_BFREG_INDEX	| \
+		MLX5_QP_FLAG_TYPE_DCT		| \
+		MLX5_QP_FLAG_TYPE_DCI))
+
 static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			    struct ib_qp_init_attr *init_attr,
 			    struct ib_udata *udata, struct mlx5_ib_qp *qp)
@@ -1825,6 +1834,9 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			return -EFAULT;
 		}
 
+		if (MLX5_QP_CREATE_FLAGS_NOT_SUPPORTED(ucmd.flags))
+			return -EINVAL;
+
 		err = get_qp_user_index(to_mucontext(pd->uobject->context),
 					&ucmd, udata->inlen, &uidx);
 		if (err)
-- 
2.14.4


* [PATCH rdma-next 4/4] IB/mlx5: Allow scatter to CQE without global signaled WRs
From: Leon Romanovsky @ 2018-10-07  9:03 UTC
  To: Doug Ledford, Jason Gunthorpe
  Cc: Leon Romanovsky, RDMA mailing list, Guy Levi, Yonatan Cohen,
	Saeed Mahameed, linux-netdev

From: Yonatan Cohen <yonatanc@mellanox.com>

Requester scatter to CQE is restricted to QPs configured to signal
all WRs.

This patch adds the ability to force-enable scatter to CQE in the
requester without sig_all, for users who do not want all WRs signaled
but rather only the ones whose data is found in the CQE.
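
To illustrate the intended use with plain verbs (a sketch; the helper
below is ours, not part of the series): sig_all is left off, only the
WR of interest is signaled, and a small RDMA_READ response can then be
scattered into its CQE:

	#include <infiniband/verbs.h>

	/* Post an RDMA_READ and signal only this WR; other WRs on the
	 * QP remain unsignaled. */
	static int post_signaled_read(struct ibv_qp *qp, struct ibv_sge *sge,
				      uint64_t remote_addr, uint32_t rkey)
	{
		struct ibv_send_wr wr = {}, *bad_wr;

		wr.opcode              = IBV_WR_RDMA_READ;
		wr.send_flags          = IBV_SEND_SIGNALED;
		wr.sg_list             = sge;
		wr.num_sge             = 1;
		wr.wr.rdma.remote_addr = remote_addr;
		wr.wr.rdma.rkey        = rkey;

		return ibv_post_send(qp, &wr, &bad_wr);
	}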

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Guy Levi <guyle@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 14 +++++++++++---
 include/uapi/rdma/mlx5-abi.h    |  1 +
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 17c4b6641933..4397b5f27125 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1706,15 +1706,20 @@ static void configure_responder_scat_cqe(struct ib_qp_init_attr *init_attr,
 
 static void configure_requester_scat_cqe(struct mlx5_ib_dev *dev,
 					 struct ib_qp_init_attr *init_attr,
+					 struct mlx5_ib_create_qp *ucmd,
 					 void *qpc)
 {
 	enum ib_qp_type qpt = init_attr->qp_type;
 	int scqe_sz;
+	bool allow_scat_cqe = 0;
 
 	if (qpt == IB_QPT_UC || qpt == IB_QPT_UD)
 		return;
 
-	if (init_attr->sq_sig_type != IB_SIGNAL_ALL_WR)
+	if (ucmd)
+		allow_scat_cqe = ucmd->flags & MLX5_QP_FLAG_ALLOW_SCATTER_CQE;
+
+	if (!allow_scat_cqe && init_attr->sq_sig_type != IB_SIGNAL_ALL_WR)
 		return;
 
 	scqe_sz = mlx5_ib_get_cqe_size(init_attr->send_cq);
@@ -1735,7 +1740,8 @@ static void configure_requester_scat_cqe(struct mlx5_ib_dev *dev,
 		MLX5_QP_FLAG_TUNNEL_OFFLOADS	| \
 		MLX5_QP_FLAG_BFREG_INDEX	| \
 		MLX5_QP_FLAG_TYPE_DCT		| \
-		MLX5_QP_FLAG_TYPE_DCI))
+		MLX5_QP_FLAG_TYPE_DCI		| \
+		MLX5_QP_FLAG_ALLOW_SCATTER_CQE))
 
 static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			    struct ib_qp_init_attr *init_attr,
@@ -1969,7 +1975,9 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 
 	if (qp->scat_cqe && is_connected(init_attr->qp_type)) {
 		configure_responder_scat_cqe(init_attr, qpc);
-		configure_requester_scat_cqe(dev, init_attr, qpc);
+		configure_requester_scat_cqe(dev, init_attr,
+					     (pd && pd->uobject) ? &ucmd : NULL,
+					     qpc);
 	}
 
 	if (qp->rq.wqe_cnt) {
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index 6056625237cf..8fa9f90e2bb1 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -47,6 +47,7 @@ enum {
 	MLX5_QP_FLAG_TYPE_DCI		= 1 << 5,
 	MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC = 1 << 6,
 	MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC = 1 << 7,
+	MLX5_QP_FLAG_ALLOW_SCATTER_CQE	= 1 << 8,
 };
 
 enum {
-- 
2.14.4


* Re: [PATCH rdma-next 3/4] IB/mlx5: Verify that driver supports user flags
From: Jason Gunthorpe @ 2018-10-07 19:13 UTC
  To: Leon Romanovsky
  Cc: Doug Ledford, Leon Romanovsky, RDMA mailing list, Guy Levi,
	Yonatan Cohen, Saeed Mahameed, linux-netdev

On Sun, Oct 07, 2018 at 12:03:36PM +0300, Leon Romanovsky wrote:
> From: Yonatan Cohen <yonatanc@mellanox.com>
> 
> Flags sent down from userspace might not be supported by the running
> driver, which might lead to unwanted bugs. To solve this, add a macro
> that tests for unsupported flags.
> 
> Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
>  drivers/infiniband/hw/mlx5/qp.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
> index bae48bdf281c..17c4b6641933 100644
> +++ b/drivers/infiniband/hw/mlx5/qp.c
> @@ -1728,6 +1728,15 @@ static void configure_requester_scat_cqe(struct mlx5_ib_dev *dev,
>  		MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
>  }
>  
> +#define MLX5_QP_CREATE_FLAGS_NOT_SUPPORTED(flags) \
> +	 ((flags) & ~(                            \

This needs a cast; it would be better to add something like the
check_comp_mask function in rdma-core than this goofy macro thing.

Jason
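
For reference, the comp-mask style check being suggested is roughly the
following (a sketch; the exact helper in rdma-core may differ):

	#include <stdbool.h>
	#include <stdint.h>

	/* Reject any input bits outside the supported set. */
	static inline bool check_comp_mask(uint64_t input, uint64_t supported)
	{
		return (input & ~supported) == 0;
	}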

