* [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support
@ 2017-08-17 12:52 Leon Romanovsky
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky

This patch series adds tag matching support to the Mellanox ConnectX HCA
driver. It introduces a new hardware object, the eXtended shared Receive
Queue (XRQ), which follows SRQ semantics with the addition of extended
receive buffer topologies and offloads.

This series adds the tag matching topology and rendezvous offload.

Changelog:
v0->v1:
 * Rebased version, no change
RFC->v0:
 * Followed after RFC posted on the ML and OFVWG discussions
 * Implements agreed verbs interface
 * Rebased on top of latest version
 * Added feature description under Documentation/infiniband
 * In struct ib_srq_init_attr moved CQ outside XRC inner struct.
 * Added max size of the information passed after the RNDV header
 * Added hca_sq_owner HW flag for RNDV QPs

Thanks

----------------------------------------------------------------

Doug,

Please note that I merged our shared pull request from the mailing
list before sending this series. Please let me know if something needs
to be redone.

Thanks

----------------------------------------------------------------
TAG matching support

Message Passing Interface (MPI) is a communication protocol that is
widely used for exchange of messages among processes in high-performance
computing (HPC) systems. Messages sent from a sending process to a
destination process are marked with an identifying label, referred to as
a tag. Destination processes post buffers in local memory that are
similarly marked with tags. When a message is received by the receiver
(i.e., the host computer on which the destination process is running),
the message is stored in a buffer whose tag matches the message tag. The
process of finding a buffer with a matching tag for the received packet
is called tag matching.
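The matching step described above can be sketched in plain C. This is an illustrative model only: the structure and function names are invented for this sketch and are not part of any MPI or verbs API.

```c
#include <stddef.h>

/* Illustrative posted-receive entry; layout is hypothetical. */
struct posted_recv {
	unsigned long tag;  /* tag this receive buffer was posted with */
	void *buf;          /* local memory buffer */
	int valid;          /* 1 while the buffer is still matchable */
};

/*
 * Find the first posted buffer whose tag equals the message tag,
 * consume it, and return its index; -1 means the message is
 * "unexpected" (no buffer posted yet).
 */
static int tag_match(struct posted_recv *list, size_t n,
		     unsigned long msg_tag)
{
	for (size_t i = 0; i < n; i++) {
		if (list[i].valid && list[i].tag == msg_tag) {
			list[i].valid = 0;  /* each posted tag matches once */
			return (int)i;
		}
	}
	return -1;
}
```

When this loop runs on the host CPU for every arriving message, it is exactly the software cost that the hardware offload in this series is meant to remove.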

There are two protocols that are generally used to send messages over
MPI: The "Eager Protocol" is best suited to small messages that are
simply sent to the destination process and received in an appropriate
matching buffer. The "Rendezvous Protocol" is better suited to large
messages. In Rendezvous, when the sender process has a large message to
send, it first sends a small message to the destination process
announcing its intention to send the large message. This small message
is referred to as an RTS (ready to send) message. The RTS includes the
message tag and buffer address in the sender. The destination process
matches the RTS to a posted receive buffer, or posts such a buffer if
one does not already exist. Once a matching receive buffer has been
posted at the destination process side, the receiver initiates a remote
direct memory access (RDMA) read request to read the data from the
buffer address listed by the sender in the RTS message.
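A minimal sketch of the receiver side of this Rendezvous handshake, with memcpy standing in for the RDMA read the hardware would issue. The RTS layout and every name here are invented for illustration.

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical RTS message: tag plus the sender's buffer advertisement. */
struct rts_msg {
	unsigned long tag;    /* message tag for matching at the receiver */
	const void *src_addr; /* sender-side buffer address from the RTS */
	size_t len;           /* payload length */
};

/* memcpy stands in for the RDMA read initiated after the match. */
static void rdma_read(void *dst, const void *remote_src, size_t len)
{
	memcpy(dst, remote_src, len);
}

/*
 * Receiver: match the RTS tag to a posted buffer, then pull the payload.
 * Returns the number of bytes read, or 0 when no matching buffer exists
 * (in which case the RTS would wait for a buffer to be posted).
 */
static size_t handle_rts(const struct rts_msg *rts,
			 unsigned long posted_tag, void *posted_buf)
{
	if (rts->tag != posted_tag)
		return 0;
	rdma_read(posted_buf, rts->src_addr, rts->len);
	return rts->len;
}
```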

MPI tag matching, when performed in software by a host processor, can
consume substantial host resources, thus detracting from the performance
of the actual software applications that are using MPI for
communications. One possible solution is to offload the entire tag
matching process to a peripheral hardware device, such as a network
interface controller (NIC). In this case, the software application using
MPI will post a set of buffers in a memory of the host processor and
will pass the entire list of tags associated with the buffers to the
NIC. In large-scale networks, however, the NIC may be required to
simultaneously support many communicating processes and contexts
(referred to in MPI parlance as "ranks" and "communicators,"
respectively). NIC access to and matching of the large lists of tags
involved in such a scenario can itself become a bottleneck. The NIC must
also be able to handle "unexpected" traffic, for which buffers and tags
have not yet been posted, which may also degrade performance.

When the NIC receives a message over the network from one of the peer
processes, and the message contains a label in accordance with the
protocol, the NIC compares the label to the labels in the part of the
list that was pushed to the NIC. Upon finding a match to the label, the
NIC writes data conveyed in the message to the buffer in the memory that
is associated with this label and submits a notification to the software
process. The notification serves two purposes: both to indicate to the
software process that the label has been consumed, so that the process
will update the list of the labels posted to the NIC; and to inform the
software process that the data are available in the buffer. In some
cases (such as when the NIC retrieves the data from the remote node by
RDMA), the NIC may submit two notifications, in the form of completion
reports, of which the first informs the software process of the
consumption of the label and the second announces availability of the
data.
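The two-notification scheme can be sketched as follows. The opcodes and helper are hypothetical stand-ins for this description, not the actual mlx5 completion format.

```c
/* Invented opcodes modeling the two completion reports described above. */
enum tm_cqe_opcode {
	TM_CQE_TAG_CONSUMED,  /* label consumed: software may repost tags */
	TM_CQE_DATA_ARRIVED,  /* payload landed in the matched buffer */
};

struct tm_cqe {
	enum tm_cqe_opcode op;
	unsigned long tag;
};

/*
 * Process one completion for the given tag. Returns 1 once the message
 * is fully delivered, i.e. only after both the tag-consumed and the
 * data-arrival reports for this tag have been seen.
 */
static int process_cqe(const struct tm_cqe *cqe, unsigned long tag,
		       int *tag_consumed)
{
	if (cqe->tag != tag)
		return 0;
	if (cqe->op == TM_CQE_TAG_CONSUMED) {
		*tag_consumed = 1;  /* safe to refill the NIC's tag list */
		return 0;
	}
	/* TM_CQE_DATA_ARRIVED: data trusted only after both reports */
	return *tag_consumed;
}
```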

This patch series adds tag matching support to the Mellanox ConnectX HCA
driver. It introduces a new hardware object, the eXtended shared Receive
Queue (XRQ), which follows SRQ semantics with the addition of extended
receive buffer topologies and offloads.

This series adds the tag matching topology and rendezvous offload.

----------------------------------------------------------------
The following changes since commit b7a79bc53ce8d73daebb2b31345f86f5e25c195c:

  net/mlx5: Update HW layout definitions (2017-08-17 13:15:08 +0300)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git tags/rdma-next-2017-08-17-1

for you to fetch changes up to 89f4e752bf8000621770202d2e9855e187536b6d:

  Documentation: Hardware tag matching (2017-08-17 13:15:13 +0300)

Artemy Kovalyov (10):
      IB/core: Add XRQ capabilities
      IB/core: Separate CQ handle in SRQ context
      IB/core: Add new SRQ type IB_SRQT_TM
      IB/uverbs: Add XRQ creation parameter to UAPI
      IB/uverbs: Add new SRQ type IB_SRQT_TM
      IB/uverbs: Expose XRQ capabilities
      IB/mlx5: Fill XRQ capabilities
      net/mlx5: Add XRQ support
      IB/mlx5: Support IB_SRQT_TM
      Documentation: Hardware tag matching

 Documentation/infiniband/tag_matching.txt     |  64 +++++++++++
 drivers/infiniband/core/uverbs_cmd.c          |  43 ++++++--
 drivers/infiniband/core/verbs.c               |  16 +--
 drivers/infiniband/hw/mlx4/srq.c              |   4 +-
 drivers/infiniband/hw/mlx5/main.c             |  20 +++-
 drivers/infiniband/hw/mlx5/mlx5_ib.h          |   5 +
 drivers/infiniband/hw/mlx5/qp.c               |   9 +-
 drivers/infiniband/hw/mlx5/srq.c              |  29 +++--
 drivers/net/ethernet/mellanox/mlx5/core/srq.c | 150 ++++++++++++++++++++++++--
 include/linux/mlx5/driver.h                   |   1 +
 include/linux/mlx5/srq.h                      |   5 +
 include/rdma/ib_verbs.h                       |  58 +++++++---
 include/uapi/rdma/ib_user_verbs.h             |  17 ++-
 13 files changed, 364 insertions(+), 57 deletions(-)
 create mode 100644 Documentation/infiniband/tag_matching.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* [rdma-next v1 REPOST 01/10] IB/core: Add XRQ capabilities
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Artemy Kovalyov

From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

This patch adds the following TM XRQ capabilities:

* max_rndv_hdr_size - Max size of rendezvous request message
* max_num_tags - Max number of entries in tag matching list
* max_ops - Max number of outstanding list operations
* max_sge - Max number of SGE in tag matching entry
* flags - the following flags are currently defined:
    - IB_TM_CAP_RC - Support tag matching on RC transport
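As a hedged illustration of how a consumer might gate on these capabilities: the struct below mirrors the ib_xrq_caps fields added by this patch, but the helper function and the sample values are invented for this sketch.

```c
/* Mirrors enum ib_tm_cap_flags from the patch. */
enum ib_tm_cap_flags {
	IB_TM_CAP_RC = 1 << 0,  /* tag matching supported on RC transport */
};

/* Mirrors struct ib_xrq_caps from the patch (u32 fields). */
struct ib_xrq_caps {
	unsigned int max_rndv_hdr_size; /* max size of RNDV header */
	unsigned int max_num_tags;      /* max entries in tag matching list */
	unsigned int flags;             /* from enum ib_tm_cap_flags */
	unsigned int max_ops;           /* max outstanding list operations */
	unsigned int max_sge;           /* max SGE per tag matching entry */
};

/* Hypothetical helper: gate a TM-over-RC code path on the reported caps. */
static int tm_usable_on_rc(const struct ib_xrq_caps *caps)
{
	return (caps->flags & IB_TM_CAP_RC) && caps->max_num_tags > 0;
}
```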

Signed-off-by: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Yossi Itigin <yosefe-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
 include/rdma/ib_verbs.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 4ce188128aa9..afb863212419 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -280,6 +280,24 @@ struct ib_rss_caps {
 	u32 max_rwq_indirection_table_size;
 };

+enum ib_tm_cap_flags {
+	/*  Support tag matching on RC transport */
+	IB_TM_CAP_RC		    = 1 << 0,
+};
+
+struct ib_xrq_caps {
+	/* Max size of RNDV header */
+	u32 max_rndv_hdr_size;
+	/* Max number of entries in tag matching list */
+	u32 max_num_tags;
+	/* From enum ib_tm_cap_flags */
+	u32 flags;
+	/* Max number of outstanding list operations */
+	u32 max_ops;
+	/* Max number of SGE in tag matching entry */
+	u32 max_sge;
+};
+
 enum ib_cq_creation_flags {
 	IB_CQ_FLAGS_TIMESTAMP_COMPLETION   = 1 << 0,
 	IB_CQ_FLAGS_IGNORE_OVERRUN	   = 1 << 1,
@@ -340,6 +358,7 @@ struct ib_device_attr {
 	struct ib_rss_caps	rss_caps;
 	u32			max_wq_type_rq;
 	u32			raw_packet_caps; /* Use ib_raw_packet_caps enum */
+	struct ib_xrq_caps	xrq_caps;
 };

 enum ib_mtu {
--
2.14.1


* [rdma-next v1 REPOST 02/10] IB/core: Separate CQ handle in SRQ context
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Artemy Kovalyov

From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Before this change, the CQ attached to an SRQ was part of the XRC-specific
extension. Moving the CQ handle out makes it available to other SRQ types
that extend SRQ functionality.

Signed-off-by: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Yossi Itigin <yosefe-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
 drivers/infiniband/core/uverbs_cmd.c | 27 +++++++++++++++++----------
 drivers/infiniband/core/verbs.c      | 16 +++++++++-------
 drivers/infiniband/hw/mlx4/srq.c     |  4 ++--
 drivers/infiniband/hw/mlx5/main.c    | 10 +++++-----
 drivers/infiniband/hw/mlx5/srq.c     | 11 +++++++----
 include/rdma/ib_verbs.h              | 31 ++++++++++++++++++++-----------
 6 files changed, 60 insertions(+), 39 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 515425a50059..afa4c1b7891a 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -3479,10 +3479,12 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,

 		obj->uxrcd = container_of(xrcd_uobj, struct ib_uxrcd_object, uobject);
 		atomic_inc(&obj->uxrcd->refcnt);
+	}

-		attr.ext.xrc.cq  = uobj_get_obj_read(cq, cmd->cq_handle,
-						     file->ucontext);
-		if (!attr.ext.xrc.cq) {
+	if (ib_srq_has_cq(cmd->srq_type)) {
+		attr.ext.cq  = uobj_get_obj_read(cq, cmd->cq_handle,
+						 file->ucontext);
+		if (!attr.ext.cq) {
 			ret = -EINVAL;
 			goto err_put_xrcd;
 		}
@@ -3517,10 +3519,13 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
 	srq->event_handler = attr.event_handler;
 	srq->srq_context   = attr.srq_context;

+	if (ib_srq_has_cq(cmd->srq_type)) {
+		srq->ext.cq       = attr.ext.cq;
+		atomic_inc(&attr.ext.cq->usecnt);
+	}
+
 	if (cmd->srq_type == IB_SRQT_XRC) {
-		srq->ext.xrc.cq   = attr.ext.xrc.cq;
 		srq->ext.xrc.xrcd = attr.ext.xrc.xrcd;
-		atomic_inc(&attr.ext.xrc.cq->usecnt);
 		atomic_inc(&attr.ext.xrc.xrcd->usecnt);
 	}

@@ -3543,10 +3548,12 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
 		goto err_copy;
 	}

-	if (cmd->srq_type == IB_SRQT_XRC) {
+	if (cmd->srq_type == IB_SRQT_XRC)
 		uobj_put_read(xrcd_uobj);
-		uobj_put_obj_read(attr.ext.xrc.cq);
-	}
+
+	if (ib_srq_has_cq(cmd->srq_type))
+		uobj_put_obj_read(attr.ext.cq);
+
 	uobj_put_obj_read(pd);
 	uobj_alloc_commit(&obj->uevent.uobject);

@@ -3559,8 +3566,8 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
 	uobj_put_obj_read(pd);

 err_put_cq:
-	if (cmd->srq_type == IB_SRQT_XRC)
-		uobj_put_obj_read(attr.ext.xrc.cq);
+	if (ib_srq_has_cq(cmd->srq_type))
+		uobj_put_obj_read(attr.ext.cq);

 err_put_xrcd:
 	if (cmd->srq_type == IB_SRQT_XRC) {
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 7dd13962fc4c..bf5ce137b043 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -622,11 +622,13 @@ struct ib_srq *ib_create_srq(struct ib_pd *pd,
 		srq->event_handler = srq_init_attr->event_handler;
 		srq->srq_context   = srq_init_attr->srq_context;
 		srq->srq_type      = srq_init_attr->srq_type;
+		if (ib_srq_has_cq(srq->srq_type)) {
+			srq->ext.cq   = srq_init_attr->ext.cq;
+			atomic_inc(&srq->ext.cq->usecnt);
+		}
 		if (srq->srq_type == IB_SRQT_XRC) {
 			srq->ext.xrc.xrcd = srq_init_attr->ext.xrc.xrcd;
-			srq->ext.xrc.cq   = srq_init_attr->ext.xrc.cq;
 			atomic_inc(&srq->ext.xrc.xrcd->usecnt);
-			atomic_inc(&srq->ext.xrc.cq->usecnt);
 		}
 		atomic_inc(&pd->usecnt);
 		atomic_set(&srq->usecnt, 0);
@@ -667,18 +669,18 @@ int ib_destroy_srq(struct ib_srq *srq)

 	pd = srq->pd;
 	srq_type = srq->srq_type;
-	if (srq_type == IB_SRQT_XRC) {
+	if (ib_srq_has_cq(srq_type))
+		cq = srq->ext.cq;
+	if (srq_type == IB_SRQT_XRC)
 		xrcd = srq->ext.xrc.xrcd;
-		cq = srq->ext.xrc.cq;
-	}

 	ret = srq->device->destroy_srq(srq);
 	if (!ret) {
 		atomic_dec(&pd->usecnt);
-		if (srq_type == IB_SRQT_XRC) {
+		if (srq_type == IB_SRQT_XRC)
 			atomic_dec(&xrcd->usecnt);
+		if (ib_srq_has_cq(srq_type))
 			atomic_dec(&cq->usecnt);
-		}
 	}

 	return ret;
diff --git a/drivers/infiniband/hw/mlx4/srq.c b/drivers/infiniband/hw/mlx4/srq.c
index 0facaf5f6d23..210fe964a65d 100644
--- a/drivers/infiniband/hw/mlx4/srq.c
+++ b/drivers/infiniband/hw/mlx4/srq.c
@@ -183,8 +183,8 @@ struct ib_srq *mlx4_ib_create_srq(struct ib_pd *pd,
 		}
 	}

-	cqn = (init_attr->srq_type == IB_SRQT_XRC) ?
-		to_mcq(init_attr->ext.xrc.cq)->mcq.cqn : 0;
+	cqn = ib_srq_has_cq(init_attr->srq_type) ?
+		to_mcq(init_attr->ext.cq)->mcq.cqn : 0;
 	xrcdn = (init_attr->srq_type == IB_SRQT_XRC) ?
 		to_mxrcd(init_attr->ext.xrc.xrcd)->xrcdn :
 		(u16) dev->dev->caps.reserved_xrcds;
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 47cf622c6d24..e9ea759f35dd 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -3154,7 +3154,7 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
 	attr.attr.max_sge = 1;
 	attr.attr.max_wr = 1;
 	attr.srq_type = IB_SRQT_XRC;
-	attr.ext.xrc.cq = devr->c0;
+	attr.ext.cq = devr->c0;
 	attr.ext.xrc.xrcd = devr->x0;

 	devr->s0 = mlx5_ib_create_srq(devr->p0, &attr, NULL);
@@ -3169,9 +3169,9 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
 	devr->s0->srq_context   = NULL;
 	devr->s0->srq_type      = IB_SRQT_XRC;
 	devr->s0->ext.xrc.xrcd	= devr->x0;
-	devr->s0->ext.xrc.cq	= devr->c0;
+	devr->s0->ext.cq	= devr->c0;
 	atomic_inc(&devr->s0->ext.xrc.xrcd->usecnt);
-	atomic_inc(&devr->s0->ext.xrc.cq->usecnt);
+	atomic_inc(&devr->s0->ext.cq->usecnt);
 	atomic_inc(&devr->p0->usecnt);
 	atomic_set(&devr->s0->usecnt, 0);

@@ -3190,9 +3190,9 @@ static int create_dev_resources(struct mlx5_ib_resources *devr)
 	devr->s1->event_handler = NULL;
 	devr->s1->srq_context   = NULL;
 	devr->s1->srq_type      = IB_SRQT_BASIC;
-	devr->s1->ext.xrc.cq	= devr->c0;
+	devr->s1->ext.cq	= devr->c0;
 	atomic_inc(&devr->p0->usecnt);
-	atomic_set(&devr->s0->usecnt, 0);
+	atomic_set(&devr->s1->usecnt, 0);

 	for (port = 0; port < ARRAY_SIZE(devr->ports); ++port) {
 		INIT_WORK(&devr->ports[port].pkey_change_work,
diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
index 43707b101f47..022b4c642047 100644
--- a/drivers/infiniband/hw/mlx5/srq.c
+++ b/drivers/infiniband/hw/mlx5/srq.c
@@ -292,13 +292,16 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
 	in.wqe_shift = srq->msrq.wqe_shift - 4;
 	if (srq->wq_sig)
 		in.flags |= MLX5_SRQ_FLAG_WQ_SIG;
-	if (init_attr->srq_type == IB_SRQT_XRC) {
+
+	if (init_attr->srq_type == IB_SRQT_XRC)
 		in.xrcd = to_mxrcd(init_attr->ext.xrc.xrcd)->xrcdn;
-		in.cqn = to_mcq(init_attr->ext.xrc.cq)->mcq.cqn;
-	} else if (init_attr->srq_type == IB_SRQT_BASIC) {
+	else
 		in.xrcd = to_mxrcd(dev->devr.x0)->xrcdn;
+
+	if (ib_srq_has_cq(init_attr->srq_type))
+		in.cqn = to_mcq(init_attr->ext.cq)->mcq.cqn;
+	else
 		in.cqn = to_mcq(dev->devr.c0)->mcq.cqn;
-	}

 	in.pd = to_mpd(pd)->pdn;
 	in.db_record = srq->db.dma;
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index afb863212419..665ebf33424a 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -993,6 +993,11 @@ enum ib_srq_type {
 	IB_SRQT_XRC
 };

+static inline bool ib_srq_has_cq(enum ib_srq_type srq_type)
+{
+	return srq_type == IB_SRQT_XRC;
+}
+
 enum ib_srq_attr_mask {
 	IB_SRQ_MAX_WR	= 1 << 0,
 	IB_SRQ_LIMIT	= 1 << 1,
@@ -1010,11 +1015,13 @@ struct ib_srq_init_attr {
 	struct ib_srq_attr	attr;
 	enum ib_srq_type	srq_type;

-	union {
-		struct {
-			struct ib_xrcd *xrcd;
-			struct ib_cq   *cq;
-		} xrc;
+	struct {
+		struct ib_cq   *cq;
+		union {
+			struct {
+				struct ib_xrcd *xrcd;
+			} xrc;
+		};
 	} ext;
 };

@@ -1553,12 +1560,14 @@ struct ib_srq {
 	enum ib_srq_type	srq_type;
 	atomic_t		usecnt;

-	union {
-		struct {
-			struct ib_xrcd *xrcd;
-			struct ib_cq   *cq;
-			u32		srq_num;
-		} xrc;
+	struct {
+		struct ib_cq   *cq;
+		union {
+			struct {
+				struct ib_xrcd *xrcd;
+				u32		srq_num;
+			} xrc;
+		};
 	} ext;
 };

--
2.14.1


* [rdma-next v1 REPOST 03/10] IB/core: Add new SRQ type IB_SRQT_TM
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Artemy Kovalyov

From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

This patch adds a new SRQ type, IB_SRQT_TM. The new SRQ type supports tag
matching and rendezvous offloads for MPI applications.

When the SRQ receives a message, it searches the matching list for the
corresponding posted receive buffer. The process of searching the
matching list is called tag matching.
If the tag matching results in a match, the received message is placed
at the address specified by the receive buffer. If no match is found,
the message is placed in a generic buffer until the corresponding
receive buffer is posted. Such messages are called unexpected, and
their set is called the unexpected list.
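The expected/unexpected split can be sketched in userspace C. Everything here (names, the fixed-size list, the 64-byte generic buffer) is illustrative only, not the SRQ implementation.

```c
#include <stddef.h>
#include <string.h>

#define MAX_UNEXPECTED 8  /* illustrative fixed-size unexpected list */

struct unexpected_msg {
	unsigned long tag;
	char data[64];   /* generic buffer holding the early arrival */
	size_t len;
	int used;
};

static struct unexpected_msg unexpected[MAX_UNEXPECTED];

/* Arrival with no posted buffer: park it on the unexpected list. */
static int park_unexpected(unsigned long tag, const char *data, size_t len)
{
	for (int i = 0; i < MAX_UNEXPECTED; i++) {
		if (!unexpected[i].used) {
			unexpected[i].tag = tag;
			unexpected[i].len = len < 64 ? len : 64;
			memcpy(unexpected[i].data, data, unexpected[i].len);
			unexpected[i].used = 1;
			return i;
		}
	}
	return -1;  /* unexpected list full */
}

/* Posting a receive first drains a matching unexpected entry, if any. */
static int post_recv(unsigned long tag, char *dst, size_t len)
{
	for (int i = 0; i < MAX_UNEXPECTED; i++) {
		if (unexpected[i].used && unexpected[i].tag == tag) {
			size_t n = unexpected[i].len < len ?
				   unexpected[i].len : len;
			memcpy(dst, unexpected[i].data, n);
			unexpected[i].used = 0;
			return 1;  /* satisfied from the unexpected list */
		}
	}
	return 0;  /* buffer stays posted for future arrivals */
}
```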

Signed-off-by: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Yossi Itigin <yosefe-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
 include/rdma/ib_verbs.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 665ebf33424a..e0e48a256d63 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -990,12 +990,14 @@ enum ib_cq_notify_flags {

 enum ib_srq_type {
 	IB_SRQT_BASIC,
-	IB_SRQT_XRC
+	IB_SRQT_XRC,
+	IB_SRQT_TM,
 };

 static inline bool ib_srq_has_cq(enum ib_srq_type srq_type)
 {
-	return srq_type == IB_SRQT_XRC;
+	return srq_type == IB_SRQT_XRC ||
+	       srq_type == IB_SRQT_TM;
 }

 enum ib_srq_attr_mask {
@@ -1021,6 +1023,10 @@ struct ib_srq_init_attr {
 			struct {
 				struct ib_xrcd *xrcd;
 			} xrc;
+
+			struct {
+				u32		max_num_tags;
+			} tag_matching;
 		};
 	} ext;
 };
--
2.14.1


* [rdma-next v1 REPOST 04/10] IB/uverbs: Add XRQ creation parameter to UAPI
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Artemy Kovalyov

From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Add a max_num_tags parameter to struct ib_uverbs_create_xsrq.
If the SRQ type is tag matching, this field defines the maximum size
of the tag matching list. Otherwise, it is expected to be zero.
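The rule in the last sentence can be expressed as a small validation helper. The enum mirrors ib_srq_type after this series; the function itself is hypothetical, shown only to make the contract concrete.

```c
/* Mirrors enum ib_srq_type after this series (names shortened). */
enum srq_type { SRQT_BASIC, SRQT_XRC, SRQT_TM };

/*
 * Illustrative argument check: the tag list size is meaningful only
 * for the tag-matching SRQ type and must be zero for all other types.
 */
static int xsrq_args_valid(enum srq_type type, unsigned int max_num_tags)
{
	if (type != SRQT_TM)
		return max_num_tags == 0;
	return 1;
}
```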

Signed-off-by: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Yossi Itigin <yosefe-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
 include/uapi/rdma/ib_user_verbs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h
index 63656d2e8705..d5434bbf40c8 100644
--- a/include/uapi/rdma/ib_user_verbs.h
+++ b/include/uapi/rdma/ib_user_verbs.h
@@ -1024,7 +1024,7 @@ struct ib_uverbs_create_xsrq {
 	__u32 max_wr;
 	__u32 max_sge;
 	__u32 srq_limit;
-	__u32 reserved;
+	__u32 max_num_tags;
 	__u32 xrcd_handle;
 	__u32 cq_handle;
 	__u64 driver_data[0];
--
2.14.1


* [rdma-next v1 REPOST 05/10] IB/uverbs: Add new SRQ type IB_SRQT_TM
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Artemy Kovalyov

From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Add a new SRQ type supporting the tag matching feature.

When the SRQ receives a message, it searches the matching list for the
corresponding posted receive buffer. The process of searching the
matching list is called tag matching.

If the tag matching results in a match, the received message is placed
at the address specified by the receive buffer. If no match is found,
the message is placed in a generic buffer until the corresponding
receive buffer is posted. Such messages are called unexpected, and
their set is called the unexpected list.

Signed-off-by: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Yossi Itigin <yosefe-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
 drivers/infiniband/core/uverbs_cmd.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index afa4c1b7891a..e3ac272a0fa9 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -1430,7 +1430,7 @@ static int create_qp(struct ib_uverbs_file *file,
 			if (cmd->is_srq) {
 				srq = uobj_get_obj_read(srq, cmd->srq_handle,
 							file->ucontext);
-				if (!srq || srq->srq_type != IB_SRQT_BASIC) {
+				if (!srq || srq->srq_type == IB_SRQT_XRC) {
 					ret = -EINVAL;
 					goto err_put;
 				}
@@ -3463,6 +3463,9 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);

+	if (cmd->srq_type == IB_SRQT_TM)
+		attr.ext.tag_matching.max_num_tags = cmd->max_num_tags;
+
 	if (cmd->srq_type == IB_SRQT_XRC) {
 		xrcd_uobj = uobj_get_read(uobj_get_type(xrcd), cmd->xrcd_handle,
 					  file->ucontext);
@@ -3597,6 +3600,7 @@ ssize_t ib_uverbs_create_srq(struct ib_uverbs_file *file,
 	if (copy_from_user(&cmd, buf, sizeof cmd))
 		return -EFAULT;

+	memset(&xcmd, 0, sizeof(xcmd));
 	xcmd.response	 = cmd.response;
 	xcmd.user_handle = cmd.user_handle;
 	xcmd.srq_type	 = IB_SRQT_BASIC;
--
2.14.1


* [rdma-next v1 REPOST 06/10] IB/uverbs: Expose XRQ capabilities
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Artemy Kovalyov

From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Make XRQ capabilities available via the ibv_query_device() verb.
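The uverbs code in this patch appends the new capability block to the response only when the caller's output buffer has room, so old userspace keeps working. A simplified, illustrative model of that extensible-response pattern (names are stand-ins, not the kernel's):

```c
#include <stddef.h>
#include <string.h>

/* Stand-in for one appended capability block, e.g. the TM caps. */
struct caps_blob {
	unsigned int a, b;
};

/*
 * Append one capability block if the caller's buffer (outlen) can hold
 * it; return the new response length, unchanged when the caller is an
 * older binary with a smaller buffer.
 */
static size_t maybe_append(char *resp, size_t resp_len, size_t outlen,
			   const struct caps_blob *caps)
{
	if (outlen < resp_len + sizeof(*caps))
		return resp_len;  /* old caller: silently skip the field */
	memcpy(resp + resp_len, caps, sizeof(*caps));
	return resp_len + sizeof(*caps);
}
```

This is the same shape as the `ucore->outlen < resp.response_length + sizeof(resp.xrq_caps)` check in the diff below.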

Signed-off-by: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Yossi Itigin <yosefe-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
 drivers/infiniband/core/uverbs_cmd.c | 10 ++++++++++
 include/uapi/rdma/ib_user_verbs.h    | 15 +++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index e3ac272a0fa9..1996f096c4f8 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -3850,6 +3850,16 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file,

 	resp.raw_packet_caps = attr.raw_packet_caps;
 	resp.response_length += sizeof(resp.raw_packet_caps);
+
+	if (ucore->outlen < resp.response_length + sizeof(resp.xrq_caps))
+		goto end;
+
+	resp.xrq_caps.max_rndv_hdr_size = attr.xrq_caps.max_rndv_hdr_size;
+	resp.xrq_caps.max_num_tags      = attr.xrq_caps.max_num_tags;
+	resp.xrq_caps.max_ops		= attr.xrq_caps.max_ops;
+	resp.xrq_caps.max_sge		= attr.xrq_caps.max_sge;
+	resp.xrq_caps.flags		= attr.xrq_caps.flags;
+	resp.response_length += sizeof(resp.xrq_caps);
 end:
 	err = ib_copy_to_udata(ucore, &resp, resp.response_length);
 	return err;
diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h
index d5434bbf40c8..9a0b6479fe0c 100644
--- a/include/uapi/rdma/ib_user_verbs.h
+++ b/include/uapi/rdma/ib_user_verbs.h
@@ -236,6 +236,20 @@ struct ib_uverbs_rss_caps {
 	__u32 reserved;
 };

+struct ib_uverbs_tm_caps {
+	/* Max size of rendezvous request message */
+	__u32 max_rndv_hdr_size;
+	/* Max number of entries in tag matching list */
+	__u32 max_num_tags;
+	/* TM flags */
+	__u32 flags;
+	/* Max number of outstanding list operations */
+	__u32 max_ops;
+	/* Max number of SGE in tag matching entry */
+	__u32 max_sge;
+	__u32 reserved;
+};
+
 struct ib_uverbs_ex_query_device_resp {
 	struct ib_uverbs_query_device_resp base;
 	__u32 comp_mask;
@@ -247,6 +261,7 @@ struct ib_uverbs_ex_query_device_resp {
 	struct ib_uverbs_rss_caps rss_caps;
 	__u32  max_wq_type_rq;
 	__u32 raw_packet_caps;
+	struct ib_uverbs_tm_caps xrq_caps;
 };

 struct ib_uverbs_query_port {
--
2.14.1


* [rdma-next v1 REPOST 07/10] IB/mlx5: Fill XRQ capabilities
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Artemy Kovalyov

From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Provide driver-specific values for the XRQ capabilities.

Signed-off-by: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Yossi Itigin <yosefe-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
 drivers/infiniband/hw/mlx5/main.c    | 10 ++++++++++
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  5 +++++
 2 files changed, 15 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index e9ea759f35dd..894aec4a7c9d 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -777,6 +777,16 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
 			1 << MLX5_CAP_GEN(dev->mdev, log_max_rq);
 	}

+	if (MLX5_CAP_GEN(mdev, tag_matching)) {
+		props->xrq_caps.max_rndv_hdr_size = MLX5_TM_MAX_RNDV_MSG_SIZE;
+		props->xrq_caps.max_num_tags =
+			(1 << MLX5_CAP_GEN(mdev, log_tag_matching_list_sz)) - 1;
+		props->xrq_caps.flags = IB_TM_CAP_RC;
+		props->xrq_caps.max_ops =
+			1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
+		props->xrq_caps.max_sge = MLX5_TM_MAX_SGE;
+	}
+
 	if (field_avail(typeof(resp), cqe_comp_caps, uhw->outlen)) {
 		resp.cqe_comp_caps.max_num =
 			MLX5_CAP_GEN(dev->mdev, cqe_compression) ?
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 7ac991070020..76a731f39d2b 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -107,6 +107,11 @@ enum {
 	MLX5_CQE_VERSION_V1,
 };

+enum {
+	MLX5_TM_MAX_RNDV_MSG_SIZE	= 64,
+	MLX5_TM_MAX_SGE			= 1,
+};
+
 struct mlx5_ib_vma_private_data {
 	struct list_head list;
 	struct vm_area_struct *vma;
--
2.14.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [rdma-next v1 REPOST 08/10] net/mlx5: Add XRQ support
       [not found] ` <20170817125212.3173-1-leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
                     ` (6 preceding siblings ...)
  2017-08-17 12:52   ` [rdma-next v1 REPOST 07/10] IB/mlx5: Fill " Leon Romanovsky
@ 2017-08-17 12:52   ` Leon Romanovsky
  2017-08-17 12:52   ` [rdma-next v1 REPOST 09/10] IB/mlx5: Support IB_SRQT_TM Leon Romanovsky
                     ` (3 subsequent siblings)
  11 siblings, 0 replies; 20+ messages in thread
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Artemy Kovalyov

From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Add support for the new XRQ (eXtended shared Receive Queue)
hardware object. It follows SRQ semantics with the addition
of extended receive buffer topologies and offloads.

Currently the tag matching topology and rendezvous offload are supported.

Signed-off-by: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Yossi Itigin <yosefe-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
 drivers/net/ethernet/mellanox/mlx5/core/srq.c | 150 ++++++++++++++++++++++++--
 include/linux/mlx5/driver.h                   |   1 +
 include/linux/mlx5/srq.h                      |   5 +
 3 files changed, 146 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/srq.c b/drivers/net/ethernet/mellanox/mlx5/core/srq.c
index f774de6f5fcb..7673da04efa4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/srq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/srq.c
@@ -435,16 +435,128 @@ static int query_rmp_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq,
 	return err;
 }

+static int create_xrq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq,
+			  struct mlx5_srq_attr *in)
+{
+	u32 create_out[MLX5_ST_SZ_DW(create_xrq_out)] = {0};
+	void *create_in;
+	void *xrqc;
+	void *wq;
+	int pas_size;
+	int inlen;
+	int err;
+
+	pas_size = get_pas_size(in);
+	inlen = MLX5_ST_SZ_BYTES(create_xrq_in) + pas_size;
+	create_in = kvzalloc(inlen, GFP_KERNEL);
+	if (!create_in)
+		return -ENOMEM;
+
+	xrqc = MLX5_ADDR_OF(create_xrq_in, create_in, xrq_context);
+	wq = MLX5_ADDR_OF(xrqc, xrqc, wq);
+
+	set_wq(wq, in);
+	memcpy(MLX5_ADDR_OF(xrqc, xrqc, wq.pas), in->pas, pas_size);
+
+	if (in->type == IB_SRQT_TM) {
+		MLX5_SET(xrqc, xrqc, topology, MLX5_XRQC_TOPOLOGY_TAG_MATCHING);
+		if (in->flags & MLX5_SRQ_FLAG_RNDV)
+			MLX5_SET(xrqc, xrqc, offload, MLX5_XRQC_OFFLOAD_RNDV);
+		MLX5_SET(xrqc, xrqc,
+			 tag_matching_topology_context.log_matching_list_sz,
+			 in->tm_log_list_size);
+	}
+	MLX5_SET(xrqc, xrqc, user_index, in->user_index);
+	MLX5_SET(xrqc, xrqc, cqn, in->cqn);
+	MLX5_SET(create_xrq_in, create_in, opcode, MLX5_CMD_OP_CREATE_XRQ);
+	err = mlx5_cmd_exec(dev, create_in, inlen, create_out,
+			    sizeof(create_out));
+	kvfree(create_in);
+	if (!err)
+		srq->srqn = MLX5_GET(create_xrq_out, create_out, xrqn);
+
+	return err;
+}
+
+static int destroy_xrq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq)
+{
+	u32 in[MLX5_ST_SZ_DW(destroy_xrq_in)] = {0};
+	u32 out[MLX5_ST_SZ_DW(destroy_xrq_out)] = {0};
+
+	MLX5_SET(destroy_xrq_in, in, opcode, MLX5_CMD_OP_DESTROY_XRQ);
+	MLX5_SET(destroy_xrq_in, in, xrqn,   srq->srqn);
+
+	return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
+}
+
+static int arm_xrq_cmd(struct mlx5_core_dev *dev,
+		       struct mlx5_core_srq *srq,
+		       u16 lwm)
+{
+	u32 out[MLX5_ST_SZ_DW(arm_rq_out)] = {0};
+	u32 in[MLX5_ST_SZ_DW(arm_rq_in)] = {0};
+
+	MLX5_SET(arm_rq_in, in, opcode,     MLX5_CMD_OP_ARM_RQ);
+	MLX5_SET(arm_rq_in, in, op_mod,     MLX5_ARM_RQ_IN_OP_MOD_XRQ);
+	MLX5_SET(arm_rq_in, in, srq_number, srq->srqn);
+	MLX5_SET(arm_rq_in, in, lwm,	    lwm);
+
+	return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
+}
+
+static int query_xrq_cmd(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq,
+			 struct mlx5_srq_attr *out)
+{
+	u32 in[MLX5_ST_SZ_DW(query_xrq_in)] = {0};
+	u32 *xrq_out;
+	int outlen = MLX5_ST_SZ_BYTES(query_xrq_out);
+	void *xrqc;
+	int err;
+
+	xrq_out = kvzalloc(outlen, GFP_KERNEL);
+	if (!xrq_out)
+		return -ENOMEM;
+
+	MLX5_SET(query_xrq_in, in, opcode, MLX5_CMD_OP_QUERY_XRQ);
+	MLX5_SET(query_xrq_in, in, xrqn, srq->srqn);
+
+	err = mlx5_cmd_exec(dev, in, sizeof(in), xrq_out, outlen);
+	if (err)
+		goto out;
+
+	xrqc = MLX5_ADDR_OF(query_xrq_out, xrq_out, xrq_context);
+	get_wq(MLX5_ADDR_OF(xrqc, xrqc, wq), out);
+	if (MLX5_GET(xrqc, xrqc, state) != MLX5_XRQC_STATE_GOOD)
+		out->flags |= MLX5_SRQ_FLAG_ERR;
+	out->tm_next_tag =
+		MLX5_GET(xrqc, xrqc,
+			 tag_matching_topology_context.append_next_index);
+	out->tm_hw_phase_cnt =
+		MLX5_GET(xrqc, xrqc,
+			 tag_matching_topology_context.hw_phase_cnt);
+	out->tm_sw_phase_cnt =
+		MLX5_GET(xrqc, xrqc,
+			 tag_matching_topology_context.sw_phase_cnt);
+
+out:
+	kvfree(xrq_out);
+	return err;
+}
+
 static int create_srq_split(struct mlx5_core_dev *dev,
 			    struct mlx5_core_srq *srq,
 			    struct mlx5_srq_attr *in)
 {
 	if (!dev->issi)
 		return create_srq_cmd(dev, srq, in);
-	else if (srq->common.res == MLX5_RES_XSRQ)
+	switch (srq->common.res) {
+	case MLX5_RES_XSRQ:
 		return create_xrc_srq_cmd(dev, srq, in);
-	else
+	case MLX5_RES_XRQ:
+		return create_xrq_cmd(dev, srq, in);
+	default:
 		return create_rmp_cmd(dev, srq, in);
+	}
 }

 static int destroy_srq_split(struct mlx5_core_dev *dev,
@@ -452,10 +564,14 @@ static int destroy_srq_split(struct mlx5_core_dev *dev,
 {
 	if (!dev->issi)
 		return destroy_srq_cmd(dev, srq);
-	else if (srq->common.res == MLX5_RES_XSRQ)
+	switch (srq->common.res) {
+	case MLX5_RES_XSRQ:
 		return destroy_xrc_srq_cmd(dev, srq);
-	else
+	case MLX5_RES_XRQ:
+		return destroy_xrq_cmd(dev, srq);
+	default:
 		return destroy_rmp_cmd(dev, srq);
+	}
 }

 int mlx5_core_create_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq,
@@ -464,10 +580,16 @@ int mlx5_core_create_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq,
 	int err;
 	struct mlx5_srq_table *table = &dev->priv.srq_table;

-	if (in->type == IB_SRQT_XRC)
+	switch (in->type) {
+	case IB_SRQT_XRC:
 		srq->common.res = MLX5_RES_XSRQ;
-	else
+		break;
+	case IB_SRQT_TM:
+		srq->common.res = MLX5_RES_XRQ;
+		break;
+	default:
 		srq->common.res = MLX5_RES_SRQ;
+	}

 	err = create_srq_split(dev, srq, in);
 	if (err)
@@ -528,10 +650,14 @@ int mlx5_core_query_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq,
 {
 	if (!dev->issi)
 		return query_srq_cmd(dev, srq, out);
-	else if (srq->common.res == MLX5_RES_XSRQ)
+	switch (srq->common.res) {
+	case MLX5_RES_XSRQ:
 		return query_xrc_srq_cmd(dev, srq, out);
-	else
+	case MLX5_RES_XRQ:
+		return query_xrq_cmd(dev, srq, out);
+	default:
 		return query_rmp_cmd(dev, srq, out);
+	}
 }
 EXPORT_SYMBOL(mlx5_core_query_srq);

@@ -540,10 +666,14 @@ int mlx5_core_arm_srq(struct mlx5_core_dev *dev, struct mlx5_core_srq *srq,
 {
 	if (!dev->issi)
 		return arm_srq_cmd(dev, srq, lwm, is_srq);
-	else if (srq->common.res == MLX5_RES_XSRQ)
+	switch (srq->common.res) {
+	case MLX5_RES_XSRQ:
 		return arm_xrc_srq_cmd(dev, srq, lwm);
-	else
+	case MLX5_RES_XRQ:
+		return arm_xrq_cmd(dev, srq, lwm);
+	default:
 		return arm_rmp_cmd(dev, srq, lwm);
+	}
 }
 EXPORT_SYMBOL(mlx5_core_arm_srq);

diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index ea89e32a0d5f..f17e51a4ef94 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -418,6 +418,7 @@ enum mlx5_res_type {
 	MLX5_RES_SQ	= MLX5_EVENT_QUEUE_TYPE_SQ,
 	MLX5_RES_SRQ	= 3,
 	MLX5_RES_XSRQ	= 4,
+	MLX5_RES_XRQ	= 5,
 };

 struct mlx5_core_rsc_common {
diff --git a/include/linux/mlx5/srq.h b/include/linux/mlx5/srq.h
index 1cde0fd53f90..24ff23e27c8a 100644
--- a/include/linux/mlx5/srq.h
+++ b/include/linux/mlx5/srq.h
@@ -38,6 +38,7 @@
 enum {
 	MLX5_SRQ_FLAG_ERR    = (1 << 0),
 	MLX5_SRQ_FLAG_WQ_SIG = (1 << 1),
+	MLX5_SRQ_FLAG_RNDV   = (1 << 2),
 };

 struct mlx5_srq_attr {
@@ -56,6 +57,10 @@ struct mlx5_srq_attr {
 	u32 user_index;
 	u64 db_record;
 	__be64 *pas;
+	u32 tm_log_list_size;
+	u32 tm_next_tag;
+	u32 tm_hw_phase_cnt;
+	u32 tm_sw_phase_cnt;
 };

 struct mlx5_core_dev;
--
2.14.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [rdma-next v1 REPOST 09/10] IB/mlx5: Support IB_SRQT_TM
       [not found] ` <20170817125212.3173-1-leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
                     ` (7 preceding siblings ...)
  2017-08-17 12:52   ` [rdma-next v1 REPOST 08/10] net/mlx5: Add XRQ support Leon Romanovsky
@ 2017-08-17 12:52   ` Leon Romanovsky
  2017-08-17 12:52   ` [rdma-next v1 REPOST 10/10] Documentation: Hardware tag matching Leon Romanovsky
                     ` (2 subsequent siblings)
  11 siblings, 0 replies; 20+ messages in thread
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Artemy Kovalyov

From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Pass to mlx5_core the flag that enables rendezvous offload, the list
size and the CQ when an SRQ is created with IB_SRQT_TM.

Signed-off-by: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Reviewed-by: Yossi Itigin <yosefe-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
 drivers/infiniband/hw/mlx5/qp.c  |  9 +++++++--
 drivers/infiniband/hw/mlx5/srq.c | 18 +++++++++++++++---
 2 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 5c7ce9bd466e..ebbfd721661d 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1718,10 +1718,15 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,

 	MLX5_SET(qpc, qpc, rq_type, get_rx_type(qp, init_attr));

-	if (qp->sq.wqe_cnt)
+	if (qp->sq.wqe_cnt) {
 		MLX5_SET(qpc, qpc, log_sq_size, ilog2(qp->sq.wqe_cnt));
-	else
+	} else {
 		MLX5_SET(qpc, qpc, no_sq, 1);
+		if (init_attr->srq &&
+		    init_attr->srq->srq_type == IB_SRQT_TM)
+			MLX5_SET(qpc, qpc, offload_type,
+				 MLX5_QPC_OFFLOAD_TYPE_RNDV);
+	}

 	/* Set default resources */
 	switch (init_attr->qp_type) {
diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
index 022b4c642047..d9af04100362 100644
--- a/drivers/infiniband/hw/mlx5/srq.c
+++ b/drivers/infiniband/hw/mlx5/srq.c
@@ -101,7 +101,7 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq,
 				 udata->inlen - sizeof(ucmd)))
 		return -EINVAL;

-	if (in->type == IB_SRQT_XRC) {
+	if (in->type != IB_SRQT_BASIC) {
 		err = get_srq_user_index(to_mucontext(pd->uobject->context),
 					 &ucmd, udata->inlen, &uidx);
 		if (err)
@@ -145,7 +145,7 @@ static int create_srq_user(struct ib_pd *pd, struct mlx5_ib_srq *srq,
 	in->log_page_size = page_shift - MLX5_ADAPTER_PAGE_SHIFT;
 	in->page_offset = offset;
 	if (MLX5_CAP_GEN(dev->mdev, cqe_version) == MLX5_CQE_VERSION_V1 &&
-	    in->type == IB_SRQT_XRC)
+	    in->type != IB_SRQT_BASIC)
 		in->user_index = uidx;

 	return 0;
@@ -205,7 +205,7 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq,

 	in->log_page_size = srq->buf.page_shift - MLX5_ADAPTER_PAGE_SHIFT;
 	if (MLX5_CAP_GEN(dev->mdev, cqe_version) == MLX5_CQE_VERSION_V1 &&
-	    in->type == IB_SRQT_XRC)
+	    in->type != IB_SRQT_BASIC)
 		in->user_index = MLX5_IB_DEFAULT_UIDX;

 	return 0;
@@ -298,6 +298,18 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
 	else
 		in.xrcd = to_mxrcd(dev->devr.x0)->xrcdn;

+	if (init_attr->srq_type == IB_SRQT_TM) {
+		in.tm_log_list_size =
+			ilog2(init_attr->ext.tag_matching.max_num_tags) + 1;
+		if (in.tm_log_list_size >
+		    MLX5_CAP_GEN(dev->mdev, log_tag_matching_list_sz)) {
+			mlx5_ib_dbg(dev, "TM SRQ max_num_tags exceeding limit\n");
+			err = -EINVAL;
+			goto err_usr_kern_srq;
+		}
+		in.flags |= MLX5_SRQ_FLAG_RNDV;
+	}
+
 	if (ib_srq_has_cq(init_attr->srq_type))
 		in.cqn = to_mcq(init_attr->ext.cq)->mcq.cqn;
 	else
--
2.14.1
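The TM SRQ creation path above derives tm_log_list_size as ilog2(max_num_tags) + 1 and rejects values beyond the log_tag_matching_list_sz capability, while patch 07 reports max_num_tags back to userspace as (1 << log_list_sz) - 1. A small Python sketch of that arithmetic (illustrative values only, not driver code) shows how the two computations round-trip:

```python
def ilog2(n):
    # Mirrors the kernel's ilog2(): floor(log2(n)) for n >= 1.
    return n.bit_length() - 1

def tm_log_list_size(max_num_tags):
    # Mirrors the driver's computation: log of the smallest
    # power-of-two list that can hold max_num_tags entries.
    return ilog2(max_num_tags) + 1

# Capability as reported by mlx5: max_num_tags = (1 << log_list_sz) - 1
log_tag_matching_list_sz = 7                        # illustrative HW value
max_num_tags = (1 << log_tag_matching_list_sz) - 1  # 127

# A request for the full advertised capacity passes the driver's check...
assert tm_log_list_size(max_num_tags) <= log_tag_matching_list_sz
# ...while one more tag than advertised would be rejected (-EINVAL).
assert tm_log_list_size(max_num_tags + 1) > log_tag_matching_list_sz
```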


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [rdma-next v1 REPOST 10/10] Documentation: Hardware tag matching
       [not found] ` <20170817125212.3173-1-leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
                     ` (8 preceding siblings ...)
  2017-08-17 12:52   ` [rdma-next v1 REPOST 09/10] IB/mlx5: Support IB_SRQT_TM Leon Romanovsky
@ 2017-08-17 12:52   ` Leon Romanovsky
       [not found]     ` <20170817125212.3173-11-leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
  2017-08-24 19:56   ` [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support Doug Ledford
  2017-08-29  0:37   ` Doug Ledford
  11 siblings, 1 reply; 20+ messages in thread
From: Leon Romanovsky @ 2017-08-17 12:52 UTC (permalink / raw)
  To: Doug Ledford
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Leon Romanovsky, Artemy Kovalyov

From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Add document providing definitions of terms and core explanations
for tag matching (TM) protocols, eager and rendezvous,
TM application header, tag list manipulations and matching process.

Signed-off-by: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Signed-off-by: Leon Romanovsky <leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
 Documentation/infiniband/tag_matching.txt | 64 +++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)
 create mode 100644 Documentation/infiniband/tag_matching.txt

diff --git a/Documentation/infiniband/tag_matching.txt b/Documentation/infiniband/tag_matching.txt
new file mode 100644
index 000000000000..d2a3bf819226
--- /dev/null
+++ b/Documentation/infiniband/tag_matching.txt
@@ -0,0 +1,64 @@
+Tag matching logic
+
+The MPI standard defines a set of rules, known as tag-matching, for matching
+source send operations to destination receives.  The following parameters must
+match between the source and destination:
+*	Communicator
+*	User tag - wild card may be specified by the receiver
+*	Source rank – wild card may be specified by the receiver
+*	Destination rank – wild card may be specified by the receiver
+The ordering rules require that when more than one pair of send and receive
+message envelopes may match, the pair that includes the earliest posted-send
+and the earliest posted-receive is the pair that must be used to satisfy the
+matching operation. However, this doesn’t imply that tags are consumed in
+the order they are created; e.g., a later generated tag may be consumed if
+earlier tags can’t be used to satisfy the matching rules.
+
+When a message is sent from the sender to the receiver, the communication
+library may attempt to process the operation either after or before the
+corresponding matching receive is posted.  If a matching receive is posted,
+this is an expected message, otherwise it is called an unexpected message.
+Implementations frequently use different matching schemes for these two
+different matching instances.
+
+To keep MPI library memory footprint down, MPI implementations typically use
+two different protocols for this purpose:
+
+1.	The Eager protocol - the complete message is sent when the send is
+processed by the sender. A send completion is received on the send_cq,
+notifying that the buffer can be reused.
+
+2.	The Rendezvous Protocol - the sender sends the tag-matching header,
+and perhaps a portion of data when first notifying the receiver. When the
+corresponding buffer is posted, the responder will use the information from
+the header to initiate an RDMA READ operation directly to the matching buffer.
+A fin message needs to be received in order for the buffer to be reused.
+
+Tag matching implementation
+
+There are two types of matching objects used: the posted receive list and the
+unexpected message list. The application posts receive buffers into the posted
+receive list through calls to the MPI receive routines, and posts send
+messages using the MPI send routines. The head of the posted receive list may
+be maintained by the hardware, with the software expected to shadow this list.
+
+When a send is initiated and arrives at the receive side, if there is no
+pre-posted receive for this arriving message, it is passed to the software and
+placed in the unexpected message list. Otherwise the match is processed,
+including rendezvous processing, if appropriate, delivering the data to the
+specified receive buffer. This allows overlapping receive-side MPI tag
+matching with computation.
+
+When a receive message is posted, the communication library will first check
+the software unexpected message list for a matching message. If a match is
+found, data is delivered to the user buffer, using a software controlled
+protocol. The UCX implementation uses either an eager or rendezvous protocol,
+depending on data size. If no match is found, then if the entire pre-posted
+receive list is maintained by the hardware and there is space to add one more
+pre-posted receive to this list, the receive is passed to the hardware.
+Software is expected to shadow this list, to help with processing MPI cancel
+operations. In addition, because hardware and software are not expected to be
+tightly synchronized with respect to the tag-matching operation, this shadow
+list is used to detect the case where a pre-posted receive is passed to the
+hardware while the matching unexpected message is being passed from the
+hardware to the software.
--
2.14.1
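The matching flow described in tag_matching.txt can be sketched as a software-only model. Everything below is illustrative (the names, the eager/rendezvous size threshold, the dict-based envelopes); a real implementation such as UCX splits this work between the hardware list head and its software shadow:

```python
# Software-only sketch of the MPI tag-matching flow described above.
# All names and the EAGER_LIMIT threshold are illustrative assumptions.
from collections import deque

WILDCARD = object()   # the receiver may wildcard the tag or the source rank
EAGER_LIMIT = 1024    # bytes; eager vs. rendezvous cutoff (illustrative)

def envelope_matches(recv, msg):
    """A posted receive matches a message when the communicator, tag and
    source rank agree (wildcards match anything)."""
    return (recv["comm"] == msg["comm"]
            and recv["tag"] in (WILDCARD, msg["tag"])
            and recv["source"] in (WILDCARD, msg["source"]))

class TagMatcher:
    def __init__(self):
        self.posted = deque()      # posted-receive list, earliest first
        self.unexpected = deque()  # unexpected-message list, earliest first

    def on_message_arrival(self, msg):
        """A message (eager payload or rendezvous header) arrives."""
        for i, recv in enumerate(self.posted):
            if envelope_matches(recv, msg):
                del self.posted[i]        # earliest matching receive wins
                return self._deliver(recv, msg)
        self.unexpected.append(msg)       # no match: queue as unexpected
        return None

    def on_post_receive(self, recv):
        """The application posts a receive: check the unexpected list first."""
        for i, msg in enumerate(self.unexpected):
            if envelope_matches(recv, msg):
                del self.unexpected[i]
                return self._deliver(recv, msg)
        self.posted.append(recv)          # no match: add to the posted list
        return None

    def _deliver(self, recv, msg):
        if msg["size"] <= EAGER_LIMIT:
            return ("eager", msg["tag"])  # payload carried with the message
        # Rendezvous: header info drives an RDMA READ into the matched buffer.
        return ("rndv", msg["tag"])
```

In the offloaded scheme above, on_message_arrival with a non-empty posted list corresponds to the hardware-matched path, while the unexpected-list scan stays in software.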


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [rdma-next v1 REPOST 10/10] Documentation: Hardware tag matching
       [not found]     ` <20170817125212.3173-11-leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
@ 2017-08-21 16:04       ` Jason Gunthorpe
       [not found]         ` <20170821160437.GD4401-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
  0 siblings, 1 reply; 20+ messages in thread
From: Jason Gunthorpe @ 2017-08-21 16:04 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Doug Ledford, linux-rdma-u79uwXL29TY76Z2rM5mHXA, Artemy Kovalyov

On Thu, Aug 17, 2017 at 03:52:12PM +0300, Leon Romanovsky wrote:
> From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> 
> Add document providing definitions of terms and core explanations
> for tag matching (TM) protocols, eager and rendezvous,
> TM application header, tag list manipulations and matching process.

When adding a new common verbs it needs to be documented to a level
where someone else can implement it.

At least describe the new header format and how the matching process
must work..

Jason

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [rdma-next v1 REPOST 10/10] Documentation: Hardware tag matching
       [not found]         ` <20170821160437.GD4401-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2017-08-22  8:28           ` Leon Romanovsky
  0 siblings, 0 replies; 20+ messages in thread
From: Leon Romanovsky @ 2017-08-22  8:28 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Doug Ledford, linux-rdma-u79uwXL29TY76Z2rM5mHXA, Artemy Kovalyov


On Mon, Aug 21, 2017 at 10:04:37AM -0600, Jason Gunthorpe wrote:
> On Thu, Aug 17, 2017 at 03:52:12PM +0300, Leon Romanovsky wrote:
> > From: Artemy Kovalyov <artemyko-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> >
> > Add document providing definitions of terms and core explanations
> > for tag matching (TM) protocols, eager and rendezvous,
> > TM application header, tag list manipulations and matching process.
>
> When adding a new common verbs it needs to be documented to a level
> where someone else can implement it.
>
> At least describe the new header format and how the matching process
> must work..

The actual work and new verbs will be added to rdma-core and will be
documented there.

Thanks

>
> Jason


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support
       [not found] ` <20170817125212.3173-1-leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
                     ` (9 preceding siblings ...)
  2017-08-17 12:52   ` [rdma-next v1 REPOST 10/10] Documentation: Hardware tag matching Leon Romanovsky
@ 2017-08-24 19:56   ` Doug Ledford
       [not found]     ` <1503604595.78641.39.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2017-08-29  0:37   ` Doug Ledford
  11 siblings, 1 reply; 20+ messages in thread
From: Doug Ledford @ 2017-08-24 19:56 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Thu, 2017-08-17 at 15:52 +0300, Leon Romanovsky wrote:
> This patch series adds to Mellanox ConnectX HCA driver support of
> tag matching. It introduces new hardware object eXtended shared
> Receive
> Queue (XRQ), which follows SRQ semantics with addition of extended
> receive buffers topologies and offloads.
> 
> This series adds tag matching topology and rendezvouz offload.
> 
> Changelog:
> v0->v1:
>  * Rebased version, no change
> RFC->v0:
>  * Followed after RFC posted on the ML and OFVWG discussions
>  * Implements agreed verbs interface
>  * Rebased on top of latest version
>  * Adding feature description under Documentaion/infiniband
>  * In struct ib_srq_init_attr moved CQ outside XRC inner struct.
>  * Added max size of the information passed after the RNDV header
>  * Added hca_sq_owner HW flag for RNDV QPs
> 
> Thanks

I know in the previous thread on this submission that you thought it
didn't need the shared pull request, but I'm seeing this when I try to
build after pulling this patch series in:
In file included from ./include/linux/mlx5/driver.h:49:0,
                 from ./include/linux/mlx5/fs.h:36,
                 from drivers/infiniband/hw/mlx5/qp.c:37:
drivers/infiniband/hw/mlx5/qp.c: In function ‘create_qp_common’:
drivers/infiniband/hw/mlx5/qp.c:1734:6: error:
‘MLX5_QPC_OFFLOAD_TYPE_RNDV’ undeclared (first use in this function);
did you mean ‘MLX5_XRQC_OFFLOAD_RNDV’?
      MLX5_QPC_OFFLOAD_TYPE_RNDV);
      ^
./include/linux/mlx5/device.h:70:11: note: in definition of macro
‘MLX5_SET’
  u32 _v = v; \
           ^
drivers/infiniband/hw/mlx5/qp.c:1734:6: note: each undeclared
identifier is reported only once for each function it appears in
      MLX5_QPC_OFFLOAD_TYPE_RNDV);
      ^
./include/linux/mlx5/device.h:70:11: note: in definition of macro
‘MLX5_SET’
  u32 _v = v; \
           ^
./include/linux/mlx5/device.h:51:80: error: ‘struct mlx5_ifc_qpc_bits’
has no member named ‘offload_type’
 #define __mlx5_bit_off(typ, fld) ((unsigned)(unsigned
long)(&(__mlx5_nullp(typ)->fld)))
                                                                       
         ^
./include/linux/mlx5/device.h:52:34: note: in expansion of macro
‘__mlx5_bit_off’
 #define __mlx5_dw_off(typ, fld) (__mlx5_bit_off(typ, fld) / 32)
                                  ^~~~~~~~~~~~~~
./include/linux/mlx5/device.h:72:20: note: in expansion of macro
‘__mlx5_dw_off’
  *((__be32 *)(p) + __mlx5_dw_off(typ, fld)) = \
                    ^~~~~~~~~~~~~
drivers/infiniband/hw/mlx5/qp.c:1733:4: note: in expansion of macro
‘MLX5_SET’
    MLX5_SET(qpc, qpc, offload_type,
    ^~~~~~~~
In file included from ./include/linux/swab.h:4:0,
                 from
./include/uapi/linux/byteorder/little_endian.h:12,
                 from ./include/linux/byteorder/little_endian.h:4,
                 from ./arch/x86/include/uapi/asm/byteorder.h:4,
                 from ./include/asm-generic/bitops/le.h:5,
                 from ./arch/x86/include/asm/bitops.h:517,
                 from ./include/linux/bitops.h:36,
                 from ./include/linux/kernel.h:10,
                 from ./include/linux/list.h:8,
                 from ./include/linux/module.h:9,
                 from drivers/infiniband/hw/mlx5/qp.c:33:
./include/linux/mlx5/device.h:51:80: error: ‘struct mlx5_ifc_qpc_bits’
has no member named ‘offload_type’
 #define __mlx5_bit_off(typ, fld) ((unsigned)(unsigned
long)(&(__mlx5_nullp(typ)->fld)))

plus a lot more garbage after that.  I'm thinking it does require the
definition updates in the shared pull request.

-- 
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support
       [not found]     ` <1503604595.78641.39.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2017-08-24 20:10       ` Doug Ledford
       [not found]         ` <1503605434.78641.41.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 20+ messages in thread
From: Doug Ledford @ 2017-08-24 20:10 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Thu, 2017-08-24 at 15:56 -0400, Doug Ledford wrote:
> On Thu, 2017-08-17 at 15:52 +0300, Leon Romanovsky wrote:
> > This patch series adds to Mellanox ConnectX HCA driver support of
> > tag matching. It introduces new hardware object eXtended shared
> > Receive
> > Queue (XRQ), which follows SRQ semantics with addition of extended
> > receive buffers topologies and offloads.
> > 
> > This series adds tag matching topology and rendezvouz offload.
> > 
> > Changelog:
> > v0->v1:
> >  * Rebased version, no change
> > RFC->v0:
> >  * Followed after RFC posted on the ML and OFVWG discussions
> >  * Implements agreed verbs interface
> >  * Rebased on top of latest version
> >  * Adding feature description under Documentaion/infiniband
> >  * In struct ib_srq_init_attr moved CQ outside XRC inner struct.
> >  * Added max size of the information passed after the RNDV header
> >  * Added hca_sq_owner HW flag for RNDV QPs
> > 
> > Thanks
> 
> I know in the previous thread on this submission that you thought it
> didn't need the shared pull request, but I'm seeing this when I try
> to
> build after pulling this patch series in:
> In file included from ./include/linux/mlx5/driver.h:49:0,
>                  from ./include/linux/mlx5/fs.h:36,
>                  from drivers/infiniband/hw/mlx5/qp.c:37:
> drivers/infiniband/hw/mlx5/qp.c: In function ‘create_qp_common’:
> drivers/infiniband/hw/mlx5/qp.c:1734:6: error:
> ‘MLX5_QPC_OFFLOAD_TYPE_RNDV’ undeclared (first use in this function);
> did you mean ‘MLX5_XRQC_OFFLOAD_RNDV’?
>       MLX5_QPC_OFFLOAD_TYPE_RNDV);

Nevermind.  This is an ordering issue.  I took this before your 24
patch series and I think that is the source of the problem.

-- 
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support
       [not found]         ` <1503605434.78641.41.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2017-08-24 20:53           ` Doug Ledford
       [not found]             ` <1503608030.78641.57.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 20+ messages in thread
From: Doug Ledford @ 2017-08-24 20:53 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Thu, 2017-08-24 at 16:10 -0400, Doug Ledford wrote:
> On Thu, 2017-08-24 at 15:56 -0400, Doug Ledford wrote:
> > On Thu, 2017-08-17 at 15:52 +0300, Leon Romanovsky wrote:
> > > This patch series adds to Mellanox ConnectX HCA driver support of
> > > tag matching. It introduces new hardware object eXtended shared
> > > Receive
> > > Queue (XRQ), which follows SRQ semantics with addition of
> > > extended
> > > receive buffers topologies and offloads.
> > > 
> > > This series adds tag matching topology and rendezvouz offload.
> > > 
> > > Changelog:
> > > v0->v1:
> > >  * Rebased version, no change
> > > RFC->v0:
> > >  * Followed after RFC posted on the ML and OFVWG discussions
> > >  * Implements agreed verbs interface
> > >  * Rebased on top of latest version
> > >  * Adding feature description under Documentaion/infiniband
> > >  * In struct ib_srq_init_attr moved CQ outside XRC inner struct.
> > >  * Added max size of the information passed after the RNDV header
> > >  * Added hca_sq_owner HW flag for RNDV QPs
> > > 
> > > Thanks
> > 
> > I know in the previous thread on this submission that you thought
> > it
> > didn't need the shared pull request, but I'm seeing this when I try
> > to
> > build after pulling this patch series in:
> > In file included from ./include/linux/mlx5/driver.h:49:0,
> >                  from ./include/linux/mlx5/fs.h:36,
> >                  from drivers/infiniband/hw/mlx5/qp.c:37:
> > drivers/infiniband/hw/mlx5/qp.c: In function ‘create_qp_common’:
> > drivers/infiniband/hw/mlx5/qp.c:1734:6: error:
> > ‘MLX5_QPC_OFFLOAD_TYPE_RNDV’ undeclared (first use in this
> > function);
> > did you mean ‘MLX5_XRQC_OFFLOAD_RNDV’?
> >       MLX5_QPC_OFFLOAD_TYPE_RNDV);
> 
> Nevermind.  This is an ordering issue.  I took this before your 24
> patch series and I think that is the source of the problem.

Nope, that didn't fix it either.  Maybe it needs one of your other
patchsets?  I also tried it on the shared code base and it fails to
build there in the same way.

-- 
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support
       [not found]             ` <1503608030.78641.57.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2017-08-27  6:08               ` Leon Romanovsky
       [not found]                 ` <20170827060839.GO1724-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
  0 siblings, 1 reply; 20+ messages in thread
From: Leon Romanovsky @ 2017-08-27  6:08 UTC (permalink / raw)
  To: Doug Ledford; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Thu, Aug 24, 2017 at 04:53:50PM -0400, Doug Ledford wrote:
> On Thu, 2017-08-24 at 16:10 -0400, Doug Ledford wrote:
> > On Thu, 2017-08-24 at 15:56 -0400, Doug Ledford wrote:
> > > On Thu, 2017-08-17 at 15:52 +0300, Leon Romanovsky wrote:
> > > > This patch series adds to Mellanox ConnectX HCA driver support of
> > > > tag matching. It introduces new hardware object eXtended shared
> > > > Receive
> > > > Queue (XRQ), which follows SRQ semantics with addition of
> > > > extended
> > > > receive buffers topologies and offloads.
> > > >
> > > > This series adds tag matching topology and rendezvous offload.
> > > >
> > > > Changelog:
> > > > v0->v1:
> > > >  * Rebased version, no change
> > > > RFC->v0:
> > > >  * Followed after RFC posted on the ML and OFVWG discussions
> > > >  * Implements agreed verbs interface
> > > >  * Rebased on top of latest version
> > > >  * Adding feature description under Documentation/infiniband
> > > >  * In struct ib_srq_init_attr moved CQ outside XRC inner struct.
> > > >  * Added max size of the information passed after the RNDV header
> > > >  * Added hca_sq_owner HW flag for RNDV QPs
> > > >
> > > > Thanks
> > >
> > > I know in the previous thread on this submission that you thought
> > > it
> > > didn't need the shared pull request, but I'm seeing this when I try
> > > to
> > > build after pulling this patch series in:
> > > In file included from ./include/linux/mlx5/driver.h:49:0,
> > >                  from ./include/linux/mlx5/fs.h:36,
> > >                  from drivers/infiniband/hw/mlx5/qp.c:37:
> > > drivers/infiniband/hw/mlx5/qp.c: In function ‘create_qp_common’:
> > > drivers/infiniband/hw/mlx5/qp.c:1734:6: error:
> > > ‘MLX5_QPC_OFFLOAD_TYPE_RNDV’ undeclared (first use in this
> > > function);
> > > did you mean ‘MLX5_XRQC_OFFLOAD_RNDV’?
> > >       MLX5_QPC_OFFLOAD_TYPE_RNDV);
> >
> > Nevermind.  This is an ordering issue.  I took this before your 24
> > patch series and I think that is the source of the problem.
>
> > Nope, that didn't fix it either.  Maybe it needs one of your other
> > patchsets?  I also tried it on the shared code base and it fails to
> > build there in the same way.

I found the issue: in my "REPOST" I missed one patch.  I don't know how
it happened, but it is clearly my fault.  Sorry about that.

Original pull request - 11 patches:
[pull request][rdma-next v1 00/11] Hardware tag matching support
https://www.spinics.net/lists/linux-rdma/msg53357.html

Repost of that pull request, which is supposed to be the same - 10 patches:
[pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support
http://www.spinics.net/lists/linux-rdma/msg53522.html

‘MLX5_QPC_OFFLOAD_TYPE_RNDV’ is declared in the first patch of this
series, "net/mlx5: Update HW layout definitions", and the shared code
was prepared on a clean v4.13-rc4 tag.

The failure you are experiencing is due to the missing first patch:
 * b7a79bc53ce8 - net/mlx5: Update HW layout definitions (10 days ago)

See the difference between the mellanox-shared branch and my tag.

➜  linux-rdma git:(rdma-next) git l grdma/mellanox-shared
6bb33a5770aa (grdma/mellanox-shared) Documentation: Hardware tag matching
380529f2a59b IB/mlx5: Support IB_SRQT_TM
98d90bfb6e58 net/mlx5: Add XRQ support
aa0d027930d6 IB/mlx5: Fill XRQ capabilities
5d7ef472505a IB/uverbs: Expose XRQ capabilities
6061811b8759 IB/uverbs: Add new SRQ type IB_SRQT_TM
2d87bd3ae6f0 IB/uverbs: Add XRQ creation parameter to UAPI
633b67ed6758 IB/core: Add new SRQ type IB_SRQT_TM
e190f28d0630 IB/core: Separate CQ handle in SRQ context
8f3d761c09f6 IB/core: Add XRQ capabilities
f336076a90bb Merge tag 'mlx5-shared-2017-08-07' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux into k.o/mellanox

➜  linux-rdma git:(rdma-next) git pl --graph rdma-next-2017-08-17-1
* 89f4e752bf80 - (tag: rdma-next-2017-08-17-1) Documentation: Hardware tag matching (10 days ago)
* a6eb9232179a - IB/mlx5: Support IB_SRQT_TM (10 days ago)
* 7173547b6e90 - net/mlx5: Add XRQ support (10 days ago)
* 3d88f302a3fd - IB/mlx5: Fill XRQ capabilities (10 days ago)
* 216c76559abb - IB/uverbs: Expose XRQ capabilities (10 days ago)
* 9cff356f28d8 - IB/uverbs: Add new SRQ type IB_SRQT_TM (10 days ago)
* 50a9896131e6 - IB/uverbs: Add XRQ creation parameter to UAPI (10 days ago)
* 25b0c9ac0cf0 - IB/core: Add new SRQ type IB_SRQT_TM (10 days ago)
* a4f8b0bc8a67 - IB/core: Separate CQ handle in SRQ context (10 days ago)
* 1dee69e16539 - IB/core: Add XRQ capabilities (10 days ago)
* b7a79bc53ce8 - net/mlx5: Update HW layout definitions (10 days ago)
*   c5fa0c255ce4 - Merge tag 'mlx5-shared-2017-08-07' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux into HEAD (10 days ago)
|\
| * a8ffcc741acb - (tag: mlx5-shared-2017-08-07, ml/topic/mlx5-shared-4.14) net/mlx5: Increase the maximum flow counters supported (3 weeks ago)
| * 61690e09c3b4 - net/mlx5: Fix counter list hardware structure (3 weeks ago)
| * 97834eba7c19 - net/mlx5: Delay events till ib registration ends (3 weeks ago)
| * e80541ecabd5 - net/mlx5: Add CONFIG_MLX5_ESWITCH Kconfig (3 weeks ago)
| * eeb66cdb6826 - net/mlx5: Separate between E-Switch and MPFS (3 weeks ago)
| * a9f7705ffd66 - net/mlx5: Unify vport manager capability check (3 weeks ago)
| * 07c9f1e57839 - net/mlx5e: NIC netdev init flow cleanup (3 weeks ago)
| * 706b35834820 - net/mlx5e: Rearrange netdevice ops structures (3 weeks ago)
| * aae4e7a8bc44 - (tag: v4.13-rc4, backup/master) Linux 4.13-rc4 (3 weeks ago)
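
The missing-commit situation can also be confirmed mechanically: git
cherry compares two branches by patch content and marks with '+' every
commit whose patch has no equivalent on the other branch.  A
self-contained sketch on a throwaway repo (made-up file contents; the
commit subjects mirror the series above):

```shell
#!/bin/sh
# Finding a patch that exists on one branch but is missing from another,
# the situation between the 11-patch tag and the 10-patch repost branch.
# Throwaway repo with made-up file contents.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.email editor@example.com
git config user.name editor

git commit -q --allow-empty -m "base: v4.13-rc4"
git branch full    # stands in for the complete series
git branch repost  # stands in for the branch missing one patch

git checkout -q full
echo layout > layout.h
git add layout.h && git commit -q -m "net/mlx5: Update HW layout definitions"
echo caps > caps.h
git add caps.h && git commit -q -m "IB/core: Add XRQ capabilities"

git checkout -q repost
echo caps > caps.h   # same change as on 'full', so the patch-ids match
git add caps.h && git commit -q -m "IB/core: Add XRQ capabilities"

# '+' marks commits on 'full' with no patch-equivalent on 'repost':
git cherry -v repost full | grep '^+'
```

Here the only '+' line is the HW layout patch, i.e. exactly the commit
the repost dropped.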

Thanks

>
> --
> Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>     GPG KeyID: B826A3330E572FDD
>     Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD
>


* Re: [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support
       [not found]                 ` <20170827060839.GO1724-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
@ 2017-08-27  6:14                   ` Leon Romanovsky
  2017-08-29  0:03                   ` Doug Ledford
  1 sibling, 0 replies; 20+ messages in thread
From: Leon Romanovsky @ 2017-08-27  6:14 UTC (permalink / raw)
  To: Doug Ledford; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Sun, Aug 27, 2017 at 09:08:39AM +0300, Leon Romanovsky wrote:
> On Thu, Aug 24, 2017 at 04:53:50PM -0400, Doug Ledford wrote:
> > On Thu, 2017-08-24 at 16:10 -0400, Doug Ledford wrote:
> > > On Thu, 2017-08-24 at 15:56 -0400, Doug Ledford wrote:
> > > > On Thu, 2017-08-17 at 15:52 +0300, Leon Romanovsky wrote:
> > > > > This patch series adds to Mellanox ConnectX HCA driver support of
> > > > > tag matching. It introduces new hardware object eXtended shared
> > > > > Receive
> > > > > Queue (XRQ), which follows SRQ semantics with addition of
> > > > > extended
> > > > > receive buffers topologies and offloads.
> > > > >
> > > > > This series adds tag matching topology and rendezvous offload.
> > > > >
> > > > > Changelog:
> > > > > v0->v1:
> > > > >  * Rebased version, no change
> > > > > RFC->v0:
> > > > >  * Followed after RFC posted on the ML and OFVWG discussions
> > > > >  * Implements agreed verbs interface
> > > > >  * Rebased on top of latest version
> > > > >  * Adding feature description under Documentation/infiniband
> > > > >  * In struct ib_srq_init_attr moved CQ outside XRC inner struct.
> > > > >  * Added max size of the information passed after the RNDV header
> > > > >  * Added hca_sq_owner HW flag for RNDV QPs
> > > > >
> > > > > Thanks
> > > >
> > > > I know in the previous thread on this submission that you thought
> > > > it
> > > > didn't need the shared pull request, but I'm seeing this when I try
> > > > to
> > > > build after pulling this patch series in:
> > > > In file included from ./include/linux/mlx5/driver.h:49:0,
> > > >                  from ./include/linux/mlx5/fs.h:36,
> > > >                  from drivers/infiniband/hw/mlx5/qp.c:37:
> > > > drivers/infiniband/hw/mlx5/qp.c: In function ‘create_qp_common’:
> > > > drivers/infiniband/hw/mlx5/qp.c:1734:6: error:
> > > > ‘MLX5_QPC_OFFLOAD_TYPE_RNDV’ undeclared (first use in this
> > > > function);
> > > > did you mean ‘MLX5_XRQC_OFFLOAD_RNDV’?
> > > >       MLX5_QPC_OFFLOAD_TYPE_RNDV);
> > >
> > > Nevermind.  This is an ordering issue.  I took this before your 24
> > > patch series and I think that is the source of the problem.
> >
> > Nope, that didn't fix it either.  Maybe it needs one of your other
> > patchsets?  I also tried it on the shared code base an it fails to
> > build there in the same way.
>
> I found the issue: in my "REPOST" I missed one patch.  I don't know how it
> happened, but it is clearly my fault.  Sorry about that.
>
> Original pull request - 11 patches:
> [pull request][rdma-next v1 00/11] Hardware tag matching support
> https://www.spinics.net/lists/linux-rdma/msg53357.html
>
> Repost of that pull request, which is supposed to be the same - 10 patches:
> [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support
> http://www.spinics.net/lists/linux-rdma/msg53522.html
>
> ‘MLX5_QPC_OFFLOAD_TYPE_RNDV’ is declared in the first patch of this
> series, "net/mlx5: Update HW layout definitions", and the shared code
> was prepared on a clean v4.13-rc4 tag.
>
> The failure you are experiencing is due to the missing first patch:
>  * b7a79bc53ce8 - net/mlx5: Update HW layout definitions (10 days ago)
>
> See the difference between the mellanox-shared branch and my tag.
>
> ➜  linux-rdma git:(rdma-next) git l grdma/mellanox-shared
> 6bb33a5770aa (grdma/mellanox-shared) Documentation: Hardware tag matching
> 380529f2a59b IB/mlx5: Support IB_SRQT_TM
> 98d90bfb6e58 net/mlx5: Add XRQ support
> aa0d027930d6 IB/mlx5: Fill XRQ capabilities
> 5d7ef472505a IB/uverbs: Expose XRQ capabilities
> 6061811b8759 IB/uverbs: Add new SRQ type IB_SRQT_TM
> 2d87bd3ae6f0 IB/uverbs: Add XRQ creation parameter to UAPI
> 633b67ed6758 IB/core: Add new SRQ type IB_SRQT_TM
> e190f28d0630 IB/core: Separate CQ handle in SRQ context
> 8f3d761c09f6 IB/core: Add XRQ capabilities
> f336076a90bb Merge tag 'mlx5-shared-2017-08-07' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux into k.o/mellanox
>
> ➜  linux-rdma git:(rdma-next) git pl --graph rdma-next-2017-08-17-1
> * 89f4e752bf80 - (tag: rdma-next-2017-08-17-1) Documentation: Hardware tag matching (10 days ago)
> * a6eb9232179a - IB/mlx5: Support IB_SRQT_TM (10 days ago)
> * 7173547b6e90 - net/mlx5: Add XRQ support (10 days ago)
> * 3d88f302a3fd - IB/mlx5: Fill XRQ capabilities (10 days ago)
> * 216c76559abb - IB/uverbs: Expose XRQ capabilities (10 days ago)
> * 9cff356f28d8 - IB/uverbs: Add new SRQ type IB_SRQT_TM (10 days ago)
> * 50a9896131e6 - IB/uverbs: Add XRQ creation parameter to UAPI (10 days ago)
> * 25b0c9ac0cf0 - IB/core: Add new SRQ type IB_SRQT_TM (10 days ago)
> * a4f8b0bc8a67 - IB/core: Separate CQ handle in SRQ context (10 days ago)
> * 1dee69e16539 - IB/core: Add XRQ capabilities (10 days ago)
> * b7a79bc53ce8 - net/mlx5: Update HW layout definitions (10 days ago)
> *   c5fa0c255ce4 - Merge tag 'mlx5-shared-2017-08-07' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux into HEAD (10 days ago)
> |\
> | * a8ffcc741acb - (tag: mlx5-shared-2017-08-07, ml/topic/mlx5-shared-4.14) net/mlx5: Increase the maximum flow counters supported (3 weeks ago)
> | * 61690e09c3b4 - net/mlx5: Fix counter list hardware structure (3 weeks ago)
> | * 97834eba7c19 - net/mlx5: Delay events till ib registration ends (3 weeks ago)
> | * e80541ecabd5 - net/mlx5: Add CONFIG_MLX5_ESWITCH Kconfig (3 weeks ago)
> | * eeb66cdb6826 - net/mlx5: Separate between E-Switch and MPFS (3 weeks ago)
> | * a9f7705ffd66 - net/mlx5: Unify vport manager capability check (3 weeks ago)
> | * 07c9f1e57839 - net/mlx5e: NIC netdev init flow cleanup (3 weeks ago)
> | * 706b35834820 - net/mlx5e: Rearrange netdevice ops structures (3 weeks ago)
> | * aae4e7a8bc44 - (tag: v4.13-rc4, backup/master) Linux 4.13-rc4 (3 weeks ago)

The missing patch is here: https://patchwork.kernel.org/patch/9901361/

Thanks

>
> Thanks
>
> >
> > --
> > Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> >     GPG KeyID: B826A3330E572FDD
> >     Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD
> >




* Re: [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support
       [not found]                 ` <20170827060839.GO1724-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
  2017-08-27  6:14                   ` Leon Romanovsky
@ 2017-08-29  0:03                   ` Doug Ledford
  1 sibling, 0 replies; 20+ messages in thread
From: Doug Ledford @ 2017-08-29  0:03 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Sun, 2017-08-27 at 09:08 +0300, Leon Romanovsky wrote:
> On Thu, Aug 24, 2017 at 04:53:50PM -0400, Doug Ledford wrote:
> > On Thu, 2017-08-24 at 16:10 -0400, Doug Ledford wrote:
> > > On Thu, 2017-08-24 at 15:56 -0400, Doug Ledford wrote:
> > > > On Thu, 2017-08-17 at 15:52 +0300, Leon Romanovsky wrote:
> > > > > This patch series adds to Mellanox ConnectX HCA driver
> > > > > support of
> > > > > tag matching. It introduces new hardware object eXtended
> > > > > shared
> > > > > Receive
> > > > > Queue (XRQ), which follows SRQ semantics with addition of
> > > > > extended
> > > > > receive buffers topologies and offloads.
> > > > > 
> > > > > This series adds tag matching topology and rendezvous
> > > > > offload.
> > > > > 
> > > > > Changelog:
> > > > > v0->v1:
> > > > >  * Rebased version, no change
> > > > > RFC->v0:
> > > > >  * Followed after RFC posted on the ML and OFVWG discussions
> > > > >  * Implements agreed verbs interface
> > > > >  * Rebased on top of latest version
> > > > >  * Adding feature description under Documentation/infiniband
> > > > >  * In struct ib_srq_init_attr moved CQ outside XRC inner
> > > > > struct.
> > > > >  * Added max size of the information passed after the RNDV
> > > > > header
> > > > >  * Added hca_sq_owner HW flag for RNDV QPs
> > > > > 
> > > > > Thanks
> > > > 
> > > > I know in the previous thread on this submission that you
> > > > thought
> > > > it
> > > > didn't need the shared pull request, but I'm seeing this when I
> > > > try
> > > > to
> > > > build after pulling this patch series in:
> > > > In file included from ./include/linux/mlx5/driver.h:49:0,
> > > >                  from ./include/linux/mlx5/fs.h:36,
> > > >                  from drivers/infiniband/hw/mlx5/qp.c:37:
> > > > drivers/infiniband/hw/mlx5/qp.c: In function
> > > > ‘create_qp_common’:
> > > > drivers/infiniband/hw/mlx5/qp.c:1734:6: error:
> > > > ‘MLX5_QPC_OFFLOAD_TYPE_RNDV’ undeclared (first use in this
> > > > function);
> > > > did you mean ‘MLX5_XRQC_OFFLOAD_RNDV’?
> > > >       MLX5_QPC_OFFLOAD_TYPE_RNDV);
> > > 
> > > Nevermind.  This is an ordering issue.  I took this before your
> > > 24
> > > patch series and I think that is the source of the problem.
> > 
> > Nope, that didn't fix it either.  Maybe it needs one of your other
> > patchsets?  I also tried it on the shared code base and it fails to
> > build there in the same way.
> 
> I found the issue: in my "REPOST" I missed one patch.  I don't know how
> it happened, but it is clearly my fault.  Sorry about that.

No worries.  It appears to be working now.  I pulled it in and once it
passes the compile test I'll declare it added.

> See the difference between the mellanox-shared branch and my tag.
> 
> ➜  linux-rdma git:(rdma-next) git l grdma/mellanox-shared

Try the command as git l --topo-order grdma/mellanox-shared and see if
it makes a difference.  If the normal sort order, which can differ from
topo order, shows the merge out of topological order, then it's easy to
use the wrong starting hash for a git send-email command.
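
[Editor's aside: 'git l' / 'git pl' in this thread are local aliases, not
stock git commands.  A plausible reconstruction, plus a topo-ordered
variant, is sketched below; the alias bodies are an assumption, the real
definitions may differ.]

```shell
# 'git l' is a local alias, not stock git; a plausible reconstruction
# (an assumption -- the real alias may differ):
git config --global alias.l "log --oneline --decorate"
# Topo-ordered variant: with --topo-order a parent commit is never
# printed before any of its descendants, so the commits brought in by a
# merge stay grouped and the series base is easier to pick out for
# git format-patch / git send-email.
git config --global alias.lt "log --oneline --decorate --topo-order"
```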

> 6bb33a5770aa (grdma/mellanox-shared) Documentation: Hardware tag
> matching
> 380529f2a59b IB/mlx5: Support IB_SRQT_TM
> 98d90bfb6e58 net/mlx5: Add XRQ support
> aa0d027930d6 IB/mlx5: Fill XRQ capabilities
> 5d7ef472505a IB/uverbs: Expose XRQ capabilities
> 6061811b8759 IB/uverbs: Add new SRQ type IB_SRQT_TM
> 2d87bd3ae6f0 IB/uverbs: Add XRQ creation parameter to UAPI
> 633b67ed6758 IB/core: Add new SRQ type IB_SRQT_TM
> e190f28d0630 IB/core: Separate CQ handle in SRQ context
> 8f3d761c09f6 IB/core: Add XRQ capabilities
> f336076a90bb Merge tag 'mlx5-shared-2017-08-07' of
> git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux into
> k.o/mellanox
> 
> ➜  linux-rdma git:(rdma-next) git pl --graph rdma-next-2017-08-17-1
> * 89f4e752bf80 - (tag: rdma-next-2017-08-17-1) Documentation:
> Hardware tag matching (10 days ago)
> * a6eb9232179a - IB/mlx5: Support IB_SRQT_TM (10 days ago)
> * 7173547b6e90 - net/mlx5: Add XRQ support (10 days ago)
> * 3d88f302a3fd - IB/mlx5: Fill XRQ capabilities (10 days ago)
> * 216c76559abb - IB/uverbs: Expose XRQ capabilities (10 days ago)
> * 9cff356f28d8 - IB/uverbs: Add new SRQ type IB_SRQT_TM (10 days ago)
> * 50a9896131e6 - IB/uverbs: Add XRQ creation parameter to UAPI (10
> days ago)
> * 25b0c9ac0cf0 - IB/core: Add new SRQ type IB_SRQT_TM (10 days ago)
> * a4f8b0bc8a67 - IB/core: Separate CQ handle in SRQ context (10 days
> ago)
> * 1dee69e16539 - IB/core: Add XRQ capabilities (10 days ago)
> * b7a79bc53ce8 - net/mlx5: Update HW layout definitions (10 days ago)
> *   c5fa0c255ce4 - Merge tag 'mlx5-shared-2017-08-07' of
> git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux into
> HEAD (10 days ago)
> |\
> | * a8ffcc741acb - (tag: mlx5-shared-2017-08-07, ml/topic/mlx5-shared-4.14) net/mlx5: Increase the maximum flow counters supported (3 weeks ago)
> | * 61690e09c3b4 - net/mlx5: Fix counter list hardware structure (3 weeks ago)
> | * 97834eba7c19 - net/mlx5: Delay events till ib registration ends (3 weeks ago)
> | * e80541ecabd5 - net/mlx5: Add CONFIG_MLX5_ESWITCH Kconfig (3 weeks ago)
> | * eeb66cdb6826 - net/mlx5: Separate between E-Switch and MPFS (3 weeks ago)
> | * a9f7705ffd66 - net/mlx5: Unify vport manager capability check (3 weeks ago)
> | * 07c9f1e57839 - net/mlx5e: NIC netdev init flow cleanup (3 weeks ago)
> | * 706b35834820 - net/mlx5e: Rearrange netdevice ops structures (3 weeks ago)
> | * aae4e7a8bc44 - (tag: v4.13-rc4, backup/master) Linux 4.13-rc4 (3 weeks ago)
> 
> Thanks
> 
> > 
> > --
> > Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> >     GPG KeyID: B826A3330E572FDD
> >     Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57
> > 2FDD
> > 
-- 
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD


* Re: [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support
       [not found] ` <20170817125212.3173-1-leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
                     ` (10 preceding siblings ...)
  2017-08-24 19:56   ` [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support Doug Ledford
@ 2017-08-29  0:37   ` Doug Ledford
  11 siblings, 0 replies; 20+ messages in thread
From: Doug Ledford @ 2017-08-29  0:37 UTC (permalink / raw)
  To: Leon Romanovsky; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Thu, 2017-08-17 at 15:52 +0300, Leon Romanovsky wrote:
> This patch series adds to Mellanox ConnectX HCA driver support of
> tag matching. It introduces new hardware object eXtended shared
> Receive
> Queue (XRQ), which follows SRQ semantics with addition of extended
> receive buffers topologies and offloads.
> 
> This series adds tag matching topology and rendezvous offload.
> 
> Changelog:
> v0->v1:
>  * Rebased version, no change
> RFC->v0:
>  * Followed after RFC posted on the ML and OFVWG discussions
>  * Implements agreed verbs interface
>  * Rebased on top of latest version
>  * Adding feature description under Documentation/infiniband
>  * In struct ib_srq_init_attr moved CQ outside XRC inner struct.
>  * Added max size of the information passed after the RNDV header
>  * Added hca_sq_owner HW flag for RNDV QPs
> 
> Thanks

Thanks, applied.

-- 
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
    GPG KeyID: B826A3330E572FDD
    Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD


end of thread, other threads:[~2017-08-29  0:37 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-17 12:52 [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support Leon Romanovsky
     [not found] ` <20170817125212.3173-1-leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2017-08-17 12:52   ` [rdma-next v1 REPOST 01/10] IB/core: Add XRQ capabilities Leon Romanovsky
2017-08-17 12:52   ` [rdma-next v1 REPOST 02/10] IB/core: Separate CQ handle in SRQ context Leon Romanovsky
2017-08-17 12:52   ` [rdma-next v1 REPOST 03/10] IB/core: Add new SRQ type IB_SRQT_TM Leon Romanovsky
2017-08-17 12:52   ` [rdma-next v1 REPOST 04/10] IB/uverbs: Add XRQ creation parameter to UAPI Leon Romanovsky
2017-08-17 12:52   ` [rdma-next v1 REPOST 05/10] IB/uverbs: Add new SRQ type IB_SRQT_TM Leon Romanovsky
2017-08-17 12:52   ` [rdma-next v1 REPOST 06/10] IB/uverbs: Expose XRQ capabilities Leon Romanovsky
2017-08-17 12:52   ` [rdma-next v1 REPOST 07/10] IB/mlx5: Fill " Leon Romanovsky
2017-08-17 12:52   ` [rdma-next v1 REPOST 08/10] net/mlx5: Add XRQ support Leon Romanovsky
2017-08-17 12:52   ` [rdma-next v1 REPOST 09/10] IB/mlx5: Support IB_SRQT_TM Leon Romanovsky
2017-08-17 12:52   ` [rdma-next v1 REPOST 10/10] Documentation: Hardware tag matching Leon Romanovsky
     [not found]     ` <20170817125212.3173-11-leon-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2017-08-21 16:04       ` Jason Gunthorpe
     [not found]         ` <20170821160437.GD4401-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2017-08-22  8:28           ` Leon Romanovsky
2017-08-24 19:56   ` [pull request][rdma-next v1 REPOST 00/10] Hardware tag matching support Doug Ledford
     [not found]     ` <1503604595.78641.39.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2017-08-24 20:10       ` Doug Ledford
     [not found]         ` <1503605434.78641.41.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2017-08-24 20:53           ` Doug Ledford
     [not found]             ` <1503608030.78641.57.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2017-08-27  6:08               ` Leon Romanovsky
     [not found]                 ` <20170827060839.GO1724-U/DQcQFIOTAAJjI8aNfphQ@public.gmane.org>
2017-08-27  6:14                   ` Leon Romanovsky
2017-08-29  0:03                   ` Doug Ledford
2017-08-29  0:37   ` Doug Ledford
