* [PATCH 0/8] net/mlx5: various fixes
@ 2016-09-07  7:02 Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 1/8] net/mlx5: fix inconsistent return value in Flow Director Nelio Laranjeiro
                   ` (16 more replies)
  0 siblings, 17 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-07  7:02 UTC (permalink / raw)
  To: dev

 - Flow director
 - Rx Capabilities
 - Inline

Adrien Mazarguil (1):
  net/mlx5: fix Rx VLAN offload capability report

Nelio Laranjeiro (3):
  net/mlx5: force inline for completion function
  net/mlx5: re-factorize functions
  net/mlx5: fix inline logic

Raslan Darawsheh (1):
  net/mlx5: fix removing VLAN filter

Yaacov Hazan (3):
  net/mlx5: fix inconsistent return value in Flow Director
  net/mlx5: refactor allocation of flow director queues
  net/mlx5: fix support for flow director drop mode

 doc/guides/nics/mlx5.rst       |   3 +-
 drivers/net/mlx5/mlx5.h        |   2 +
 drivers/net/mlx5/mlx5_ethdev.c |   7 +-
 drivers/net/mlx5/mlx5_fdir.c   | 270 +++++++++++++++-------
 drivers/net/mlx5/mlx5_rxq.c    |   2 +
 drivers/net/mlx5/mlx5_rxtx.c   | 499 +++++++++--------------------------------
 drivers/net/mlx5/mlx5_rxtx.h   |   7 +-
 drivers/net/mlx5/mlx5_txq.c    |   9 +-
 drivers/net/mlx5/mlx5_vlan.c   |   3 +-
 9 files changed, 318 insertions(+), 484 deletions(-)

-- 
2.1.4

* [PATCH 1/8] net/mlx5: fix inconsistent return value in Flow Director
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
@ 2016-09-07  7:02 ` Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 2/8] net/mlx5: fix Rx VLAN offload capability report Nelio Laranjeiro
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-07  7:02 UTC (permalink / raw)
  To: dev; +Cc: Yaacov Hazan

From: Yaacov Hazan <yaacovh@mellanox.com>

The return value in DPDK is a negative errno on failure.
Since internal functions in the mlx5 driver return positive
errno values, they need to be negated before being returned
to the DPDK layer.
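
For illustration, a minimal sketch of this convention (hypothetical
function names, not part of the patch):

  #include <errno.h>

  /* Internal mlx5 helpers return a positive errno value on failure... */
  static int
  example_internal_op(void)
  {
          return EINVAL;
  }

  /* ...so the ethdev entry point negates it before returning to DPDK. */
  int
  example_filter_ctrl(void)
  {
          int ret = example_internal_op();

          return -ret; /* negative errno, as the DPDK API expects */
  }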

Fixes: 76f5c99 ("mlx5: support flow director")

Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
---
 drivers/net/mlx5/mlx5_fdir.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_fdir.c b/drivers/net/mlx5/mlx5_fdir.c
index 73eb00e..8207573 100644
--- a/drivers/net/mlx5/mlx5_fdir.c
+++ b/drivers/net/mlx5/mlx5_fdir.c
@@ -955,7 +955,7 @@ mlx5_dev_filter_ctrl(struct rte_eth_dev *dev,
 		     enum rte_filter_op filter_op,
 		     void *arg)
 {
-	int ret = -EINVAL;
+	int ret = EINVAL;
 	struct priv *priv = dev->data->dev_private;
 
 	switch (filter_type) {
@@ -970,5 +970,5 @@ mlx5_dev_filter_ctrl(struct rte_eth_dev *dev,
 		break;
 	}
 
-	return ret;
+	return -ret;
 }
-- 
2.1.4

* [PATCH 2/8] net/mlx5: fix Rx VLAN offload capability report
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 1/8] net/mlx5: fix inconsistent return value in Flow Director Nelio Laranjeiro
@ 2016-09-07  7:02 ` Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 3/8] net/mlx5: fix removing VLAN filter Nelio Laranjeiro
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-07  7:02 UTC (permalink / raw)
  To: dev; +Cc: Adrien Mazarguil

From: Adrien Mazarguil <adrien.mazarguil@6wind.com>

This capability is implemented but not reported.

Fixes: f3db9489188a ("mlx5: support Rx VLAN stripping")

Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
 drivers/net/mlx5/mlx5_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 130e15d..47f323e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -583,7 +583,8 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 		 (DEV_RX_OFFLOAD_IPV4_CKSUM |
 		  DEV_RX_OFFLOAD_UDP_CKSUM |
 		  DEV_RX_OFFLOAD_TCP_CKSUM) :
-		 0);
+		 0) |
+		(priv->hw_vlan_strip ? DEV_RX_OFFLOAD_VLAN_STRIP : 0);
 	if (!priv->mps)
 		info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
 	if (priv->hw_csum)
-- 
2.1.4

* [PATCH 3/8] net/mlx5: fix removing VLAN filter
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 1/8] net/mlx5: fix inconsistent return value in Flow Director Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 2/8] net/mlx5: fix Rx VLAN offload capability report Nelio Laranjeiro
@ 2016-09-07  7:02 ` Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 4/8] net/mlx5: refactor allocation of flow director queues Nelio Laranjeiro
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-07  7:02 UTC (permalink / raw)
  To: dev; +Cc: Raslan Darawsheh

From: Raslan Darawsheh <rdarawsheh@asaltech.com>

memmove was moving a number of bytes equal to the number of elements
after index i, while it should move that number of elements multiplied
by the size of each element.
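
As a standalone illustration of the corrected idiom (toy array helper,
not part of the patch):

  #include <stdint.h>
  #include <string.h>

  /* Drop entry i from an array currently holding n uint16_t elements. */
  static void
  remove_entry(uint16_t *arr, unsigned int n, unsigned int i)
  {
          /* (n - 1 - i) entries follow arr[i + 1]; memmove() takes bytes. */
          memmove(&arr[i], &arr[i + 1], sizeof(arr[i]) * (n - 1 - i));
          arr[n - 1] = 0;
  }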

Fixes: e9086978 ("mlx5: support VLAN filtering")

Signed-off-by: Raslan Darawsheh <rdarawsheh@asaltech.com>
---
 drivers/net/mlx5/mlx5_vlan.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 4719e69..fb730e5 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -87,7 +87,8 @@ vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 		--priv->vlan_filter_n;
 		memmove(&priv->vlan_filter[i],
 			&priv->vlan_filter[i + 1],
-			priv->vlan_filter_n - i);
+			sizeof(priv->vlan_filter[i]) *
+			(priv->vlan_filter_n - i));
 		priv->vlan_filter[priv->vlan_filter_n] = 0;
 	} else {
 		assert(i == priv->vlan_filter_n);
-- 
2.1.4

* [PATCH 4/8] net/mlx5: refactor allocation of flow director queues
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (2 preceding siblings ...)
  2016-09-07  7:02 ` [PATCH 3/8] net/mlx5: fix removing VLAN filter Nelio Laranjeiro
@ 2016-09-07  7:02 ` Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 5/8] net/mlx5: fix support for flow director drop mode Nelio Laranjeiro
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-07  7:02 UTC (permalink / raw)
  To: dev; +Cc: Yaacov Hazan, Adrien Mazarguil

From: Yaacov Hazan <yaacovh@mellanox.com>

This is done to prepare support for drop queues, which are not related to
existing RX queues and need to be managed separately.

Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
 drivers/net/mlx5/mlx5.h      |   1 +
 drivers/net/mlx5/mlx5_fdir.c | 229 ++++++++++++++++++++++++++++---------------
 drivers/net/mlx5/mlx5_rxq.c  |   2 +
 drivers/net/mlx5/mlx5_rxtx.h |   4 +-
 4 files changed, 156 insertions(+), 80 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3a86609..fa78623 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -257,6 +257,7 @@ void mlx5_dev_stop(struct rte_eth_dev *);
 
 /* mlx5_fdir.c */
 
+void priv_fdir_queue_destroy(struct priv *, struct fdir_queue *);
 int fdir_init_filters_list(struct priv *);
 void priv_fdir_delete_filters_list(struct priv *);
 void priv_fdir_disable(struct priv *);
diff --git a/drivers/net/mlx5/mlx5_fdir.c b/drivers/net/mlx5/mlx5_fdir.c
index 8207573..802669b 100644
--- a/drivers/net/mlx5/mlx5_fdir.c
+++ b/drivers/net/mlx5/mlx5_fdir.c
@@ -400,6 +400,145 @@ create_flow:
 }
 
 /**
+ * Destroy a flow director queue.
+ *
+ * @param fdir_queue
+ *   Flow director queue to be destroyed.
+ */
+void
+priv_fdir_queue_destroy(struct priv *priv, struct fdir_queue *fdir_queue)
+{
+	struct mlx5_fdir_filter *fdir_filter;
+
+	/* Disable filter flows still applying to this queue. */
+	LIST_FOREACH(fdir_filter, priv->fdir_filter_list, next) {
+		unsigned int idx = fdir_filter->queue;
+		struct rxq_ctrl *rxq_ctrl =
+			container_of((*priv->rxqs)[idx], struct rxq_ctrl, rxq);
+
+		assert(idx < priv->rxqs_n);
+		if (fdir_queue == rxq_ctrl->fdir_queue &&
+		    fdir_filter->flow != NULL) {
+			claim_zero(ibv_exp_destroy_flow(fdir_filter->flow));
+			fdir_filter->flow = NULL;
+		}
+	}
+	assert(fdir_queue->qp);
+	claim_zero(ibv_destroy_qp(fdir_queue->qp));
+	assert(fdir_queue->ind_table);
+	claim_zero(ibv_exp_destroy_rwq_ind_table(fdir_queue->ind_table));
+	if (fdir_queue->wq)
+		claim_zero(ibv_exp_destroy_wq(fdir_queue->wq));
+	if (fdir_queue->cq)
+		claim_zero(ibv_destroy_cq(fdir_queue->cq));
+#ifndef NDEBUG
+	memset(fdir_queue, 0x2a, sizeof(*fdir_queue));
+#endif
+	rte_free(fdir_queue);
+}
+
+/**
+ * Create a flow director queue.
+ *
+ * @param priv
+ *   Private structure.
+ * @param wq
+ *   Work queue to route matched packets to, NULL if one needs to
+ *   be created.
+ *
+ * @return
+ *   Related flow director queue on success, NULL otherwise.
+ */
+static struct fdir_queue *
+priv_fdir_queue_create(struct priv *priv, struct ibv_exp_wq *wq,
+		       unsigned int socket)
+{
+	struct fdir_queue *fdir_queue;
+
+	fdir_queue = rte_calloc_socket(__func__, 1, sizeof(*fdir_queue),
+				       0, socket);
+	if (!fdir_queue) {
+		ERROR("cannot allocate flow director queue");
+		return NULL;
+	}
+	assert(priv->pd);
+	assert(priv->ctx);
+	if (!wq) {
+		fdir_queue->cq = ibv_exp_create_cq(
+			priv->ctx, 1, NULL, NULL, 0,
+			&(struct ibv_exp_cq_init_attr){
+				.comp_mask = 0,
+			});
+		if (!fdir_queue->cq) {
+			ERROR("cannot create flow director CQ");
+			goto error;
+		}
+		fdir_queue->wq = ibv_exp_create_wq(
+			priv->ctx,
+			&(struct ibv_exp_wq_init_attr){
+				.wq_type = IBV_EXP_WQT_RQ,
+				.max_recv_wr = 1,
+				.max_recv_sge = 1,
+				.pd = priv->pd,
+				.cq = fdir_queue->cq,
+			});
+		if (!fdir_queue->wq) {
+			ERROR("cannot create flow director WQ");
+			goto error;
+		}
+		wq = fdir_queue->wq;
+	}
+	fdir_queue->ind_table = ibv_exp_create_rwq_ind_table(
+		priv->ctx,
+		&(struct ibv_exp_rwq_ind_table_init_attr){
+			.pd = priv->pd,
+			.log_ind_tbl_size = 0,
+			.ind_tbl = &wq,
+			.comp_mask = 0,
+		});
+	if (!fdir_queue->ind_table) {
+		ERROR("cannot create flow director indirection table");
+		goto error;
+	}
+	fdir_queue->qp = ibv_exp_create_qp(
+		priv->ctx,
+		&(struct ibv_exp_qp_init_attr){
+			.qp_type = IBV_QPT_RAW_PACKET,
+			.comp_mask =
+				IBV_EXP_QP_INIT_ATTR_PD |
+				IBV_EXP_QP_INIT_ATTR_PORT |
+				IBV_EXP_QP_INIT_ATTR_RX_HASH,
+			.pd = priv->pd,
+			.rx_hash_conf = &(struct ibv_exp_rx_hash_conf){
+				.rx_hash_function =
+					IBV_EXP_RX_HASH_FUNC_TOEPLITZ,
+				.rx_hash_key_len = rss_hash_default_key_len,
+				.rx_hash_key = rss_hash_default_key,
+				.rx_hash_fields_mask = 0,
+				.rwq_ind_tbl = fdir_queue->ind_table,
+			},
+			.port_num = priv->port,
+		});
+	if (!fdir_queue->qp) {
+		ERROR("cannot create flow director hash RX QP");
+		goto error;
+	}
+	return fdir_queue;
+error:
+	assert(fdir_queue);
+	assert(!fdir_queue->qp);
+	if (fdir_queue->ind_table)
+		claim_zero(ibv_exp_destroy_rwq_ind_table
+			   (fdir_queue->ind_table));
+	if (fdir_queue->wq)
+		claim_zero(ibv_exp_destroy_wq(fdir_queue->wq));
+	if (fdir_queue->cq)
+		claim_zero(ibv_destroy_cq(fdir_queue->cq));
+	rte_free(fdir_queue);
+	return NULL;
+}
+
+/**
  * Get flow director queue for a specific RX queue, create it in case
  * it does not exist.
  *
@@ -416,74 +555,15 @@ priv_get_fdir_queue(struct priv *priv, uint16_t idx)
 {
 	struct rxq_ctrl *rxq_ctrl =
 		container_of((*priv->rxqs)[idx], struct rxq_ctrl, rxq);
-	struct fdir_queue *fdir_queue = &rxq_ctrl->fdir_queue;
-	struct ibv_exp_rwq_ind_table *ind_table = NULL;
-	struct ibv_qp *qp = NULL;
-	struct ibv_exp_rwq_ind_table_init_attr ind_init_attr;
-	struct ibv_exp_rx_hash_conf hash_conf;
-	struct ibv_exp_qp_init_attr qp_init_attr;
-	int err = 0;
-
-	/* Return immediately if it has already been created. */
-	if (fdir_queue->qp != NULL)
-		return fdir_queue;
-
-	ind_init_attr = (struct ibv_exp_rwq_ind_table_init_attr){
-		.pd = priv->pd,
-		.log_ind_tbl_size = 0,
-		.ind_tbl = &rxq_ctrl->wq,
-		.comp_mask = 0,
-	};
+	struct fdir_queue *fdir_queue = rxq_ctrl->fdir_queue;
 
-	errno = 0;
-	ind_table = ibv_exp_create_rwq_ind_table(priv->ctx,
-						 &ind_init_attr);
-	if (ind_table == NULL) {
-		/* Not clear whether errno is set. */
-		err = (errno ? errno : EINVAL);
-		ERROR("RX indirection table creation failed with error %d: %s",
-		      err, strerror(err));
-		goto error;
-	}
-
-	/* Create fdir_queue qp. */
-	hash_conf = (struct ibv_exp_rx_hash_conf){
-		.rx_hash_function = IBV_EXP_RX_HASH_FUNC_TOEPLITZ,
-		.rx_hash_key_len = rss_hash_default_key_len,
-		.rx_hash_key = rss_hash_default_key,
-		.rx_hash_fields_mask = 0,
-		.rwq_ind_tbl = ind_table,
-	};
-	qp_init_attr = (struct ibv_exp_qp_init_attr){
-		.max_inl_recv = 0, /* Currently not supported. */
-		.qp_type = IBV_QPT_RAW_PACKET,
-		.comp_mask = (IBV_EXP_QP_INIT_ATTR_PD |
-			      IBV_EXP_QP_INIT_ATTR_RX_HASH),
-		.pd = priv->pd,
-		.rx_hash_conf = &hash_conf,
-		.port_num = priv->port,
-	};
-
-	qp = ibv_exp_create_qp(priv->ctx, &qp_init_attr);
-	if (qp == NULL) {
-		err = (errno ? errno : EINVAL);
-		ERROR("hash RX QP creation failure: %s", strerror(err));
-		goto error;
+	assert(rxq_ctrl->wq);
+	if (fdir_queue == NULL) {
+		fdir_queue = priv_fdir_queue_create(priv, rxq_ctrl->wq,
+						    rxq_ctrl->socket);
+		rxq_ctrl->fdir_queue = fdir_queue;
 	}
-
-	fdir_queue->ind_table = ind_table;
-	fdir_queue->qp = qp;
-
 	return fdir_queue;
-
-error:
-	if (qp != NULL)
-		claim_zero(ibv_destroy_qp(qp));
-
-	if (ind_table != NULL)
-		claim_zero(ibv_exp_destroy_rwq_ind_table(ind_table));
-
-	return NULL;
 }
 
 /**
@@ -601,7 +681,6 @@ priv_fdir_disable(struct priv *priv)
 {
 	unsigned int i;
 	struct mlx5_fdir_filter *mlx5_fdir_filter;
-	struct fdir_queue *fdir_queue;
 
 	/* Run on every flow director filter and destroy flow handle. */
 	LIST_FOREACH(mlx5_fdir_filter, priv->fdir_filter_list, next) {
@@ -618,23 +697,15 @@ priv_fdir_disable(struct priv *priv)
 		}
 	}
 
-	/* Run on every RX queue to destroy related flow director QP and
-	 * indirection table. */
+	/* Destroy flow director context in each RX queue. */
 	for (i = 0; (i != priv->rxqs_n); i++) {
 		struct rxq_ctrl *rxq_ctrl =
 			container_of((*priv->rxqs)[i], struct rxq_ctrl, rxq);
 
-		fdir_queue = &rxq_ctrl->fdir_queue;
-		if (fdir_queue->qp != NULL) {
-			claim_zero(ibv_destroy_qp(fdir_queue->qp));
-			fdir_queue->qp = NULL;
-		}
-
-		if (fdir_queue->ind_table != NULL) {
-			claim_zero(ibv_exp_destroy_rwq_ind_table
-				   (fdir_queue->ind_table));
-			fdir_queue->ind_table = NULL;
-		}
+		if (!rxq_ctrl->fdir_queue)
+			continue;
+		priv_fdir_queue_destroy(priv, rxq_ctrl->fdir_queue);
+		rxq_ctrl->fdir_queue = NULL;
 	}
 }
 
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 29c137c..44889d1 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -745,6 +745,8 @@ rxq_cleanup(struct rxq_ctrl *rxq_ctrl)
 
 	DEBUG("cleaning up %p", (void *)rxq_ctrl);
 	rxq_free_elts(rxq_ctrl);
+	if (rxq_ctrl->fdir_queue != NULL)
+		priv_fdir_queue_destroy(rxq_ctrl->priv, rxq_ctrl->fdir_queue);
 	if (rxq_ctrl->if_wq != NULL) {
 		assert(rxq_ctrl->priv != NULL);
 		assert(rxq_ctrl->priv->ctx != NULL);
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index f6e2cba..c8a93c0 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -87,6 +87,8 @@ struct mlx5_txq_stats {
 struct fdir_queue {
 	struct ibv_qp *qp; /* Associated RX QP. */
 	struct ibv_exp_rwq_ind_table *ind_table; /* Indirection table. */
+	struct ibv_exp_wq *wq; /* Work queue. */
+	struct ibv_cq *cq; /* Completion queue. */
 };
 
 struct priv;
@@ -128,7 +130,7 @@ struct rxq_ctrl {
 	struct ibv_cq *cq; /* Completion Queue. */
 	struct ibv_exp_wq *wq; /* Work Queue. */
 	struct ibv_exp_res_domain *rd; /* Resource Domain. */
-	struct fdir_queue fdir_queue; /* Flow director queue. */
+	struct fdir_queue *fdir_queue; /* Flow director queue. */
 	struct ibv_mr *mr; /* Memory Region (for mp). */
 	struct ibv_exp_wq_family *if_wq; /* WQ burst interface. */
 	struct ibv_exp_cq_family_v1 *if_cq; /* CQ interface. */
-- 
2.1.4

* [PATCH 5/8] net/mlx5: fix support for flow director drop mode
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (3 preceding siblings ...)
  2016-09-07  7:02 ` [PATCH 4/8] net/mlx5: refactor allocation of flow director queues Nelio Laranjeiro
@ 2016-09-07  7:02 ` Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 6/8] net/mlx5: force inline for completion function Nelio Laranjeiro
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-07  7:02 UTC (permalink / raw)
  To: dev; +Cc: Yaacov Hazan, Adrien Mazarguil

From: Yaacov Hazan <yaacovh@mellanox.com>

Rejected packets were routed to a polled queue.  This patch routes them
to a dummy queue which is not polled.
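
A toy model of the idea, using made-up types rather than the driver's
actual structures:

  #include <stdbool.h>

  enum behavior { ACCEPT, REJECT };

  struct queue {
          bool polled; /* true when some rx_burst loop reads it */
  };

  static struct queue rx_queue = { .polled = true };
  static struct queue drop_queue = { .polled = false };

  /* Reject filters are steered to a queue nobody polls, so matched
   * packets are silently discarded instead of reaching the application. */
  static struct queue *
  select_queue(enum behavior b)
  {
          return (b == REJECT) ? &drop_queue : &rx_queue;
  }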

Fixes: 76f5c99e6840 ("mlx5: support flow director")

Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 doc/guides/nics/mlx5.rst     |  3 ++-
 drivers/net/mlx5/mlx5.h      |  1 +
 drivers/net/mlx5/mlx5_fdir.c | 41 +++++++++++++++++++++++++++++++++++++++--
 3 files changed, 42 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 5c10cd3..8923173 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -84,7 +84,8 @@ Features
 - Promiscuous mode.
 - Multicast promiscuous mode.
 - Hardware checksum offloads.
-- Flow director (RTE_FDIR_MODE_PERFECT and RTE_FDIR_MODE_PERFECT_MAC_VLAN).
+- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
+  RTE_ETH_FDIR_REJECT).
 - Secondary process TX is supported.
 - KVM and VMware ESX SR-IOV modes are supported.
 
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index fa78623..8349e5b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -134,6 +134,7 @@ struct priv {
 	unsigned int (*reta_idx)[]; /* RETA index table. */
 	unsigned int reta_idx_n; /* RETA index size. */
 	struct fdir_filter_list *fdir_filter_list; /* Flow director rules. */
+	struct fdir_queue *fdir_drop_queue; /* Flow director drop queue. */
 	rte_spinlock_t lock; /* Lock for control functions. */
 };
 
diff --git a/drivers/net/mlx5/mlx5_fdir.c b/drivers/net/mlx5/mlx5_fdir.c
index 802669b..25c8c52 100644
--- a/drivers/net/mlx5/mlx5_fdir.c
+++ b/drivers/net/mlx5/mlx5_fdir.c
@@ -75,6 +75,7 @@ struct fdir_flow_desc {
 struct mlx5_fdir_filter {
 	LIST_ENTRY(mlx5_fdir_filter) next;
 	uint16_t queue; /* Queue assigned to if FDIR match. */
+	enum rte_eth_fdir_behavior behavior;
 	struct fdir_flow_desc desc;
 	struct ibv_exp_flow *flow;
 };
@@ -567,6 +568,33 @@ priv_get_fdir_queue(struct priv *priv, uint16_t idx)
 }
 
 /**
+ * Get the flow director drop queue. Create it if it does not exist.
+ *
+ * @param priv
+ *   Private structure.
+ *
+ * @return
+ *   Flow director drop queue on success, NULL otherwise.
+ */
+static struct fdir_queue *
+priv_get_fdir_drop_queue(struct priv *priv)
+{
+	struct fdir_queue *fdir_queue = priv->fdir_drop_queue;
+
+	if (fdir_queue == NULL) {
+		unsigned int socket = SOCKET_ID_ANY;
+
+		/* Select a known NUMA socket if possible. */
+		if (priv->rxqs_n && (*priv->rxqs)[0])
+			socket = container_of((*priv->rxqs)[0],
+					      struct rxq_ctrl, rxq)->socket;
+		fdir_queue = priv_fdir_queue_create(priv, NULL, socket);
+		priv->fdir_drop_queue = fdir_queue;
+	}
+	return fdir_queue;
+}
+
+/**
  * Enable flow director filter and create steering rules.
  *
  * @param priv
@@ -588,7 +616,11 @@ priv_fdir_filter_enable(struct priv *priv,
 		return 0;
 
 	/* Get fdir_queue for specific queue. */
-	fdir_queue = priv_get_fdir_queue(priv, mlx5_fdir_filter->queue);
+	if (mlx5_fdir_filter->behavior == RTE_ETH_FDIR_REJECT)
+		fdir_queue = priv_get_fdir_drop_queue(priv);
+	else
+		fdir_queue = priv_get_fdir_queue(priv,
+						 mlx5_fdir_filter->queue);
 
 	if (fdir_queue == NULL) {
 		ERROR("failed to create flow director rxq for queue %d",
@@ -707,6 +739,10 @@ priv_fdir_disable(struct priv *priv)
 		priv_fdir_queue_destroy(priv, rxq_ctrl->fdir_queue);
 		rxq_ctrl->fdir_queue = NULL;
 	}
+	if (priv->fdir_drop_queue) {
+		priv_fdir_queue_destroy(priv, priv->fdir_drop_queue);
+		priv->fdir_drop_queue = NULL;
+	}
 }
 
 /**
@@ -807,8 +843,9 @@ priv_fdir_filter_add(struct priv *priv,
 		return err;
 	}
 
-	/* Set queue. */
+	/* Set action parameters. */
 	mlx5_fdir_filter->queue = fdir_filter->action.rx_queue;
+	mlx5_fdir_filter->behavior = fdir_filter->action.behavior;
 
 	/* Convert to mlx5 filter descriptor. */
 	fdir_filter_to_flow_desc(fdir_filter,
-- 
2.1.4

* [PATCH 6/8] net/mlx5: force inline for completion function
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (4 preceding siblings ...)
  2016-09-07  7:02 ` [PATCH 5/8] net/mlx5: fix support for flow director drop mode Nelio Laranjeiro
@ 2016-09-07  7:02 ` Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 7/8] net/mlx5: re-factorize functions Nelio Laranjeiro
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-07  7:02 UTC (permalink / raw)
  To: dev

This function was supposed to be inlined, but was not because several
functions call it.  It should always be inlined to avoid external
function calls and to optimize code in the data path.
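
A minimal sketch of the pattern applied below (made-up helper name):

  /* GCC/clang idiom: putting always_inline on the prototype forces the
   * body to be inlined in every caller, whereas "static inline" alone
   * is only a hint the compiler may ignore. */
  static inline void
  hot_path_helper(unsigned int *counter) __attribute__((always_inline));

  static inline void
  hot_path_helper(unsigned int *counter)
  {
          ++*counter;
  }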

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 3757366..5c39cbb 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -152,6 +152,9 @@ check_cqe64(volatile struct mlx5_cqe64 *cqe,
 	return 0;
 }
 
+static inline void
+txq_complete(struct txq *txq) __attribute__((always_inline));
+
 /**
  * Manage TX completions.
  *
@@ -160,7 +163,7 @@ check_cqe64(volatile struct mlx5_cqe64 *cqe,
  * @param txq
  *   Pointer to TX queue structure.
  */
-static void
+static inline void
 txq_complete(struct txq *txq)
 {
 	const unsigned int elts_n = txq->elts_n;
-- 
2.1.4

* [PATCH 7/8] net/mlx5: re-factorize functions
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (5 preceding siblings ...)
  2016-09-07  7:02 ` [PATCH 6/8] net/mlx5: force inline for completion function Nelio Laranjeiro
@ 2016-09-07  7:02 ` Nelio Laranjeiro
  2016-09-07  7:02 ` [PATCH 8/8] net/mlx5: fix inline logic Nelio Laranjeiro
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-07  7:02 UTC (permalink / raw)
  To: dev

Rework the logic of wqe_write() and wqe_write_vlan(), which are pretty
similar, so that a single function remains.
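
For reference, a minimal sketch of the VLAN handling performed by the
unified routine (toy function name; offsets as in the diff below): the
4-byte 802.1Q tag overwrites bytes 12-15 of the 16 inline header bytes,
and the data segment address is rewound by 4 bytes so the overwritten
EtherType is still sent from the packet buffer.

  #include <stdint.h>
  #include <string.h>
  #include <arpa/inet.h> /* htonl() */

  /* Build the 802.1Q tag (TPID 0x8100 followed by the TCI) and write it
   * right after the destination and source MAC addresses. */
  static void
  write_vlan_tag(uint8_t *inline_hdr, uint16_t vlan_tci)
  {
          uint32_t vlan = htonl(0x81000000 | (uint32_t)vlan_tci);

          memcpy(inline_hdr + 12, &vlan, sizeof(vlan));
  }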

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 98 ++++++++++----------------------------------
 1 file changed, 22 insertions(+), 76 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 5c39cbb..c7e538f 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -293,8 +293,8 @@ txq_mp2mr(struct txq *txq, struct rte_mempool *mp)
  *   Pointer to TX queue structure.
  * @param wqe
  *   Pointer to the WQE to fill.
- * @param addr
- *   Buffer data address.
+ * @param buf
+ *   Buffer.
  * @param length
  *   Packet length.
  * @param lkey
@@ -302,54 +302,24 @@ txq_mp2mr(struct txq *txq, struct rte_mempool *mp)
  */
 static inline void
 mlx5_wqe_write(struct txq *txq, volatile union mlx5_wqe *wqe,
-	       uintptr_t addr, uint32_t length, uint32_t lkey)
-{
-	wqe->wqe.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
-	wqe->wqe.ctrl.data[1] = htonl((txq->qp_num_8s) | 4);
-	wqe->wqe.ctrl.data[2] = 0;
-	wqe->wqe.ctrl.data[3] = 0;
-	wqe->inl.eseg.rsvd0 = 0;
-	wqe->inl.eseg.rsvd1 = 0;
-	wqe->inl.eseg.mss = 0;
-	wqe->inl.eseg.rsvd2 = 0;
-	wqe->wqe.eseg.inline_hdr_sz = htons(MLX5_ETH_INLINE_HEADER_SIZE);
-	/* Copy the first 16 bytes into inline header. */
-	rte_memcpy((uint8_t *)(uintptr_t)wqe->wqe.eseg.inline_hdr_start,
-		   (uint8_t *)(uintptr_t)addr,
-		   MLX5_ETH_INLINE_HEADER_SIZE);
-	addr += MLX5_ETH_INLINE_HEADER_SIZE;
-	length -= MLX5_ETH_INLINE_HEADER_SIZE;
-	/* Store remaining data in data segment. */
-	wqe->wqe.dseg.byte_count = htonl(length);
-	wqe->wqe.dseg.lkey = lkey;
-	wqe->wqe.dseg.addr = htonll(addr);
-	/* Increment consumer index. */
-	++txq->wqe_ci;
-}
-
-/**
- * Write a regular WQE with VLAN.
- *
- * @param txq
- *   Pointer to TX queue structure.
- * @param wqe
- *   Pointer to the WQE to fill.
- * @param addr
- *   Buffer data address.
- * @param length
- *   Packet length.
- * @param lkey
- *   Memory region lkey.
- * @param vlan_tci
- *   VLAN field to insert in packet.
- */
-static inline void
-mlx5_wqe_write_vlan(struct txq *txq, volatile union mlx5_wqe *wqe,
-		    uintptr_t addr, uint32_t length, uint32_t lkey,
-		    uint16_t vlan_tci)
+	       struct rte_mbuf *buf, uint32_t length, uint32_t lkey)
 {
-	uint32_t vlan = htonl(0x81000000 | vlan_tci);
-
+	uintptr_t addr = rte_pktmbuf_mtod(buf, uintptr_t);
+
+	rte_mov16((uint8_t *)&wqe->wqe.eseg.inline_hdr_start,
+		  (uint8_t *)addr);
+	addr += 16;
+	length -= 16;
+	/* Need to insert VLAN ? */
+	if (buf->ol_flags & PKT_TX_VLAN_PKT) {
+		uint32_t vlan = htonl(0x81000000 | buf->vlan_tci);
+
+		memcpy((uint8_t *)&wqe->wqe.eseg.inline_hdr_start + 12,
+		       &vlan, sizeof(vlan));
+		addr -= sizeof(vlan);
+		length += sizeof(vlan);
+	}
+	/* Write the WQE. */
 	wqe->wqe.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
 	wqe->wqe.ctrl.data[1] = htonl((txq->qp_num_8s) | 4);
 	wqe->wqe.ctrl.data[2] = 0;
@@ -358,20 +328,7 @@ mlx5_wqe_write_vlan(struct txq *txq, volatile union mlx5_wqe *wqe,
 	wqe->inl.eseg.rsvd1 = 0;
 	wqe->inl.eseg.mss = 0;
 	wqe->inl.eseg.rsvd2 = 0;
-	wqe->wqe.eseg.inline_hdr_sz = htons(MLX5_ETH_VLAN_INLINE_HEADER_SIZE);
-	/*
-	 * Copy 12 bytes of source & destination MAC address.
-	 * Copy 4 bytes of VLAN.
-	 * Copy 2 bytes of Ether type.
-	 */
-	rte_memcpy((uint8_t *)(uintptr_t)wqe->wqe.eseg.inline_hdr_start,
-		   (uint8_t *)(uintptr_t)addr, 12);
-	rte_memcpy((uint8_t *)((uintptr_t)wqe->wqe.eseg.inline_hdr_start + 12),
-		   &vlan, sizeof(vlan));
-	rte_memcpy((uint8_t *)((uintptr_t)wqe->wqe.eseg.inline_hdr_start + 16),
-		   (uint8_t *)((uintptr_t)addr + 12), 2);
-	addr += MLX5_ETH_VLAN_INLINE_HEADER_SIZE - sizeof(vlan);
-	length -= MLX5_ETH_VLAN_INLINE_HEADER_SIZE - sizeof(vlan);
+	wqe->wqe.eseg.inline_hdr_sz = htons(16);
 	/* Store remaining data in data segment. */
 	wqe->wqe.dseg.byte_count = htonl(length);
 	wqe->wqe.dseg.lkey = lkey;
@@ -612,7 +569,6 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	do {
 		struct rte_mbuf *buf = *(pkts++);
 		unsigned int elts_head_next;
-		uintptr_t addr;
 		uint32_t length;
 		uint32_t lkey;
 		unsigned int segs_n = buf->nb_segs;
@@ -634,8 +590,6 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		rte_prefetch0(wqe);
 		if (pkts_n)
 			rte_prefetch0(*pkts);
-		/* Retrieve buffer information. */
-		addr = rte_pktmbuf_mtod(buf, uintptr_t);
 		length = DATA_LEN(buf);
 		/* Update element. */
 		(*txq->elts)[elts_head] = buf;
@@ -645,11 +599,7 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 						       volatile void *));
 		/* Retrieve Memory Region key for this memory pool. */
 		lkey = txq_mp2mr(txq, txq_mb2mp(buf));
-		if (buf->ol_flags & PKT_TX_VLAN_PKT)
-			mlx5_wqe_write_vlan(txq, wqe, addr, length, lkey,
-					    buf->vlan_tci);
-		else
-			mlx5_wqe_write(txq, wqe, addr, length, lkey);
+		mlx5_wqe_write(txq, wqe, buf, length, lkey);
 		/* Should we enable HW CKSUM offload */
 		if (buf->ol_flags &
 		    (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
@@ -813,11 +763,7 @@ mlx5_tx_burst_inline(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		} else {
 			/* Retrieve Memory Region key for this memory pool. */
 			lkey = txq_mp2mr(txq, txq_mb2mp(buf));
-			if (buf->ol_flags & PKT_TX_VLAN_PKT)
-				mlx5_wqe_write_vlan(txq, wqe, addr, length,
-						    lkey, buf->vlan_tci);
-			else
-				mlx5_wqe_write(txq, wqe, addr, length, lkey);
+			mlx5_wqe_write(txq, wqe, buf, length, lkey);
 		}
 		while (--segs_n) {
 			/*
-- 
2.1.4

* [PATCH 8/8] net/mlx5: fix inline logic
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (6 preceding siblings ...)
  2016-09-07  7:02 ` [PATCH 7/8] net/mlx5: re-factorize functions Nelio Laranjeiro
@ 2016-09-07  7:02 ` Nelio Laranjeiro
  2016-09-14 10:43   ` Ferruh Yigit
  2016-09-14 11:53 ` [PATCH V2 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-07  7:02 UTC (permalink / raw)
  To: dev; +Cc: Vasily Philipov

To improve performance, the NIC expects large packets to provide a
pointer to a cache-aligned address; the old inline code could break this
assumption, which hurts performance.
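
A minimal sketch of the alignment rule used by the new inline path,
assuming a 64-byte cache line (stand-in for RTE_CACHE_LINE_SIZE):

  #include <stdint.h>

  #define CACHE_LINE_SIZE 64 /* assumption for this sketch */

  /* Round the end of the inlined region down to a cache line boundary so
   * that the address given to the NIC for the non-inlined remainder of a
   * large packet is cache aligned. */
  static uintptr_t
  inline_end(uintptr_t addr, unsigned int max_inline)
  {
          return (addr + max_inline) & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
  }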

Fixes: 2a66cf378954 ("net/mlx5: support inline send")

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
---
 drivers/net/mlx5/mlx5_ethdev.c |   4 -
 drivers/net/mlx5/mlx5_rxtx.c   | 424 ++++++++++-------------------------------
 drivers/net/mlx5/mlx5_rxtx.h   |   3 +-
 drivers/net/mlx5/mlx5_txq.c    |   9 +-
 4 files changed, 104 insertions(+), 336 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 47f323e..1ae80e5 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -1398,10 +1398,6 @@ priv_select_tx_function(struct priv *priv)
 	} else if ((priv->sriov == 0) && priv->mps) {
 		priv->dev->tx_pkt_burst = mlx5_tx_burst_mpw;
 		DEBUG("selected MPW TX function");
-	} else if (priv->txq_inline && (priv->txqs_n >= priv->txqs_inline)) {
-		priv->dev->tx_pkt_burst = mlx5_tx_burst_inline;
-		DEBUG("selected inline TX function (%u >= %u queues)",
-		      priv->txqs_n, priv->txqs_inline);
 	}
 }
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index c7e538f..67e0f37 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -297,179 +297,99 @@ txq_mp2mr(struct txq *txq, struct rte_mempool *mp)
  *   Buffer.
  * @param length
  *   Packet length.
- * @param lkey
- *   Memory region lkey.
+ *
+ * @return ds
+ *   Number of DS elements consumed.
  */
-static inline void
+static inline unsigned int
 mlx5_wqe_write(struct txq *txq, volatile union mlx5_wqe *wqe,
-	       struct rte_mbuf *buf, uint32_t length, uint32_t lkey)
+	       struct rte_mbuf *buf, uint32_t length)
 {
+	uintptr_t raw = (uintptr_t)&wqe->wqe.eseg.inline_hdr_start;
+	uint16_t ds;
+	uint16_t pkt_inline_sz = 16;
 	uintptr_t addr = rte_pktmbuf_mtod(buf, uintptr_t);
+	struct mlx5_wqe_data_seg *dseg = NULL;
 
-	rte_mov16((uint8_t *)&wqe->wqe.eseg.inline_hdr_start,
-		  (uint8_t *)addr);
-	addr += 16;
+	assert(length >= 16);
+	/* Start the known and common part of the WQE structure. */
+	wqe->wqe.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
+	wqe->wqe.ctrl.data[2] = 0;
+	wqe->wqe.ctrl.data[3] = 0;
+	wqe->wqe.eseg.rsvd0 = 0;
+	wqe->wqe.eseg.rsvd1 = 0;
+	wqe->wqe.eseg.mss = 0;
+	wqe->wqe.eseg.rsvd2 = 0;
+	/* Start by copying the Ethernet Header. */
+	rte_mov16((uint8_t *)raw, (uint8_t *)addr);
 	length -= 16;
-	/* Need to insert VLAN ? */
+	addr += 16;
+	/* Replace the Ethernet type by the VLAN if necessary. */
 	if (buf->ol_flags & PKT_TX_VLAN_PKT) {
 		uint32_t vlan = htonl(0x81000000 | buf->vlan_tci);
 
-		memcpy((uint8_t *)&wqe->wqe.eseg.inline_hdr_start + 12,
+		memcpy((uint8_t *)(raw + 16 - sizeof(vlan)),
 		       &vlan, sizeof(vlan));
 		addr -= sizeof(vlan);
 		length += sizeof(vlan);
 	}
-	/* Write the WQE. */
-	wqe->wqe.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
-	wqe->wqe.ctrl.data[1] = htonl((txq->qp_num_8s) | 4);
-	wqe->wqe.ctrl.data[2] = 0;
-	wqe->wqe.ctrl.data[3] = 0;
-	wqe->inl.eseg.rsvd0 = 0;
-	wqe->inl.eseg.rsvd1 = 0;
-	wqe->inl.eseg.mss = 0;
-	wqe->inl.eseg.rsvd2 = 0;
-	wqe->wqe.eseg.inline_hdr_sz = htons(16);
-	/* Store remaining data in data segment. */
-	wqe->wqe.dseg.byte_count = htonl(length);
-	wqe->wqe.dseg.lkey = lkey;
-	wqe->wqe.dseg.addr = htonll(addr);
-	/* Increment consumer index. */
-	++txq->wqe_ci;
-}
-
-/**
- * Write a inline WQE.
- *
- * @param txq
- *   Pointer to TX queue structure.
- * @param wqe
- *   Pointer to the WQE to fill.
- * @param addr
- *   Buffer data address.
- * @param length
- *   Packet length.
- * @param lkey
- *   Memory region lkey.
- */
-static inline void
-mlx5_wqe_write_inline(struct txq *txq, volatile union mlx5_wqe *wqe,
-		      uintptr_t addr, uint32_t length)
-{
-	uint32_t size;
-	uint16_t wqe_cnt = txq->wqe_n - 1;
-	uint16_t wqe_ci = txq->wqe_ci + 1;
-
-	/* Copy the first 16 bytes into inline header. */
-	rte_memcpy((void *)(uintptr_t)wqe->inl.eseg.inline_hdr_start,
-		   (void *)(uintptr_t)addr,
-		   MLX5_ETH_INLINE_HEADER_SIZE);
-	addr += MLX5_ETH_INLINE_HEADER_SIZE;
-	length -= MLX5_ETH_INLINE_HEADER_SIZE;
-	size = 3 + ((4 + length + 15) / 16);
-	wqe->inl.byte_cnt = htonl(length | MLX5_INLINE_SEG);
-	rte_memcpy((void *)(uintptr_t)&wqe->inl.data[0],
-		   (void *)addr, MLX5_WQE64_INL_DATA);
-	addr += MLX5_WQE64_INL_DATA;
-	length -= MLX5_WQE64_INL_DATA;
-	while (length) {
-		volatile union mlx5_wqe *wqe_next =
-			&(*txq->wqes)[wqe_ci & wqe_cnt];
-		uint32_t copy_bytes = (length > sizeof(*wqe)) ?
-				      sizeof(*wqe) :
-				      length;
-
-		rte_mov64((uint8_t *)(uintptr_t)&wqe_next->data[0],
-			  (uint8_t *)addr);
-		addr += copy_bytes;
-		length -= copy_bytes;
-		++wqe_ci;
-	}
-	assert(size < 64);
-	wqe->inl.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
-	wqe->inl.ctrl.data[1] = htonl(txq->qp_num_8s | size);
-	wqe->inl.ctrl.data[2] = 0;
-	wqe->inl.ctrl.data[3] = 0;
-	wqe->inl.eseg.rsvd0 = 0;
-	wqe->inl.eseg.rsvd1 = 0;
-	wqe->inl.eseg.mss = 0;
-	wqe->inl.eseg.rsvd2 = 0;
-	wqe->inl.eseg.inline_hdr_sz = htons(MLX5_ETH_INLINE_HEADER_SIZE);
-	/* Increment consumer index. */
-	txq->wqe_ci = wqe_ci;
-}
-
-/**
- * Write a inline WQE with VLAN.
- *
- * @param txq
- *   Pointer to TX queue structure.
- * @param wqe
- *   Pointer to the WQE to fill.
- * @param addr
- *   Buffer data address.
- * @param length
- *   Packet length.
- * @param lkey
- *   Memory region lkey.
- * @param vlan_tci
- *   VLAN field to insert in packet.
- */
-static inline void
-mlx5_wqe_write_inline_vlan(struct txq *txq, volatile union mlx5_wqe *wqe,
-			   uintptr_t addr, uint32_t length, uint16_t vlan_tci)
-{
-	uint32_t size;
-	uint32_t wqe_cnt = txq->wqe_n - 1;
-	uint16_t wqe_ci = txq->wqe_ci + 1;
-	uint32_t vlan = htonl(0x81000000 | vlan_tci);
-
-	/*
-	 * Copy 12 bytes of source & destination MAC address.
-	 * Copy 4 bytes of VLAN.
-	 * Copy 2 bytes of Ether type.
-	 */
-	rte_memcpy((uint8_t *)(uintptr_t)wqe->inl.eseg.inline_hdr_start,
-		   (uint8_t *)addr, 12);
-	rte_memcpy((uint8_t *)(uintptr_t)wqe->inl.eseg.inline_hdr_start + 12,
-		   &vlan, sizeof(vlan));
-	rte_memcpy((uint8_t *)((uintptr_t)wqe->inl.eseg.inline_hdr_start + 16),
-		   (uint8_t *)(addr + 12), 2);
-	addr += MLX5_ETH_VLAN_INLINE_HEADER_SIZE - sizeof(vlan);
-	length -= MLX5_ETH_VLAN_INLINE_HEADER_SIZE - sizeof(vlan);
-	size = (sizeof(wqe->inl.ctrl.ctrl) +
-		sizeof(wqe->inl.eseg) +
-		sizeof(wqe->inl.byte_cnt) +
-		length + 15) / 16;
-	wqe->inl.byte_cnt = htonl(length | MLX5_INLINE_SEG);
-	rte_memcpy((void *)(uintptr_t)&wqe->inl.data[0],
-		   (void *)addr, MLX5_WQE64_INL_DATA);
-	addr += MLX5_WQE64_INL_DATA;
-	length -= MLX5_WQE64_INL_DATA;
-	while (length) {
-		volatile union mlx5_wqe *wqe_next =
-			&(*txq->wqes)[wqe_ci & wqe_cnt];
-		uint32_t copy_bytes = (length > sizeof(*wqe)) ?
-				      sizeof(*wqe) :
-				      length;
-
-		rte_mov64((uint8_t *)(uintptr_t)&wqe_next->data[0],
-			  (uint8_t *)addr);
-		addr += copy_bytes;
-		length -= copy_bytes;
-		++wqe_ci;
+	/* Inline if enough room. */
+	if (txq->max_inline != 0) {
+		uintptr_t end = (uintptr_t)&(*txq->wqes)[txq->wqe_n];
+		uint16_t max_inline = txq->max_inline * RTE_CACHE_LINE_SIZE;
+		uint16_t room;
+
+		raw += 16;
+		room = end - (uintptr_t)raw;
+		if (room > max_inline) {
+			uintptr_t addr_end = (addr + max_inline) &
+				~(RTE_CACHE_LINE_SIZE - 1);
+			uint16_t copy_b = ((addr_end - addr) > length) ?
+					  length :
+					  (addr_end - addr);
+
+			rte_memcpy((void *)raw, (void *)addr, copy_b);
+			addr += copy_b;
+			length -= copy_b;
+			pkt_inline_sz += copy_b;
+			/* Sanity check. */
+			assert(addr <= addr_end);
+		}
+		/* Store the inlined packet size in the WQE. */
+		wqe->wqe.eseg.inline_hdr_sz = htons(pkt_inline_sz);
+		/*
+		 * 2 DWORDs consumed by the WQE header + 1 DSEG +
+		 * the size of the inline part of the packet.
+		 */
+		ds = 2 + ((pkt_inline_sz - 2 + 15) / 16);
+		if (length > 0) {
+			dseg = (struct mlx5_wqe_data_seg *)
+				((uintptr_t)wqe + (ds * 16));
+			if ((uintptr_t)dseg >= end)
+				dseg = (struct mlx5_wqe_data_seg *)
+					((uintptr_t)&(*txq->wqes)[0]);
+			goto use_dseg;
+		}
+	} else {
+		/* Add the remaining packet as a simple ds. */
+		ds = 3;
+		/*
+		 * No inline has been done in the packet, only the Ethernet
+		 * Header has been stored.
+		 */
+		wqe->wqe.eseg.inline_hdr_sz = htons(16);
+		dseg = (struct mlx5_wqe_data_seg *)
+			((uintptr_t)wqe + (ds * 16));
+use_dseg:
+		*dseg = (struct mlx5_wqe_data_seg) {
+			.addr = htonll(addr),
+			.byte_count = htonl(length),
+			.lkey = txq_mp2mr(txq, txq_mb2mp(buf)),
+		};
+		++ds;
 	}
-	assert(size < 64);
-	wqe->inl.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
-	wqe->inl.ctrl.data[1] = htonl(txq->qp_num_8s | size);
-	wqe->inl.ctrl.data[2] = 0;
-	wqe->inl.ctrl.data[3] = 0;
-	wqe->inl.eseg.rsvd0 = 0;
-	wqe->inl.eseg.rsvd1 = 0;
-	wqe->inl.eseg.mss = 0;
-	wqe->inl.eseg.rsvd2 = 0;
-	wqe->inl.eseg.inline_hdr_sz = htons(MLX5_ETH_VLAN_INLINE_HEADER_SIZE);
-	/* Increment consumer index. */
-	txq->wqe_ci = wqe_ci;
+	wqe->wqe.ctrl.data[1] = htonl(txq->qp_num_8s | ds);
+	return ds;
 }
 
 /**
@@ -570,7 +490,6 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		struct rte_mbuf *buf = *(pkts++);
 		unsigned int elts_head_next;
 		uint32_t length;
-		uint32_t lkey;
 		unsigned int segs_n = buf->nb_segs;
 		volatile struct mlx5_wqe_data_seg *dseg;
 		unsigned int ds = sizeof(*wqe) / 16;
@@ -586,8 +505,8 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		--pkts_n;
 		elts_head_next = (elts_head + 1) & (elts_n - 1);
 		wqe = &(*txq->wqes)[txq->wqe_ci & (txq->wqe_n - 1)];
-		dseg = &wqe->wqe.dseg;
-		rte_prefetch0(wqe);
+		tx_prefetch_wqe(txq, txq->wqe_ci);
+		tx_prefetch_wqe(txq, txq->wqe_ci + 1);
 		if (pkts_n)
 			rte_prefetch0(*pkts);
 		length = DATA_LEN(buf);
@@ -597,9 +516,6 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		if (pkts_n)
 			rte_prefetch0(rte_pktmbuf_mtod(*pkts,
 						       volatile void *));
-		/* Retrieve Memory Region key for this memory pool. */
-		lkey = txq_mp2mr(txq, txq_mb2mp(buf));
-		mlx5_wqe_write(txq, wqe, buf, length, lkey);
 		/* Should we enable HW CKSUM offload */
 		if (buf->ol_flags &
 		    (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
@@ -607,8 +523,13 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 				MLX5_ETH_WQE_L3_CSUM |
 				MLX5_ETH_WQE_L4_CSUM;
 		} else {
-			wqe->wqe.eseg.cs_flags = 0;
+			wqe->eseg.cs_flags = 0;
 		}
+		ds = mlx5_wqe_write(txq, wqe, buf, length);
+		if (segs_n == 1)
+			goto skip_segs;
+		dseg = (volatile struct mlx5_wqe_data_seg *)
+			(((uintptr_t)wqe) + ds * 16);
 		while (--segs_n) {
 			/*
 			 * Spill on next WQE when the current one does not have
@@ -639,11 +560,13 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		/* Update DS field in WQE. */
 		wqe->wqe.ctrl.data[1] &= htonl(0xffffffc0);
 		wqe->wqe.ctrl.data[1] |= htonl(ds & 0x3f);
-		elts_head = elts_head_next;
+skip_segs:
 #ifdef MLX5_PMD_SOFT_COUNTERS
 		/* Increment sent bytes counter. */
 		txq->stats.obytes += length;
 #endif
+		/* Increment consumer index. */
+		txq->wqe_ci += (ds + 3) / 4;
 		elts_head = elts_head_next;
 		++i;
 	} while (pkts_n);
@@ -672,162 +595,6 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 }
 
 /**
- * DPDK callback for TX with inline support.
- *
- * @param dpdk_txq
- *   Generic pointer to TX queue structure.
- * @param[in] pkts
- *   Packets to transmit.
- * @param pkts_n
- *   Number of packets in array.
- *
- * @return
- *   Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-mlx5_tx_burst_inline(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
-	struct txq *txq = (struct txq *)dpdk_txq;
-	uint16_t elts_head = txq->elts_head;
-	const unsigned int elts_n = txq->elts_n;
-	unsigned int i = 0;
-	unsigned int j = 0;
-	unsigned int max;
-	unsigned int comp;
-	volatile union mlx5_wqe *wqe = NULL;
-	unsigned int max_inline = txq->max_inline;
-
-	if (unlikely(!pkts_n))
-		return 0;
-	/* Prefetch first packet cacheline. */
-	tx_prefetch_cqe(txq, txq->cq_ci);
-	tx_prefetch_cqe(txq, txq->cq_ci + 1);
-	rte_prefetch0(*pkts);
-	/* Start processing. */
-	txq_complete(txq);
-	max = (elts_n - (elts_head - txq->elts_tail));
-	if (max > elts_n)
-		max -= elts_n;
-	do {
-		struct rte_mbuf *buf = *(pkts++);
-		unsigned int elts_head_next;
-		uintptr_t addr;
-		uint32_t length;
-		uint32_t lkey;
-		unsigned int segs_n = buf->nb_segs;
-		volatile struct mlx5_wqe_data_seg *dseg;
-		unsigned int ds = sizeof(*wqe) / 16;
-
-		/*
-		 * Make sure there is enough room to store this packet and
-		 * that one ring entry remains unused.
-		 */
-		assert(segs_n);
-		if (max < segs_n + 1)
-			break;
-		max -= segs_n;
-		--pkts_n;
-		elts_head_next = (elts_head + 1) & (elts_n - 1);
-		wqe = &(*txq->wqes)[txq->wqe_ci & (txq->wqe_n - 1)];
-		dseg = &wqe->wqe.dseg;
-		tx_prefetch_wqe(txq, txq->wqe_ci);
-		tx_prefetch_wqe(txq, txq->wqe_ci + 1);
-		if (pkts_n)
-			rte_prefetch0(*pkts);
-		/* Should we enable HW CKSUM offload */
-		if (buf->ol_flags &
-		    (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
-			wqe->inl.eseg.cs_flags =
-				MLX5_ETH_WQE_L3_CSUM |
-				MLX5_ETH_WQE_L4_CSUM;
-		} else {
-			wqe->inl.eseg.cs_flags = 0;
-		}
-		/* Retrieve buffer information. */
-		addr = rte_pktmbuf_mtod(buf, uintptr_t);
-		length = DATA_LEN(buf);
-		/* Update element. */
-		(*txq->elts)[elts_head] = buf;
-		/* Prefetch next buffer data. */
-		if (pkts_n)
-			rte_prefetch0(rte_pktmbuf_mtod(*pkts,
-						       volatile void *));
-		if ((length <= max_inline) && (segs_n == 1)) {
-			if (buf->ol_flags & PKT_TX_VLAN_PKT)
-				mlx5_wqe_write_inline_vlan(txq, wqe,
-							   addr, length,
-							   buf->vlan_tci);
-			else
-				mlx5_wqe_write_inline(txq, wqe, addr, length);
-			goto skip_segs;
-		} else {
-			/* Retrieve Memory Region key for this memory pool. */
-			lkey = txq_mp2mr(txq, txq_mb2mp(buf));
-			mlx5_wqe_write(txq, wqe, buf, length, lkey);
-		}
-		while (--segs_n) {
-			/*
-			 * Spill on next WQE when the current one does not have
-			 * enough room left. Size of WQE must a be a multiple
-			 * of data segment size.
-			 */
-			assert(!(sizeof(*wqe) % sizeof(*dseg)));
-			if (!(ds % (sizeof(*wqe) / 16)))
-				dseg = (volatile void *)
-					&(*txq->wqes)[txq->wqe_ci++ &
-						      (txq->wqe_n - 1)];
-			else
-				++dseg;
-			++ds;
-			buf = buf->next;
-			assert(buf);
-			/* Store segment information. */
-			dseg->byte_count = htonl(DATA_LEN(buf));
-			dseg->lkey = txq_mp2mr(txq, txq_mb2mp(buf));
-			dseg->addr = htonll(rte_pktmbuf_mtod(buf, uintptr_t));
-			(*txq->elts)[elts_head_next] = buf;
-			elts_head_next = (elts_head_next + 1) & (elts_n - 1);
-#ifdef MLX5_PMD_SOFT_COUNTERS
-			length += DATA_LEN(buf);
-#endif
-			++j;
-		}
-		/* Update DS field in WQE. */
-		wqe->inl.ctrl.data[1] &= htonl(0xffffffc0);
-		wqe->inl.ctrl.data[1] |= htonl(ds & 0x3f);
-skip_segs:
-		elts_head = elts_head_next;
-#ifdef MLX5_PMD_SOFT_COUNTERS
-		/* Increment sent bytes counter. */
-		txq->stats.obytes += length;
-#endif
-		++i;
-	} while (pkts_n);
-	/* Take a shortcut if nothing must be sent. */
-	if (unlikely(i == 0))
-		return 0;
-	/* Check whether completion threshold has been reached. */
-	comp = txq->elts_comp + i + j;
-	if (comp >= MLX5_TX_COMP_THRESH) {
-		/* Request completion on last WQE. */
-		wqe->inl.ctrl.data[2] = htonl(8);
-		/* Save elts_head in unused "immediate" field of WQE. */
-		wqe->inl.ctrl.data[3] = elts_head;
-		txq->elts_comp = 0;
-	} else {
-		txq->elts_comp = comp;
-	}
-#ifdef MLX5_PMD_SOFT_COUNTERS
-	/* Increment sent packets counter. */
-	txq->stats.opackets += i;
-#endif
-	/* Ring QP doorbell. */
-	mlx5_tx_dbrec(txq);
-	txq->elts_head = elts_head;
-	return i;
-}
-
-/**
  * Open a MPW session.
  *
  * @param txq
@@ -1117,7 +884,7 @@ mlx5_tx_burst_mpw_inline(void *dpdk_txq, struct rte_mbuf **pkts,
 	unsigned int j = 0;
 	unsigned int max;
 	unsigned int comp;
-	unsigned int inline_room = txq->max_inline;
+	unsigned int inline_room = txq->max_inline * RTE_CACHE_LINE_SIZE;
 	struct mlx5_mpw mpw = {
 		.state = MLX5_MPW_STATE_CLOSED,
 	};
@@ -1171,7 +938,8 @@ mlx5_tx_burst_mpw_inline(void *dpdk_txq, struct rte_mbuf **pkts,
 			    (length > inline_room) ||
 			    (mpw.wqe->mpw_inl.eseg.cs_flags != cs_flags)) {
 				mlx5_mpw_inline_close(txq, &mpw);
-				inline_room = txq->max_inline;
+				inline_room =
+					txq->max_inline * RTE_CACHE_LINE_SIZE;
 			}
 		}
 		if (mpw.state == MLX5_MPW_STATE_CLOSED) {
@@ -1187,7 +955,8 @@ mlx5_tx_burst_mpw_inline(void *dpdk_txq, struct rte_mbuf **pkts,
 		/* Multi-segment packets must be alone in their MPW. */
 		assert((segs_n == 1) || (mpw.pkts_n == 0));
 		if (mpw.state == MLX5_MPW_STATE_OPENED) {
-			assert(inline_room == txq->max_inline);
+			assert(inline_room ==
+			       txq->max_inline * RTE_CACHE_LINE_SIZE);
 #if defined(MLX5_PMD_SOFT_COUNTERS) || !defined(NDEBUG)
 			length = 0;
 #endif
@@ -1252,7 +1021,8 @@ mlx5_tx_burst_mpw_inline(void *dpdk_txq, struct rte_mbuf **pkts,
 			++j;
 			if (mpw.pkts_n == MLX5_MPW_DSEG_MAX) {
 				mlx5_mpw_inline_close(txq, &mpw);
-				inline_room = txq->max_inline;
+				inline_room =
+					txq->max_inline * RTE_CACHE_LINE_SIZE;
 			} else {
 				inline_room -= length;
 			}
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index c8a93c0..8c568ad 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -249,7 +249,7 @@ struct txq {
 	uint16_t wqe_n; /* Number of WQ elements. */
 	uint16_t bf_offset; /* Blueflame offset. */
 	uint16_t bf_buf_size; /* Blueflame size. */
-	uint16_t max_inline; /* Maximum size to inline in a WQE. */
+	uint16_t max_inline; /* Multiple of RTE_CACHE_LINE_SIZE to inline. */
 	uint32_t qp_num_8s; /* QP number shifted by 8. */
 	volatile struct mlx5_cqe (*cqes)[]; /* Completion queue. */
 	volatile union mlx5_wqe (*wqes)[]; /* Work queue. */
@@ -314,7 +314,6 @@ uint16_t mlx5_tx_burst_secondary_setup(void *, struct rte_mbuf **, uint16_t);
 /* mlx5_rxtx.c */
 
 uint16_t mlx5_tx_burst(void *, struct rte_mbuf **, uint16_t);
-uint16_t mlx5_tx_burst_inline(void *, struct rte_mbuf **, uint16_t);
 uint16_t mlx5_tx_burst_mpw(void *, struct rte_mbuf **, uint16_t);
 uint16_t mlx5_tx_burst_mpw_inline(void *, struct rte_mbuf **, uint16_t);
 uint16_t mlx5_rx_burst(void *, struct rte_mbuf **, uint16_t);
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 6fe61c4..5ddd2fb 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -338,9 +338,12 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl *txq_ctrl,
 		.comp_mask = (IBV_EXP_QP_INIT_ATTR_PD |
 			      IBV_EXP_QP_INIT_ATTR_RES_DOMAIN),
 	};
-	if (priv->txq_inline && priv->txqs_n >= priv->txqs_inline) {
-		tmpl.txq.max_inline = priv->txq_inline;
-		attr.init.cap.max_inline_data = tmpl.txq.max_inline;
+	if (priv->txq_inline && (priv->txqs_n >= priv->txqs_inline)) {
+		tmpl.txq.max_inline =
+			((priv->txq_inline + (RTE_CACHE_LINE_SIZE - 1)) /
+			 RTE_CACHE_LINE_SIZE);
+		attr.init.cap.max_inline_data =
+			tmpl.txq.max_inline * RTE_CACHE_LINE_SIZE;
 	}
 	tmpl.qp = ibv_exp_create_qp(priv->ctx, &attr.init);
 	if (tmpl.qp == NULL) {
-- 
2.1.4

* Re: [PATCH 8/8] net/mlx5: fix inline logic
  2016-09-07  7:02 ` [PATCH 8/8] net/mlx5: fix inline logic Nelio Laranjeiro
@ 2016-09-14 10:43   ` Ferruh Yigit
  2016-09-14 11:07     ` Nélio Laranjeiro
  0 siblings, 1 reply; 22+ messages in thread
From: Ferruh Yigit @ 2016-09-14 10:43 UTC (permalink / raw)
  To: Nelio Laranjeiro, dev; +Cc: Vasily Philipov

Hi Nelio,

On 9/7/2016 8:02 AM, Nelio Laranjeiro wrote:
> To improve performance, the NIC expects large packets to provide a
> pointer to a cache-aligned address; the old inline code could break this
> assumption, which hurts performance.
> 
> Fixes: 2a66cf378954 ("net/mlx5: support inline send")
> 
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
> ---

...

> @@ -607,8 +523,13 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
>  				MLX5_ETH_WQE_L3_CSUM |
>  				MLX5_ETH_WQE_L4_CSUM;
>  		} else {
> -			wqe->wqe.eseg.cs_flags = 0;
> +			wqe->eseg.cs_flags = 0;

This causes a compilation error, and looks like a typo:

.../drivers/net/mlx5/mlx5_rxtx.c: In function ‘mlx5_tx_burst’:
.../drivers/net/mlx5/mlx5_rxtx.c:526:7: error: ‘volatile union mlx5_wqe’
has no member named ‘eseg’
    wqe->eseg.cs_flags = 0;
       ^~

* Re: [PATCH 8/8] net/mlx5: fix inline logic
  2016-09-14 10:43   ` Ferruh Yigit
@ 2016-09-14 11:07     ` Nélio Laranjeiro
  0 siblings, 0 replies; 22+ messages in thread
From: Nélio Laranjeiro @ 2016-09-14 11:07 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Vasily Philipov

On Wed, Sep 14, 2016 at 11:43:35AM +0100, Ferruh Yigit wrote:
> Hi Nelio,
> 
> On 9/7/2016 8:02 AM, Nelio Laranjeiro wrote:
> > To improve performance, the NIC expects large packets to provide a
> > pointer to a cache-aligned address; the old inline code could break this
> > assumption, which hurts performance.
> > 
> > Fixes: 2a66cf378954 ("net/mlx5: support inline send")
> > 
> > Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> > Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
> > ---
> 
> ...
> 
> > @@ -607,8 +523,13 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
> >  				MLX5_ETH_WQE_L3_CSUM |
> >  				MLX5_ETH_WQE_L4_CSUM;
> >  		} else {
> > -			wqe->wqe.eseg.cs_flags = 0;
> > +			wqe->eseg.cs_flags = 0;
> 
> This cause a compilation error, and looks like a typo:
> 
> .../drivers/net/mlx5/mlx5_rxtx.c: In function ‘mlx5_tx_burst’:
> .../drivers/net/mlx5/mlx5_rxtx.c:526:7: error: ‘volatile union mlx5_wqe’
> has no member named ‘eseg’
>     wqe->eseg.cs_flags = 0;
>        ^~

Hi Ferruh,

You are totally right.  I will fix it, re-test and send a V2 today.

Thanks,

-- 
Nélio Laranjeiro
6WIND

* [PATCH V2 0/8] net/mlx5: various fixes
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (7 preceding siblings ...)
  2016-09-07  7:02 ` [PATCH 8/8] net/mlx5: fix inline logic Nelio Laranjeiro
@ 2016-09-14 11:53 ` Nelio Laranjeiro
  2016-09-14 12:21   ` Nélio Laranjeiro
  2016-09-19 15:30   ` Bruce Richardson
  2016-09-14 11:53 ` [PATCH V2 1/8] net/mlx5: fix inconsistent return value in Flow Director Nelio Laranjeiro
                   ` (7 subsequent siblings)
  16 siblings, 2 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-14 11:53 UTC (permalink / raw)
  To: dev

 - Flow director
 - Rx Capabilities
 - Inline

Changes in V2:

 - Fix a compilation error.

Adrien Mazarguil (1):
  net/mlx5: fix Rx VLAN offload capability report

Nelio Laranjeiro (3):
  net/mlx5: force inline for completion function
  net/mlx5: re-factorize functions
  net/mlx5: fix inline logic

Raslan Darawsheh (1):
  net/mlx5: fix removing VLAN filter

Yaacov Hazan (3):
  net/mlx5: fix inconsistent return value in Flow Director
  net/mlx5: refactor allocation of flow director queues
  net/mlx5: fix support for flow director drop mode

 doc/guides/nics/mlx5.rst       |   3 +-
 drivers/net/mlx5/mlx5.h        |   2 +
 drivers/net/mlx5/mlx5_ethdev.c |   7 +-
 drivers/net/mlx5/mlx5_fdir.c   | 270 +++++++++++++++-------
 drivers/net/mlx5/mlx5_rxq.c    |   2 +
 drivers/net/mlx5/mlx5_rxtx.c   | 497 +++++++++--------------------------------
 drivers/net/mlx5/mlx5_rxtx.h   |   7 +-
 drivers/net/mlx5/mlx5_txq.c    |   9 +-
 drivers/net/mlx5/mlx5_vlan.c   |   3 +-
 9 files changed, 317 insertions(+), 483 deletions(-)

-- 
2.1.4

* [PATCH V2 1/8] net/mlx5: fix inconsistent return value in Flow Director
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (8 preceding siblings ...)
  2016-09-14 11:53 ` [PATCH V2 0/8] net/mlx5: various fixes Nelio Laranjeiro
@ 2016-09-14 11:53 ` Nelio Laranjeiro
  2016-09-14 11:53 ` [PATCH V2 2/8] net/mlx5: fix Rx VLAN offload capability report Nelio Laranjeiro
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-14 11:53 UTC (permalink / raw)
  To: dev; +Cc: Yaacov Hazan

From: Yaacov Hazan <yaacovh@mellanox.com>

The return value in DPDK is a negative errno on failure.
Since internal functions in the mlx5 driver return positive
errno values, they need to be negated before being returned
to the DPDK layer.

Fixes: 76f5c99 ("mlx5: support flow director")

Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
---
 drivers/net/mlx5/mlx5_fdir.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_fdir.c b/drivers/net/mlx5/mlx5_fdir.c
index 73eb00e..8207573 100644
--- a/drivers/net/mlx5/mlx5_fdir.c
+++ b/drivers/net/mlx5/mlx5_fdir.c
@@ -955,7 +955,7 @@ mlx5_dev_filter_ctrl(struct rte_eth_dev *dev,
 		     enum rte_filter_op filter_op,
 		     void *arg)
 {
-	int ret = -EINVAL;
+	int ret = EINVAL;
 	struct priv *priv = dev->data->dev_private;
 
 	switch (filter_type) {
@@ -970,5 +970,5 @@ mlx5_dev_filter_ctrl(struct rte_eth_dev *dev,
 		break;
 	}
 
-	return ret;
+	return -ret;
 }
-- 
2.1.4

* [PATCH V2 2/8] net/mlx5: fix Rx VLAN offload capability report
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (9 preceding siblings ...)
  2016-09-14 11:53 ` [PATCH V2 1/8] net/mlx5: fix inconsistent return value in Flow Director Nelio Laranjeiro
@ 2016-09-14 11:53 ` Nelio Laranjeiro
  2016-09-14 11:53 ` [PATCH V2 3/8] net/mlx5: fix removing VLAN filter Nelio Laranjeiro
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-14 11:53 UTC (permalink / raw)
  To: dev; +Cc: Adrien Mazarguil

From: Adrien Mazarguil <adrien.mazarguil@6wind.com>

This capability is implemented but not reported.

Fixes: f3db9489188a ("mlx5: support Rx VLAN stripping")

Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
 drivers/net/mlx5/mlx5_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 130e15d..47f323e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -583,7 +583,8 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 		 (DEV_RX_OFFLOAD_IPV4_CKSUM |
 		  DEV_RX_OFFLOAD_UDP_CKSUM |
 		  DEV_RX_OFFLOAD_TCP_CKSUM) :
-		 0);
+		 0) |
+		(priv->hw_vlan_strip ? DEV_RX_OFFLOAD_VLAN_STRIP : 0);
 	if (!priv->mps)
 		info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
 	if (priv->hw_csum)
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH V2 3/8] net/mlx5: fix removing VLAN filter
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (10 preceding siblings ...)
  2016-09-14 11:53 ` [PATCH V2 2/8] net/mlx5: fix Rx VLAN offload capability report Nelio Laranjeiro
@ 2016-09-14 11:53 ` Nelio Laranjeiro
  2016-09-14 11:53 ` [PATCH V2 4/8] net/mlx5: refactor allocation of flow director queues Nelio Laranjeiro
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-14 11:53 UTC (permalink / raw)
  To: dev; +Cc: Raslan Darawsheh

From: Raslan Darawsheh <rdarawsheh@asaltech.com>

memmove was moving only as many bytes as the number of elements after
index i, while it should move that number of elements multiplied by the
size of each element.
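
As a standalone illustration (not part of the patch), with a hypothetical
uint16_t element type standing in for the VLAN filter entries:

    #include <stdint.h>
    #include <string.h>

    /* Remove element i from an array currently holding n elements. */
    static void
    remove_at(uint16_t *arr, unsigned int n, unsigned int i)
    {
            /* Buggy form: memmove(&arr[i], &arr[i + 1], n - i - 1);
             * it moves bytes, not elements. */
            memmove(&arr[i], &arr[i + 1],
                    sizeof(arr[i]) * (n - i - 1));
            arr[n - 1] = 0;
    }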

Fixes: e9086978 ("mlx5: support VLAN filtering")

Signed-off-by: Raslan Darawsheh <rdarawsheh@asaltech.com>
---
 drivers/net/mlx5/mlx5_vlan.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 4719e69..fb730e5 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -87,7 +87,8 @@ vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 		--priv->vlan_filter_n;
 		memmove(&priv->vlan_filter[i],
 			&priv->vlan_filter[i + 1],
-			priv->vlan_filter_n - i);
+			sizeof(priv->vlan_filter[i]) *
+			(priv->vlan_filter_n - i));
 		priv->vlan_filter[priv->vlan_filter_n] = 0;
 	} else {
 		assert(i == priv->vlan_filter_n);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH V2 4/8] net/mlx5: refactor allocation of flow director queues
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (11 preceding siblings ...)
  2016-09-14 11:53 ` [PATCH V2 3/8] net/mlx5: fix removing VLAN filter Nelio Laranjeiro
@ 2016-09-14 11:53 ` Nelio Laranjeiro
  2016-09-14 11:53 ` [PATCH V2 5/8] net/mlx5: fix support for flow director drop mode Nelio Laranjeiro
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-14 11:53 UTC (permalink / raw)
  To: dev; +Cc: Yaacov Hazan, Adrien Mazarguil

From: Yaacov Hazan <yaacovh@mellanox.com>

This is done to prepare support for drop queues, which are not related to
existing RX queues and need to be managed separately.

Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
 drivers/net/mlx5/mlx5.h      |   1 +
 drivers/net/mlx5/mlx5_fdir.c | 229 ++++++++++++++++++++++++++++---------------
 drivers/net/mlx5/mlx5_rxq.c  |   2 +
 drivers/net/mlx5/mlx5_rxtx.h |   4 +-
 4 files changed, 156 insertions(+), 80 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3a86609..fa78623 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -257,6 +257,7 @@ void mlx5_dev_stop(struct rte_eth_dev *);
 
 /* mlx5_fdir.c */
 
+void priv_fdir_queue_destroy(struct priv *, struct fdir_queue *);
 int fdir_init_filters_list(struct priv *);
 void priv_fdir_delete_filters_list(struct priv *);
 void priv_fdir_disable(struct priv *);
diff --git a/drivers/net/mlx5/mlx5_fdir.c b/drivers/net/mlx5/mlx5_fdir.c
index 8207573..802669b 100644
--- a/drivers/net/mlx5/mlx5_fdir.c
+++ b/drivers/net/mlx5/mlx5_fdir.c
@@ -400,6 +400,145 @@ create_flow:
 }
 
 /**
+ * Destroy a flow director queue.
+ *
+ * @param fdir_queue
+ *   Flow director queue to be destroyed.
+ */
+void
+priv_fdir_queue_destroy(struct priv *priv, struct fdir_queue *fdir_queue)
+{
+	struct mlx5_fdir_filter *fdir_filter;
+
+	/* Disable filter flows still applying to this queue. */
+	LIST_FOREACH(fdir_filter, priv->fdir_filter_list, next) {
+		unsigned int idx = fdir_filter->queue;
+		struct rxq_ctrl *rxq_ctrl =
+			container_of((*priv->rxqs)[idx], struct rxq_ctrl, rxq);
+
+		assert(idx < priv->rxqs_n);
+		if (fdir_queue == rxq_ctrl->fdir_queue &&
+		    fdir_filter->flow != NULL) {
+			claim_zero(ibv_exp_destroy_flow(fdir_filter->flow));
+			fdir_filter->flow = NULL;
+		}
+	}
+	assert(fdir_queue->qp);
+	claim_zero(ibv_destroy_qp(fdir_queue->qp));
+	assert(fdir_queue->ind_table);
+	claim_zero(ibv_exp_destroy_rwq_ind_table(fdir_queue->ind_table));
+	if (fdir_queue->wq)
+		claim_zero(ibv_exp_destroy_wq(fdir_queue->wq));
+	if (fdir_queue->cq)
+		claim_zero(ibv_destroy_cq(fdir_queue->cq));
+#ifndef NDEBUG
+	memset(fdir_queue, 0x2a, sizeof(*fdir_queue));
+#endif
+	rte_free(fdir_queue);
+}
+
+/**
+ * Create a flow director queue.
+ *
+ * @param priv
+ *   Private structure.
+ * @param wq
+ *   Work queue to route matched packets to, NULL if one needs to
+ *   be created.
+ *
+ * @return
+ *   Related flow director queue on success, NULL otherwise.
+ */
+static struct fdir_queue *
+priv_fdir_queue_create(struct priv *priv, struct ibv_exp_wq *wq,
+		       unsigned int socket)
+{
+	struct fdir_queue *fdir_queue;
+
+	fdir_queue = rte_calloc_socket(__func__, 1, sizeof(*fdir_queue),
+				       0, socket);
+	if (!fdir_queue) {
+		ERROR("cannot allocate flow director queue");
+		return NULL;
+	}
+	assert(priv->pd);
+	assert(priv->ctx);
+	if (!wq) {
+		fdir_queue->cq = ibv_exp_create_cq(
+			priv->ctx, 1, NULL, NULL, 0,
+			&(struct ibv_exp_cq_init_attr){
+				.comp_mask = 0,
+			});
+		if (!fdir_queue->cq) {
+			ERROR("cannot create flow director CQ");
+			goto error;
+		}
+		fdir_queue->wq = ibv_exp_create_wq(
+			priv->ctx,
+			&(struct ibv_exp_wq_init_attr){
+				.wq_type = IBV_EXP_WQT_RQ,
+				.max_recv_wr = 1,
+				.max_recv_sge = 1,
+				.pd = priv->pd,
+				.cq = fdir_queue->cq,
+			});
+		if (!fdir_queue->wq) {
+			ERROR("cannot create flow director WQ");
+			goto error;
+		}
+		wq = fdir_queue->wq;
+	}
+	fdir_queue->ind_table = ibv_exp_create_rwq_ind_table(
+		priv->ctx,
+		&(struct ibv_exp_rwq_ind_table_init_attr){
+			.pd = priv->pd,
+			.log_ind_tbl_size = 0,
+			.ind_tbl = &wq,
+			.comp_mask = 0,
+		});
+	if (!fdir_queue->ind_table) {
+		ERROR("cannot create flow director indirection table");
+		goto error;
+	}
+	fdir_queue->qp = ibv_exp_create_qp(
+		priv->ctx,
+		&(struct ibv_exp_qp_init_attr){
+			.qp_type = IBV_QPT_RAW_PACKET,
+			.comp_mask =
+				IBV_EXP_QP_INIT_ATTR_PD |
+				IBV_EXP_QP_INIT_ATTR_PORT |
+				IBV_EXP_QP_INIT_ATTR_RX_HASH,
+			.pd = priv->pd,
+			.rx_hash_conf = &(struct ibv_exp_rx_hash_conf){
+				.rx_hash_function =
+					IBV_EXP_RX_HASH_FUNC_TOEPLITZ,
+				.rx_hash_key_len = rss_hash_default_key_len,
+				.rx_hash_key = rss_hash_default_key,
+				.rx_hash_fields_mask = 0,
+				.rwq_ind_tbl = fdir_queue->ind_table,
+			},
+			.port_num = priv->port,
+		});
+	if (!fdir_queue->qp) {
+		ERROR("cannot create flow director hash RX QP");
+		goto error;
+	}
+	return fdir_queue;
+error:
+	assert(fdir_queue);
+	assert(!fdir_queue->qp);
+	if (fdir_queue->ind_table)
+		claim_zero(ibv_exp_destroy_rwq_ind_table
+			   (fdir_queue->ind_table));
+	if (fdir_queue->wq)
+		claim_zero(ibv_exp_destroy_wq(fdir_queue->wq));
+	if (fdir_queue->cq)
+		claim_zero(ibv_destroy_cq(fdir_queue->cq));
+	rte_free(fdir_queue);
+	return NULL;
+}
+
+/**
  * Get flow director queue for a specific RX queue, create it in case
  * it does not exist.
  *
@@ -416,74 +555,15 @@ priv_get_fdir_queue(struct priv *priv, uint16_t idx)
 {
 	struct rxq_ctrl *rxq_ctrl =
 		container_of((*priv->rxqs)[idx], struct rxq_ctrl, rxq);
-	struct fdir_queue *fdir_queue = &rxq_ctrl->fdir_queue;
-	struct ibv_exp_rwq_ind_table *ind_table = NULL;
-	struct ibv_qp *qp = NULL;
-	struct ibv_exp_rwq_ind_table_init_attr ind_init_attr;
-	struct ibv_exp_rx_hash_conf hash_conf;
-	struct ibv_exp_qp_init_attr qp_init_attr;
-	int err = 0;
-
-	/* Return immediately if it has already been created. */
-	if (fdir_queue->qp != NULL)
-		return fdir_queue;
-
-	ind_init_attr = (struct ibv_exp_rwq_ind_table_init_attr){
-		.pd = priv->pd,
-		.log_ind_tbl_size = 0,
-		.ind_tbl = &rxq_ctrl->wq,
-		.comp_mask = 0,
-	};
+	struct fdir_queue *fdir_queue = rxq_ctrl->fdir_queue;
 
-	errno = 0;
-	ind_table = ibv_exp_create_rwq_ind_table(priv->ctx,
-						 &ind_init_attr);
-	if (ind_table == NULL) {
-		/* Not clear whether errno is set. */
-		err = (errno ? errno : EINVAL);
-		ERROR("RX indirection table creation failed with error %d: %s",
-		      err, strerror(err));
-		goto error;
-	}
-
-	/* Create fdir_queue qp. */
-	hash_conf = (struct ibv_exp_rx_hash_conf){
-		.rx_hash_function = IBV_EXP_RX_HASH_FUNC_TOEPLITZ,
-		.rx_hash_key_len = rss_hash_default_key_len,
-		.rx_hash_key = rss_hash_default_key,
-		.rx_hash_fields_mask = 0,
-		.rwq_ind_tbl = ind_table,
-	};
-	qp_init_attr = (struct ibv_exp_qp_init_attr){
-		.max_inl_recv = 0, /* Currently not supported. */
-		.qp_type = IBV_QPT_RAW_PACKET,
-		.comp_mask = (IBV_EXP_QP_INIT_ATTR_PD |
-			      IBV_EXP_QP_INIT_ATTR_RX_HASH),
-		.pd = priv->pd,
-		.rx_hash_conf = &hash_conf,
-		.port_num = priv->port,
-	};
-
-	qp = ibv_exp_create_qp(priv->ctx, &qp_init_attr);
-	if (qp == NULL) {
-		err = (errno ? errno : EINVAL);
-		ERROR("hash RX QP creation failure: %s", strerror(err));
-		goto error;
+	assert(rxq_ctrl->wq);
+	if (fdir_queue == NULL) {
+		fdir_queue = priv_fdir_queue_create(priv, rxq_ctrl->wq,
+						    rxq_ctrl->socket);
+		rxq_ctrl->fdir_queue = fdir_queue;
 	}
-
-	fdir_queue->ind_table = ind_table;
-	fdir_queue->qp = qp;
-
 	return fdir_queue;
-
-error:
-	if (qp != NULL)
-		claim_zero(ibv_destroy_qp(qp));
-
-	if (ind_table != NULL)
-		claim_zero(ibv_exp_destroy_rwq_ind_table(ind_table));
-
-	return NULL;
 }
 
 /**
@@ -601,7 +681,6 @@ priv_fdir_disable(struct priv *priv)
 {
 	unsigned int i;
 	struct mlx5_fdir_filter *mlx5_fdir_filter;
-	struct fdir_queue *fdir_queue;
 
 	/* Run on every flow director filter and destroy flow handle. */
 	LIST_FOREACH(mlx5_fdir_filter, priv->fdir_filter_list, next) {
@@ -618,23 +697,15 @@ priv_fdir_disable(struct priv *priv)
 		}
 	}
 
-	/* Run on every RX queue to destroy related flow director QP and
-	 * indirection table. */
+	/* Destroy flow director context in each RX queue. */
 	for (i = 0; (i != priv->rxqs_n); i++) {
 		struct rxq_ctrl *rxq_ctrl =
 			container_of((*priv->rxqs)[i], struct rxq_ctrl, rxq);
 
-		fdir_queue = &rxq_ctrl->fdir_queue;
-		if (fdir_queue->qp != NULL) {
-			claim_zero(ibv_destroy_qp(fdir_queue->qp));
-			fdir_queue->qp = NULL;
-		}
-
-		if (fdir_queue->ind_table != NULL) {
-			claim_zero(ibv_exp_destroy_rwq_ind_table
-				   (fdir_queue->ind_table));
-			fdir_queue->ind_table = NULL;
-		}
+		if (!rxq_ctrl->fdir_queue)
+			continue;
+		priv_fdir_queue_destroy(priv, rxq_ctrl->fdir_queue);
+		rxq_ctrl->fdir_queue = NULL;
 	}
 }
 
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 29c137c..44889d1 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -745,6 +745,8 @@ rxq_cleanup(struct rxq_ctrl *rxq_ctrl)
 
 	DEBUG("cleaning up %p", (void *)rxq_ctrl);
 	rxq_free_elts(rxq_ctrl);
+	if (rxq_ctrl->fdir_queue != NULL)
+		priv_fdir_queue_destroy(rxq_ctrl->priv, rxq_ctrl->fdir_queue);
 	if (rxq_ctrl->if_wq != NULL) {
 		assert(rxq_ctrl->priv != NULL);
 		assert(rxq_ctrl->priv->ctx != NULL);
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index f6e2cba..c8a93c0 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -87,6 +87,8 @@ struct mlx5_txq_stats {
 struct fdir_queue {
 	struct ibv_qp *qp; /* Associated RX QP. */
 	struct ibv_exp_rwq_ind_table *ind_table; /* Indirection table. */
+	struct ibv_exp_wq *wq; /* Work queue. */
+	struct ibv_cq *cq; /* Completion queue. */
 };
 
 struct priv;
@@ -128,7 +130,7 @@ struct rxq_ctrl {
 	struct ibv_cq *cq; /* Completion Queue. */
 	struct ibv_exp_wq *wq; /* Work Queue. */
 	struct ibv_exp_res_domain *rd; /* Resource Domain. */
-	struct fdir_queue fdir_queue; /* Flow director queue. */
+	struct fdir_queue *fdir_queue; /* Flow director queue. */
 	struct ibv_mr *mr; /* Memory Region (for mp). */
 	struct ibv_exp_wq_family *if_wq; /* WQ burst interface. */
 	struct ibv_exp_cq_family_v1 *if_cq; /* CQ interface. */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH V2 5/8] net/mlx5: fix support for flow director drop mode
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (12 preceding siblings ...)
  2016-09-14 11:53 ` [PATCH V2 4/8] net/mlx5: refactor allocation of flow director queues Nelio Laranjeiro
@ 2016-09-14 11:53 ` Nelio Laranjeiro
  2016-09-14 11:53 ` [PATCH V2 6/8] net/mlx5: force inline for completion function Nelio Laranjeiro
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-14 11:53 UTC (permalink / raw)
  To: dev; +Cc: Yaacov Hazan, Adrien Mazarguil

From: Yaacov Hazan <yaacovh@mellanox.com>

Rejected packets were routed to a polled queue.  This patch routes them
to a dummy queue which is not polled.

Fixes: 76f5c99e6840 ("mlx5: support flow director")

Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 doc/guides/nics/mlx5.rst     |  3 ++-
 drivers/net/mlx5/mlx5.h      |  1 +
 drivers/net/mlx5/mlx5_fdir.c | 41 +++++++++++++++++++++++++++++++++++++++--
 3 files changed, 42 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 5c10cd3..8923173 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -84,7 +84,8 @@ Features
 - Promiscuous mode.
 - Multicast promiscuous mode.
 - Hardware checksum offloads.
-- Flow director (RTE_FDIR_MODE_PERFECT and RTE_FDIR_MODE_PERFECT_MAC_VLAN).
+- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
+  RTE_ETH_FDIR_REJECT).
 - Secondary process TX is supported.
 - KVM and VMware ESX SR-IOV modes are supported.
 
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index fa78623..8349e5b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -134,6 +134,7 @@ struct priv {
 	unsigned int (*reta_idx)[]; /* RETA index table. */
 	unsigned int reta_idx_n; /* RETA index size. */
 	struct fdir_filter_list *fdir_filter_list; /* Flow director rules. */
+	struct fdir_queue *fdir_drop_queue; /* Flow director drop queue. */
 	rte_spinlock_t lock; /* Lock for control functions. */
 };
 
diff --git a/drivers/net/mlx5/mlx5_fdir.c b/drivers/net/mlx5/mlx5_fdir.c
index 802669b..25c8c52 100644
--- a/drivers/net/mlx5/mlx5_fdir.c
+++ b/drivers/net/mlx5/mlx5_fdir.c
@@ -75,6 +75,7 @@ struct fdir_flow_desc {
 struct mlx5_fdir_filter {
 	LIST_ENTRY(mlx5_fdir_filter) next;
 	uint16_t queue; /* Queue assigned to if FDIR match. */
+	enum rte_eth_fdir_behavior behavior;
 	struct fdir_flow_desc desc;
 	struct ibv_exp_flow *flow;
 };
@@ -567,6 +568,33 @@ priv_get_fdir_queue(struct priv *priv, uint16_t idx)
 }
 
 /**
+ * Get the flow director drop queue, creating it if it does not exist.
+ *
+ * @param priv
+ *   Private structure.
+ *
+ * @return
+ *   Flow director drop queue on success, NULL otherwise.
+ */
+static struct fdir_queue *
+priv_get_fdir_drop_queue(struct priv *priv)
+{
+	struct fdir_queue *fdir_queue = priv->fdir_drop_queue;
+
+	if (fdir_queue == NULL) {
+		unsigned int socket = SOCKET_ID_ANY;
+
+		/* Select a known NUMA socket if possible. */
+		if (priv->rxqs_n && (*priv->rxqs)[0])
+			socket = container_of((*priv->rxqs)[0],
+					      struct rxq_ctrl, rxq)->socket;
+		fdir_queue = priv_fdir_queue_create(priv, NULL, socket);
+		priv->fdir_drop_queue = fdir_queue;
+	}
+	return fdir_queue;
+}
+
+/**
  * Enable flow director filter and create steering rules.
  *
  * @param priv
@@ -588,7 +616,11 @@ priv_fdir_filter_enable(struct priv *priv,
 		return 0;
 
 	/* Get fdir_queue for specific queue. */
-	fdir_queue = priv_get_fdir_queue(priv, mlx5_fdir_filter->queue);
+	if (mlx5_fdir_filter->behavior == RTE_ETH_FDIR_REJECT)
+		fdir_queue = priv_get_fdir_drop_queue(priv);
+	else
+		fdir_queue = priv_get_fdir_queue(priv,
+						 mlx5_fdir_filter->queue);
 
 	if (fdir_queue == NULL) {
 		ERROR("failed to create flow director rxq for queue %d",
@@ -707,6 +739,10 @@ priv_fdir_disable(struct priv *priv)
 		priv_fdir_queue_destroy(priv, rxq_ctrl->fdir_queue);
 		rxq_ctrl->fdir_queue = NULL;
 	}
+	if (priv->fdir_drop_queue) {
+		priv_fdir_queue_destroy(priv, priv->fdir_drop_queue);
+		priv->fdir_drop_queue = NULL;
+	}
 }
 
 /**
@@ -807,8 +843,9 @@ priv_fdir_filter_add(struct priv *priv,
 		return err;
 	}
 
-	/* Set queue. */
+	/* Set action parameters. */
 	mlx5_fdir_filter->queue = fdir_filter->action.rx_queue;
+	mlx5_fdir_filter->behavior = fdir_filter->action.behavior;
 
 	/* Convert to mlx5 filter descriptor. */
 	fdir_filter_to_flow_desc(fdir_filter,
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH V2 6/8] net/mlx5: force inline for completion function
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (13 preceding siblings ...)
  2016-09-14 11:53 ` [PATCH V2 5/8] net/mlx5: fix support for flow director drop mode Nelio Laranjeiro
@ 2016-09-14 11:53 ` Nelio Laranjeiro
  2016-09-14 11:53 ` [PATCH V2 7/8] net/mlx5: re-factorize functions Nelio Laranjeiro
  2016-09-14 11:53 ` [PATCH V2 8/8] net/mlx5: fix inline logic Nelio Laranjeiro
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-14 11:53 UTC (permalink / raw)
  To: dev

This function was supposed to be inlined, but was not because several
functions call it.  It should always be inlined to avoid external
function calls and to optimize code in the data path.
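
For reference, a minimal sketch of the pattern, assuming a GCC-compatible
compiler; the function name is made up for the example:

    /* The forward declaration carries the attribute, the definition
     * stays static inline. */
    static inline void hot_path_step(unsigned int *counter)
            __attribute__((always_inline));

    static inline void
    hot_path_step(unsigned int *counter)
    {
            ++*counter;
    }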

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 3757366..5c39cbb 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -152,6 +152,9 @@ check_cqe64(volatile struct mlx5_cqe64 *cqe,
 	return 0;
 }
 
+static inline void
+txq_complete(struct txq *txq) __attribute__((always_inline));
+
 /**
  * Manage TX completions.
  *
@@ -160,7 +163,7 @@ check_cqe64(volatile struct mlx5_cqe64 *cqe,
  * @param txq
  *   Pointer to TX queue structure.
  */
-static void
+static inline void
 txq_complete(struct txq *txq)
 {
 	const unsigned int elts_n = txq->elts_n;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH V2 7/8] net/mlx5: re-factorize functions
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (14 preceding siblings ...)
  2016-09-14 11:53 ` [PATCH V2 6/8] net/mlx5: force inline for completion function Nelio Laranjeiro
@ 2016-09-14 11:53 ` Nelio Laranjeiro
  2016-09-14 11:53 ` [PATCH V2 8/8] net/mlx5: fix inline logic Nelio Laranjeiro
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-14 11:53 UTC (permalink / raw)
  To: dev

Rework the logic of wqe_write() and wqe_write_vlan(), which are very
similar, in order to keep a single function.
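
A rough, standalone sketch of the merged behavior (names are made up;
only memcpy() and htonl() are standard):

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* Copy the first 16 bytes of the packet into the inline header and,
     * when requested, overwrite bytes 12..15 with an 802.1Q tag.  The
     * overwritten EtherType is then re-read from the data segment, which
     * is why the caller backs the buffer address up by 4 bytes. */
    static void
    fill_inline_hdr(uint8_t *hdr, const uint8_t *pkt, int insert_vlan,
                    uint16_t vlan_tci)
    {
            memcpy(hdr, pkt, 16);
            if (insert_vlan) {
                    uint32_t vlan = htonl(0x81000000 | vlan_tci);

                    memcpy(hdr + 12, &vlan, sizeof(vlan));
            }
    }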

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 98 ++++++++++----------------------------------
 1 file changed, 22 insertions(+), 76 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 5c39cbb..c7e538f 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -293,8 +293,8 @@ txq_mp2mr(struct txq *txq, struct rte_mempool *mp)
  *   Pointer to TX queue structure.
  * @param wqe
  *   Pointer to the WQE to fill.
- * @param addr
- *   Buffer data address.
+ * @param buf
+ *   Buffer.
  * @param length
  *   Packet length.
  * @param lkey
@@ -302,54 +302,24 @@ txq_mp2mr(struct txq *txq, struct rte_mempool *mp)
  */
 static inline void
 mlx5_wqe_write(struct txq *txq, volatile union mlx5_wqe *wqe,
-	       uintptr_t addr, uint32_t length, uint32_t lkey)
-{
-	wqe->wqe.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
-	wqe->wqe.ctrl.data[1] = htonl((txq->qp_num_8s) | 4);
-	wqe->wqe.ctrl.data[2] = 0;
-	wqe->wqe.ctrl.data[3] = 0;
-	wqe->inl.eseg.rsvd0 = 0;
-	wqe->inl.eseg.rsvd1 = 0;
-	wqe->inl.eseg.mss = 0;
-	wqe->inl.eseg.rsvd2 = 0;
-	wqe->wqe.eseg.inline_hdr_sz = htons(MLX5_ETH_INLINE_HEADER_SIZE);
-	/* Copy the first 16 bytes into inline header. */
-	rte_memcpy((uint8_t *)(uintptr_t)wqe->wqe.eseg.inline_hdr_start,
-		   (uint8_t *)(uintptr_t)addr,
-		   MLX5_ETH_INLINE_HEADER_SIZE);
-	addr += MLX5_ETH_INLINE_HEADER_SIZE;
-	length -= MLX5_ETH_INLINE_HEADER_SIZE;
-	/* Store remaining data in data segment. */
-	wqe->wqe.dseg.byte_count = htonl(length);
-	wqe->wqe.dseg.lkey = lkey;
-	wqe->wqe.dseg.addr = htonll(addr);
-	/* Increment consumer index. */
-	++txq->wqe_ci;
-}
-
-/**
- * Write a regular WQE with VLAN.
- *
- * @param txq
- *   Pointer to TX queue structure.
- * @param wqe
- *   Pointer to the WQE to fill.
- * @param addr
- *   Buffer data address.
- * @param length
- *   Packet length.
- * @param lkey
- *   Memory region lkey.
- * @param vlan_tci
- *   VLAN field to insert in packet.
- */
-static inline void
-mlx5_wqe_write_vlan(struct txq *txq, volatile union mlx5_wqe *wqe,
-		    uintptr_t addr, uint32_t length, uint32_t lkey,
-		    uint16_t vlan_tci)
+	       struct rte_mbuf *buf, uint32_t length, uint32_t lkey)
 {
-	uint32_t vlan = htonl(0x81000000 | vlan_tci);
-
+	uintptr_t addr = rte_pktmbuf_mtod(buf, uintptr_t);
+
+	rte_mov16((uint8_t *)&wqe->wqe.eseg.inline_hdr_start,
+		  (uint8_t *)addr);
+	addr += 16;
+	length -= 16;
+	/* Need to insert VLAN ? */
+	if (buf->ol_flags & PKT_TX_VLAN_PKT) {
+		uint32_t vlan = htonl(0x81000000 | buf->vlan_tci);
+
+		memcpy((uint8_t *)&wqe->wqe.eseg.inline_hdr_start + 12,
+		       &vlan, sizeof(vlan));
+		addr -= sizeof(vlan);
+		length += sizeof(vlan);
+	}
+	/* Write the WQE. */
 	wqe->wqe.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
 	wqe->wqe.ctrl.data[1] = htonl((txq->qp_num_8s) | 4);
 	wqe->wqe.ctrl.data[2] = 0;
@@ -358,20 +328,7 @@ mlx5_wqe_write_vlan(struct txq *txq, volatile union mlx5_wqe *wqe,
 	wqe->inl.eseg.rsvd1 = 0;
 	wqe->inl.eseg.mss = 0;
 	wqe->inl.eseg.rsvd2 = 0;
-	wqe->wqe.eseg.inline_hdr_sz = htons(MLX5_ETH_VLAN_INLINE_HEADER_SIZE);
-	/*
-	 * Copy 12 bytes of source & destination MAC address.
-	 * Copy 4 bytes of VLAN.
-	 * Copy 2 bytes of Ether type.
-	 */
-	rte_memcpy((uint8_t *)(uintptr_t)wqe->wqe.eseg.inline_hdr_start,
-		   (uint8_t *)(uintptr_t)addr, 12);
-	rte_memcpy((uint8_t *)((uintptr_t)wqe->wqe.eseg.inline_hdr_start + 12),
-		   &vlan, sizeof(vlan));
-	rte_memcpy((uint8_t *)((uintptr_t)wqe->wqe.eseg.inline_hdr_start + 16),
-		   (uint8_t *)((uintptr_t)addr + 12), 2);
-	addr += MLX5_ETH_VLAN_INLINE_HEADER_SIZE - sizeof(vlan);
-	length -= MLX5_ETH_VLAN_INLINE_HEADER_SIZE - sizeof(vlan);
+	wqe->wqe.eseg.inline_hdr_sz = htons(16);
 	/* Store remaining data in data segment. */
 	wqe->wqe.dseg.byte_count = htonl(length);
 	wqe->wqe.dseg.lkey = lkey;
@@ -612,7 +569,6 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	do {
 		struct rte_mbuf *buf = *(pkts++);
 		unsigned int elts_head_next;
-		uintptr_t addr;
 		uint32_t length;
 		uint32_t lkey;
 		unsigned int segs_n = buf->nb_segs;
@@ -634,8 +590,6 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		rte_prefetch0(wqe);
 		if (pkts_n)
 			rte_prefetch0(*pkts);
-		/* Retrieve buffer information. */
-		addr = rte_pktmbuf_mtod(buf, uintptr_t);
 		length = DATA_LEN(buf);
 		/* Update element. */
 		(*txq->elts)[elts_head] = buf;
@@ -645,11 +599,7 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 						       volatile void *));
 		/* Retrieve Memory Region key for this memory pool. */
 		lkey = txq_mp2mr(txq, txq_mb2mp(buf));
-		if (buf->ol_flags & PKT_TX_VLAN_PKT)
-			mlx5_wqe_write_vlan(txq, wqe, addr, length, lkey,
-					    buf->vlan_tci);
-		else
-			mlx5_wqe_write(txq, wqe, addr, length, lkey);
+		mlx5_wqe_write(txq, wqe, buf, length, lkey);
 		/* Should we enable HW CKSUM offload */
 		if (buf->ol_flags &
 		    (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
@@ -813,11 +763,7 @@ mlx5_tx_burst_inline(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		} else {
 			/* Retrieve Memory Region key for this memory pool. */
 			lkey = txq_mp2mr(txq, txq_mb2mp(buf));
-			if (buf->ol_flags & PKT_TX_VLAN_PKT)
-				mlx5_wqe_write_vlan(txq, wqe, addr, length,
-						    lkey, buf->vlan_tci);
-			else
-				mlx5_wqe_write(txq, wqe, addr, length, lkey);
+			mlx5_wqe_write(txq, wqe, buf, length, lkey);
 		}
 		while (--segs_n) {
 			/*
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH V2 8/8] net/mlx5: fix inline logic
  2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
                   ` (15 preceding siblings ...)
  2016-09-14 11:53 ` [PATCH V2 7/8] net/mlx5: re-factorize functions Nelio Laranjeiro
@ 2016-09-14 11:53 ` Nelio Laranjeiro
  16 siblings, 0 replies; 22+ messages in thread
From: Nelio Laranjeiro @ 2016-09-14 11:53 UTC (permalink / raw)
  To: dev; +Cc: Vasily Philipov

To improve performance, the NIC expects large packets to be given a
pointer to a cache-aligned address; the old inline code could break this
assumption, which hurts performance.
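
As an aside, a minimal sketch of the new accounting, using a fixed
64-byte stand-in for RTE_CACHE_LINE_SIZE (illustrative only):

    #include <stdint.h>

    #define EXAMPLE_CACHE_LINE_SIZE 64

    /* The user-requested inline size in bytes is stored as a number of
     * cache lines... */
    static uint16_t
    inline_cachelines(unsigned int txq_inline)
    {
            return (txq_inline + EXAMPLE_CACHE_LINE_SIZE - 1) /
                   EXAMPLE_CACHE_LINE_SIZE;
    }

    /* ...and converted back into a byte budget at send time. */
    static unsigned int
    inline_room_bytes(uint16_t max_inline)
    {
            return max_inline * EXAMPLE_CACHE_LINE_SIZE;
    }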

Fixes: 2a66cf378954 ("net/mlx5: support inline send")

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
---
 drivers/net/mlx5/mlx5_ethdev.c |   4 -
 drivers/net/mlx5/mlx5_rxtx.c   | 422 ++++++++++-------------------------------
 drivers/net/mlx5/mlx5_rxtx.h   |   3 +-
 drivers/net/mlx5/mlx5_txq.c    |   9 +-
 4 files changed, 103 insertions(+), 335 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 47f323e..1ae80e5 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -1398,10 +1398,6 @@ priv_select_tx_function(struct priv *priv)
 	} else if ((priv->sriov == 0) && priv->mps) {
 		priv->dev->tx_pkt_burst = mlx5_tx_burst_mpw;
 		DEBUG("selected MPW TX function");
-	} else if (priv->txq_inline && (priv->txqs_n >= priv->txqs_inline)) {
-		priv->dev->tx_pkt_burst = mlx5_tx_burst_inline;
-		DEBUG("selected inline TX function (%u >= %u queues)",
-		      priv->txqs_n, priv->txqs_inline);
 	}
 }
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index c7e538f..ecc76ad 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -297,179 +297,99 @@ txq_mp2mr(struct txq *txq, struct rte_mempool *mp)
  *   Buffer.
  * @param length
  *   Packet length.
- * @param lkey
- *   Memory region lkey.
+ *
+ * @return ds
+ *   Number of DS elements consumed.
  */
-static inline void
+static inline unsigned int
 mlx5_wqe_write(struct txq *txq, volatile union mlx5_wqe *wqe,
-	       struct rte_mbuf *buf, uint32_t length, uint32_t lkey)
+	       struct rte_mbuf *buf, uint32_t length)
 {
+	uintptr_t raw = (uintptr_t)&wqe->wqe.eseg.inline_hdr_start;
+	uint16_t ds;
+	uint16_t pkt_inline_sz = 16;
 	uintptr_t addr = rte_pktmbuf_mtod(buf, uintptr_t);
+	struct mlx5_wqe_data_seg *dseg = NULL;
 
-	rte_mov16((uint8_t *)&wqe->wqe.eseg.inline_hdr_start,
-		  (uint8_t *)addr);
-	addr += 16;
+	assert(length >= 16);
+	/* Start with the known and common part of the WQE structure. */
+	wqe->wqe.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
+	wqe->wqe.ctrl.data[2] = 0;
+	wqe->wqe.ctrl.data[3] = 0;
+	wqe->wqe.eseg.rsvd0 = 0;
+	wqe->wqe.eseg.rsvd1 = 0;
+	wqe->wqe.eseg.mss = 0;
+	wqe->wqe.eseg.rsvd2 = 0;
+	/* Start by copying the Ethernet Header. */
+	rte_mov16((uint8_t *)raw, (uint8_t *)addr);
 	length -= 16;
-	/* Need to insert VLAN ? */
+	addr += 16;
+	/* Replace the Ethernet type by the VLAN if necessary. */
 	if (buf->ol_flags & PKT_TX_VLAN_PKT) {
 		uint32_t vlan = htonl(0x81000000 | buf->vlan_tci);
 
-		memcpy((uint8_t *)&wqe->wqe.eseg.inline_hdr_start + 12,
+		memcpy((uint8_t *)(raw + 16 - sizeof(vlan)),
 		       &vlan, sizeof(vlan));
 		addr -= sizeof(vlan);
 		length += sizeof(vlan);
 	}
-	/* Write the WQE. */
-	wqe->wqe.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
-	wqe->wqe.ctrl.data[1] = htonl((txq->qp_num_8s) | 4);
-	wqe->wqe.ctrl.data[2] = 0;
-	wqe->wqe.ctrl.data[3] = 0;
-	wqe->inl.eseg.rsvd0 = 0;
-	wqe->inl.eseg.rsvd1 = 0;
-	wqe->inl.eseg.mss = 0;
-	wqe->inl.eseg.rsvd2 = 0;
-	wqe->wqe.eseg.inline_hdr_sz = htons(16);
-	/* Store remaining data in data segment. */
-	wqe->wqe.dseg.byte_count = htonl(length);
-	wqe->wqe.dseg.lkey = lkey;
-	wqe->wqe.dseg.addr = htonll(addr);
-	/* Increment consumer index. */
-	++txq->wqe_ci;
-}
-
-/**
- * Write a inline WQE.
- *
- * @param txq
- *   Pointer to TX queue structure.
- * @param wqe
- *   Pointer to the WQE to fill.
- * @param addr
- *   Buffer data address.
- * @param length
- *   Packet length.
- * @param lkey
- *   Memory region lkey.
- */
-static inline void
-mlx5_wqe_write_inline(struct txq *txq, volatile union mlx5_wqe *wqe,
-		      uintptr_t addr, uint32_t length)
-{
-	uint32_t size;
-	uint16_t wqe_cnt = txq->wqe_n - 1;
-	uint16_t wqe_ci = txq->wqe_ci + 1;
-
-	/* Copy the first 16 bytes into inline header. */
-	rte_memcpy((void *)(uintptr_t)wqe->inl.eseg.inline_hdr_start,
-		   (void *)(uintptr_t)addr,
-		   MLX5_ETH_INLINE_HEADER_SIZE);
-	addr += MLX5_ETH_INLINE_HEADER_SIZE;
-	length -= MLX5_ETH_INLINE_HEADER_SIZE;
-	size = 3 + ((4 + length + 15) / 16);
-	wqe->inl.byte_cnt = htonl(length | MLX5_INLINE_SEG);
-	rte_memcpy((void *)(uintptr_t)&wqe->inl.data[0],
-		   (void *)addr, MLX5_WQE64_INL_DATA);
-	addr += MLX5_WQE64_INL_DATA;
-	length -= MLX5_WQE64_INL_DATA;
-	while (length) {
-		volatile union mlx5_wqe *wqe_next =
-			&(*txq->wqes)[wqe_ci & wqe_cnt];
-		uint32_t copy_bytes = (length > sizeof(*wqe)) ?
-				      sizeof(*wqe) :
-				      length;
-
-		rte_mov64((uint8_t *)(uintptr_t)&wqe_next->data[0],
-			  (uint8_t *)addr);
-		addr += copy_bytes;
-		length -= copy_bytes;
-		++wqe_ci;
-	}
-	assert(size < 64);
-	wqe->inl.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
-	wqe->inl.ctrl.data[1] = htonl(txq->qp_num_8s | size);
-	wqe->inl.ctrl.data[2] = 0;
-	wqe->inl.ctrl.data[3] = 0;
-	wqe->inl.eseg.rsvd0 = 0;
-	wqe->inl.eseg.rsvd1 = 0;
-	wqe->inl.eseg.mss = 0;
-	wqe->inl.eseg.rsvd2 = 0;
-	wqe->inl.eseg.inline_hdr_sz = htons(MLX5_ETH_INLINE_HEADER_SIZE);
-	/* Increment consumer index. */
-	txq->wqe_ci = wqe_ci;
-}
-
-/**
- * Write a inline WQE with VLAN.
- *
- * @param txq
- *   Pointer to TX queue structure.
- * @param wqe
- *   Pointer to the WQE to fill.
- * @param addr
- *   Buffer data address.
- * @param length
- *   Packet length.
- * @param lkey
- *   Memory region lkey.
- * @param vlan_tci
- *   VLAN field to insert in packet.
- */
-static inline void
-mlx5_wqe_write_inline_vlan(struct txq *txq, volatile union mlx5_wqe *wqe,
-			   uintptr_t addr, uint32_t length, uint16_t vlan_tci)
-{
-	uint32_t size;
-	uint32_t wqe_cnt = txq->wqe_n - 1;
-	uint16_t wqe_ci = txq->wqe_ci + 1;
-	uint32_t vlan = htonl(0x81000000 | vlan_tci);
-
-	/*
-	 * Copy 12 bytes of source & destination MAC address.
-	 * Copy 4 bytes of VLAN.
-	 * Copy 2 bytes of Ether type.
-	 */
-	rte_memcpy((uint8_t *)(uintptr_t)wqe->inl.eseg.inline_hdr_start,
-		   (uint8_t *)addr, 12);
-	rte_memcpy((uint8_t *)(uintptr_t)wqe->inl.eseg.inline_hdr_start + 12,
-		   &vlan, sizeof(vlan));
-	rte_memcpy((uint8_t *)((uintptr_t)wqe->inl.eseg.inline_hdr_start + 16),
-		   (uint8_t *)(addr + 12), 2);
-	addr += MLX5_ETH_VLAN_INLINE_HEADER_SIZE - sizeof(vlan);
-	length -= MLX5_ETH_VLAN_INLINE_HEADER_SIZE - sizeof(vlan);
-	size = (sizeof(wqe->inl.ctrl.ctrl) +
-		sizeof(wqe->inl.eseg) +
-		sizeof(wqe->inl.byte_cnt) +
-		length + 15) / 16;
-	wqe->inl.byte_cnt = htonl(length | MLX5_INLINE_SEG);
-	rte_memcpy((void *)(uintptr_t)&wqe->inl.data[0],
-		   (void *)addr, MLX5_WQE64_INL_DATA);
-	addr += MLX5_WQE64_INL_DATA;
-	length -= MLX5_WQE64_INL_DATA;
-	while (length) {
-		volatile union mlx5_wqe *wqe_next =
-			&(*txq->wqes)[wqe_ci & wqe_cnt];
-		uint32_t copy_bytes = (length > sizeof(*wqe)) ?
-				      sizeof(*wqe) :
-				      length;
-
-		rte_mov64((uint8_t *)(uintptr_t)&wqe_next->data[0],
-			  (uint8_t *)addr);
-		addr += copy_bytes;
-		length -= copy_bytes;
-		++wqe_ci;
+	/* Inline if enough room. */
+	if (txq->max_inline != 0) {
+		uintptr_t end = (uintptr_t)&(*txq->wqes)[txq->wqe_n];
+		uint16_t max_inline = txq->max_inline * RTE_CACHE_LINE_SIZE;
+		uint16_t room;
+
+		raw += 16;
+		room = end - (uintptr_t)raw;
+		if (room > max_inline) {
+			uintptr_t addr_end = (addr + max_inline) &
+				~(RTE_CACHE_LINE_SIZE - 1);
+			uint16_t copy_b = ((addr_end - addr) > length) ?
+					  length :
+					  (addr_end - addr);
+
+			rte_memcpy((void *)raw, (void *)addr, copy_b);
+			addr += copy_b;
+			length -= copy_b;
+			pkt_inline_sz += copy_b;
+			/* Sanity check. */
+			assert(addr <= addr_end);
+		}
+		/* Store the inlined packet size in the WQE. */
+		wqe->wqe.eseg.inline_hdr_sz = htons(pkt_inline_sz);
+		/*
+		 * 2 DWORDs consumed by the WQE header + 1 DSEG +
+		 * the size of the inline part of the packet.
+		 */
+		ds = 2 + ((pkt_inline_sz - 2 + 15) / 16);
+		if (length > 0) {
+			dseg = (struct mlx5_wqe_data_seg *)
+				((uintptr_t)wqe + (ds * 16));
+			if ((uintptr_t)dseg >= end)
+				dseg = (struct mlx5_wqe_data_seg *)
+					((uintptr_t)&(*txq->wqes)[0]);
+			goto use_dseg;
+		}
+	} else {
+		/* Add the remaining packet as a simple ds. */
+		ds = 3;
+		/*
+		 * No inline has been done in the packet, only the Ethernet
+		 * Header as been stored.
+		 * header has been stored.
+		wqe->wqe.eseg.inline_hdr_sz = htons(16);
+		dseg = (struct mlx5_wqe_data_seg *)
+			((uintptr_t)wqe + (ds * 16));
+use_dseg:
+		*dseg = (struct mlx5_wqe_data_seg) {
+			.addr = htonll(addr),
+			.byte_count = htonl(length),
+			.lkey = txq_mp2mr(txq, txq_mb2mp(buf)),
+		};
+		++ds;
 	}
-	assert(size < 64);
-	wqe->inl.ctrl.data[0] = htonl((txq->wqe_ci << 8) | MLX5_OPCODE_SEND);
-	wqe->inl.ctrl.data[1] = htonl(txq->qp_num_8s | size);
-	wqe->inl.ctrl.data[2] = 0;
-	wqe->inl.ctrl.data[3] = 0;
-	wqe->inl.eseg.rsvd0 = 0;
-	wqe->inl.eseg.rsvd1 = 0;
-	wqe->inl.eseg.mss = 0;
-	wqe->inl.eseg.rsvd2 = 0;
-	wqe->inl.eseg.inline_hdr_sz = htons(MLX5_ETH_VLAN_INLINE_HEADER_SIZE);
-	/* Increment consumer index. */
-	txq->wqe_ci = wqe_ci;
+	wqe->wqe.ctrl.data[1] = htonl(txq->qp_num_8s | ds);
+	return ds;
 }
 
 /**
@@ -570,7 +490,6 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		struct rte_mbuf *buf = *(pkts++);
 		unsigned int elts_head_next;
 		uint32_t length;
-		uint32_t lkey;
 		unsigned int segs_n = buf->nb_segs;
 		volatile struct mlx5_wqe_data_seg *dseg;
 		unsigned int ds = sizeof(*wqe) / 16;
@@ -586,8 +505,8 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		--pkts_n;
 		elts_head_next = (elts_head + 1) & (elts_n - 1);
 		wqe = &(*txq->wqes)[txq->wqe_ci & (txq->wqe_n - 1)];
-		dseg = &wqe->wqe.dseg;
-		rte_prefetch0(wqe);
+		tx_prefetch_wqe(txq, txq->wqe_ci);
+		tx_prefetch_wqe(txq, txq->wqe_ci + 1);
 		if (pkts_n)
 			rte_prefetch0(*pkts);
 		length = DATA_LEN(buf);
@@ -597,9 +516,6 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		if (pkts_n)
 			rte_prefetch0(rte_pktmbuf_mtod(*pkts,
 						       volatile void *));
-		/* Retrieve Memory Region key for this memory pool. */
-		lkey = txq_mp2mr(txq, txq_mb2mp(buf));
-		mlx5_wqe_write(txq, wqe, buf, length, lkey);
 		/* Should we enable HW CKSUM offload */
 		if (buf->ol_flags &
 		    (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
@@ -609,6 +525,11 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		} else {
 			wqe->wqe.eseg.cs_flags = 0;
 		}
+		ds = mlx5_wqe_write(txq, wqe, buf, length);
+		if (segs_n == 1)
+			goto skip_segs;
+		dseg = (volatile struct mlx5_wqe_data_seg *)
+			(((uintptr_t)wqe) + ds * 16);
 		while (--segs_n) {
 			/*
 			 * Spill on next WQE when the current one does not have
@@ -639,11 +560,13 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		/* Update DS field in WQE. */
 		wqe->wqe.ctrl.data[1] &= htonl(0xffffffc0);
 		wqe->wqe.ctrl.data[1] |= htonl(ds & 0x3f);
-		elts_head = elts_head_next;
+skip_segs:
 #ifdef MLX5_PMD_SOFT_COUNTERS
 		/* Increment sent bytes counter. */
 		txq->stats.obytes += length;
 #endif
+		/* Increment consumer index. */
+		txq->wqe_ci += (ds + 3) / 4;
 		elts_head = elts_head_next;
 		++i;
 	} while (pkts_n);
@@ -672,162 +595,6 @@ mlx5_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 }
 
 /**
- * DPDK callback for TX with inline support.
- *
- * @param dpdk_txq
- *   Generic pointer to TX queue structure.
- * @param[in] pkts
- *   Packets to transmit.
- * @param pkts_n
- *   Number of packets in array.
- *
- * @return
- *   Number of packets successfully transmitted (<= pkts_n).
- */
-uint16_t
-mlx5_tx_burst_inline(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
-{
-	struct txq *txq = (struct txq *)dpdk_txq;
-	uint16_t elts_head = txq->elts_head;
-	const unsigned int elts_n = txq->elts_n;
-	unsigned int i = 0;
-	unsigned int j = 0;
-	unsigned int max;
-	unsigned int comp;
-	volatile union mlx5_wqe *wqe = NULL;
-	unsigned int max_inline = txq->max_inline;
-
-	if (unlikely(!pkts_n))
-		return 0;
-	/* Prefetch first packet cacheline. */
-	tx_prefetch_cqe(txq, txq->cq_ci);
-	tx_prefetch_cqe(txq, txq->cq_ci + 1);
-	rte_prefetch0(*pkts);
-	/* Start processing. */
-	txq_complete(txq);
-	max = (elts_n - (elts_head - txq->elts_tail));
-	if (max > elts_n)
-		max -= elts_n;
-	do {
-		struct rte_mbuf *buf = *(pkts++);
-		unsigned int elts_head_next;
-		uintptr_t addr;
-		uint32_t length;
-		uint32_t lkey;
-		unsigned int segs_n = buf->nb_segs;
-		volatile struct mlx5_wqe_data_seg *dseg;
-		unsigned int ds = sizeof(*wqe) / 16;
-
-		/*
-		 * Make sure there is enough room to store this packet and
-		 * that one ring entry remains unused.
-		 */
-		assert(segs_n);
-		if (max < segs_n + 1)
-			break;
-		max -= segs_n;
-		--pkts_n;
-		elts_head_next = (elts_head + 1) & (elts_n - 1);
-		wqe = &(*txq->wqes)[txq->wqe_ci & (txq->wqe_n - 1)];
-		dseg = &wqe->wqe.dseg;
-		tx_prefetch_wqe(txq, txq->wqe_ci);
-		tx_prefetch_wqe(txq, txq->wqe_ci + 1);
-		if (pkts_n)
-			rte_prefetch0(*pkts);
-		/* Should we enable HW CKSUM offload */
-		if (buf->ol_flags &
-		    (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
-			wqe->inl.eseg.cs_flags =
-				MLX5_ETH_WQE_L3_CSUM |
-				MLX5_ETH_WQE_L4_CSUM;
-		} else {
-			wqe->inl.eseg.cs_flags = 0;
-		}
-		/* Retrieve buffer information. */
-		addr = rte_pktmbuf_mtod(buf, uintptr_t);
-		length = DATA_LEN(buf);
-		/* Update element. */
-		(*txq->elts)[elts_head] = buf;
-		/* Prefetch next buffer data. */
-		if (pkts_n)
-			rte_prefetch0(rte_pktmbuf_mtod(*pkts,
-						       volatile void *));
-		if ((length <= max_inline) && (segs_n == 1)) {
-			if (buf->ol_flags & PKT_TX_VLAN_PKT)
-				mlx5_wqe_write_inline_vlan(txq, wqe,
-							   addr, length,
-							   buf->vlan_tci);
-			else
-				mlx5_wqe_write_inline(txq, wqe, addr, length);
-			goto skip_segs;
-		} else {
-			/* Retrieve Memory Region key for this memory pool. */
-			lkey = txq_mp2mr(txq, txq_mb2mp(buf));
-			mlx5_wqe_write(txq, wqe, buf, length, lkey);
-		}
-		while (--segs_n) {
-			/*
-			 * Spill on next WQE when the current one does not have
-			 * enough room left. Size of WQE must a be a multiple
-			 * of data segment size.
-			 */
-			assert(!(sizeof(*wqe) % sizeof(*dseg)));
-			if (!(ds % (sizeof(*wqe) / 16)))
-				dseg = (volatile void *)
-					&(*txq->wqes)[txq->wqe_ci++ &
-						      (txq->wqe_n - 1)];
-			else
-				++dseg;
-			++ds;
-			buf = buf->next;
-			assert(buf);
-			/* Store segment information. */
-			dseg->byte_count = htonl(DATA_LEN(buf));
-			dseg->lkey = txq_mp2mr(txq, txq_mb2mp(buf));
-			dseg->addr = htonll(rte_pktmbuf_mtod(buf, uintptr_t));
-			(*txq->elts)[elts_head_next] = buf;
-			elts_head_next = (elts_head_next + 1) & (elts_n - 1);
-#ifdef MLX5_PMD_SOFT_COUNTERS
-			length += DATA_LEN(buf);
-#endif
-			++j;
-		}
-		/* Update DS field in WQE. */
-		wqe->inl.ctrl.data[1] &= htonl(0xffffffc0);
-		wqe->inl.ctrl.data[1] |= htonl(ds & 0x3f);
-skip_segs:
-		elts_head = elts_head_next;
-#ifdef MLX5_PMD_SOFT_COUNTERS
-		/* Increment sent bytes counter. */
-		txq->stats.obytes += length;
-#endif
-		++i;
-	} while (pkts_n);
-	/* Take a shortcut if nothing must be sent. */
-	if (unlikely(i == 0))
-		return 0;
-	/* Check whether completion threshold has been reached. */
-	comp = txq->elts_comp + i + j;
-	if (comp >= MLX5_TX_COMP_THRESH) {
-		/* Request completion on last WQE. */
-		wqe->inl.ctrl.data[2] = htonl(8);
-		/* Save elts_head in unused "immediate" field of WQE. */
-		wqe->inl.ctrl.data[3] = elts_head;
-		txq->elts_comp = 0;
-	} else {
-		txq->elts_comp = comp;
-	}
-#ifdef MLX5_PMD_SOFT_COUNTERS
-	/* Increment sent packets counter. */
-	txq->stats.opackets += i;
-#endif
-	/* Ring QP doorbell. */
-	mlx5_tx_dbrec(txq);
-	txq->elts_head = elts_head;
-	return i;
-}
-
-/**
  * Open a MPW session.
  *
  * @param txq
@@ -1117,7 +884,7 @@ mlx5_tx_burst_mpw_inline(void *dpdk_txq, struct rte_mbuf **pkts,
 	unsigned int j = 0;
 	unsigned int max;
 	unsigned int comp;
-	unsigned int inline_room = txq->max_inline;
+	unsigned int inline_room = txq->max_inline * RTE_CACHE_LINE_SIZE;
 	struct mlx5_mpw mpw = {
 		.state = MLX5_MPW_STATE_CLOSED,
 	};
@@ -1171,7 +938,8 @@ mlx5_tx_burst_mpw_inline(void *dpdk_txq, struct rte_mbuf **pkts,
 			    (length > inline_room) ||
 			    (mpw.wqe->mpw_inl.eseg.cs_flags != cs_flags)) {
 				mlx5_mpw_inline_close(txq, &mpw);
-				inline_room = txq->max_inline;
+				inline_room =
+					txq->max_inline * RTE_CACHE_LINE_SIZE;
 			}
 		}
 		if (mpw.state == MLX5_MPW_STATE_CLOSED) {
@@ -1187,7 +955,8 @@ mlx5_tx_burst_mpw_inline(void *dpdk_txq, struct rte_mbuf **pkts,
 		/* Multi-segment packets must be alone in their MPW. */
 		assert((segs_n == 1) || (mpw.pkts_n == 0));
 		if (mpw.state == MLX5_MPW_STATE_OPENED) {
-			assert(inline_room == txq->max_inline);
+			assert(inline_room ==
+			       txq->max_inline * RTE_CACHE_LINE_SIZE);
 #if defined(MLX5_PMD_SOFT_COUNTERS) || !defined(NDEBUG)
 			length = 0;
 #endif
@@ -1252,7 +1021,8 @@ mlx5_tx_burst_mpw_inline(void *dpdk_txq, struct rte_mbuf **pkts,
 			++j;
 			if (mpw.pkts_n == MLX5_MPW_DSEG_MAX) {
 				mlx5_mpw_inline_close(txq, &mpw);
-				inline_room = txq->max_inline;
+				inline_room =
+					txq->max_inline * RTE_CACHE_LINE_SIZE;
 			} else {
 				inline_room -= length;
 			}
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index c8a93c0..8c568ad 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -249,7 +249,7 @@ struct txq {
 	uint16_t wqe_n; /* Number of WQ elements. */
 	uint16_t bf_offset; /* Blueflame offset. */
 	uint16_t bf_buf_size; /* Blueflame size. */
-	uint16_t max_inline; /* Maximum size to inline in a WQE. */
+	uint16_t max_inline; /* Multiple of RTE_CACHE_LINE_SIZE to inline. */
 	uint32_t qp_num_8s; /* QP number shifted by 8. */
 	volatile struct mlx5_cqe (*cqes)[]; /* Completion queue. */
 	volatile union mlx5_wqe (*wqes)[]; /* Work queue. */
@@ -314,7 +314,6 @@ uint16_t mlx5_tx_burst_secondary_setup(void *, struct rte_mbuf **, uint16_t);
 /* mlx5_rxtx.c */
 
 uint16_t mlx5_tx_burst(void *, struct rte_mbuf **, uint16_t);
-uint16_t mlx5_tx_burst_inline(void *, struct rte_mbuf **, uint16_t);
 uint16_t mlx5_tx_burst_mpw(void *, struct rte_mbuf **, uint16_t);
 uint16_t mlx5_tx_burst_mpw_inline(void *, struct rte_mbuf **, uint16_t);
 uint16_t mlx5_rx_burst(void *, struct rte_mbuf **, uint16_t);
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 6fe61c4..5ddd2fb 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -338,9 +338,12 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl *txq_ctrl,
 		.comp_mask = (IBV_EXP_QP_INIT_ATTR_PD |
 			      IBV_EXP_QP_INIT_ATTR_RES_DOMAIN),
 	};
-	if (priv->txq_inline && priv->txqs_n >= priv->txqs_inline) {
-		tmpl.txq.max_inline = priv->txq_inline;
-		attr.init.cap.max_inline_data = tmpl.txq.max_inline;
+	if (priv->txq_inline && (priv->txqs_n >= priv->txqs_inline)) {
+		tmpl.txq.max_inline =
+			((priv->txq_inline + (RTE_CACHE_LINE_SIZE - 1)) /
+			 RTE_CACHE_LINE_SIZE);
+		attr.init.cap.max_inline_data =
+			tmpl.txq.max_inline * RTE_CACHE_LINE_SIZE;
 	}
 	tmpl.qp = ibv_exp_create_qp(priv->ctx, &attr.init);
 	if (tmpl.qp == NULL) {
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH V2 0/8] net/mlx5: various fixes
  2016-09-14 11:53 ` [PATCH V2 0/8] net/mlx5: various fixes Nelio Laranjeiro
@ 2016-09-14 12:21   ` Nélio Laranjeiro
  2016-09-19 15:30   ` Bruce Richardson
  1 sibling, 0 replies; 22+ messages in thread
From: Nélio Laranjeiro @ 2016-09-14 12:21 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev

On Wed, Sep 14, 2016 at 01:53:47PM +0200, Nelio Laranjeiro wrote:
>  - Flow director
>  - Rx Capabilities
>  - Inline
> 
> Changes in V2:
> 
>  - Fix a compilation error.
> 
> Adrien Mazarguil (1):
>   net/mlx5: fix Rx VLAN offload capability report
> 
> Nelio Laranjeiro (3):
>   net/mlx5: force inline for completion function
>   net/mlx5: re-factorize functions
>   net/mlx5: fix inline logic
> 
> Raslan Darawsheh (1):
>   net/mlx5: fix removing VLAN filter
> 
> Yaacov Hazan (3):
>   net/mlx5: fix inconsistent return value in Flow Director
>   net/mlx5: refactor allocation of flow director queues
>   net/mlx5: fix support for flow director drop mode
> 
>  doc/guides/nics/mlx5.rst       |   3 +-
>  drivers/net/mlx5/mlx5.h        |   2 +
>  drivers/net/mlx5/mlx5_ethdev.c |   7 +-
>  drivers/net/mlx5/mlx5_fdir.c   | 270 +++++++++++++++-------
>  drivers/net/mlx5/mlx5_rxq.c    |   2 +
>  drivers/net/mlx5/mlx5_rxtx.c   | 497 +++++++++--------------------------------
>  drivers/net/mlx5/mlx5_rxtx.h   |   7 +-
>  drivers/net/mlx5/mlx5_txq.c    |   9 +-
>  drivers/net/mlx5/mlx5_vlan.c   |   3 +-
>  9 files changed, 317 insertions(+), 483 deletions(-)
> 
> -- 
> 2.1.4
> 

Sorry Ferruh, I forgot to add you to this series.

-- 
Nélio Laranjeiro
6WIND

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH V2 0/8] net/mlx5: various fixes
  2016-09-14 11:53 ` [PATCH V2 0/8] net/mlx5: various fixes Nelio Laranjeiro
  2016-09-14 12:21   ` Nélio Laranjeiro
@ 2016-09-19 15:30   ` Bruce Richardson
  1 sibling, 0 replies; 22+ messages in thread
From: Bruce Richardson @ 2016-09-19 15:30 UTC (permalink / raw)
  To: Nelio Laranjeiro; +Cc: dev

On Wed, Sep 14, 2016 at 01:53:47PM +0200, Nelio Laranjeiro wrote:
>  - Flow director
>  - Rx Capabilities
>  - Inline
> 
> Changes in V2:
> 
>  - Fix a compilation error.
> 
> Adrien Mazarguil (1):
>   net/mlx5: fix Rx VLAN offload capability report
> 
> Nelio Laranjeiro (3):
>   net/mlx5: force inline for completion function
>   net/mlx5: re-factorize functions
>   net/mlx5: fix inline logic
> 
> Raslan Darawsheh (1):
>   net/mlx5: fix removing VLAN filter
> 
> Yaacov Hazan (3):
>   net/mlx5: fix inconsistent return value in Flow Director
>   net/mlx5: refactor allocation of flow director queues
>   net/mlx5: fix support for flow director drop mode
> 
Applied to dpdk-next-net/rel_16_11

/Bruce

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2016-09-19 15:30 UTC | newest]

Thread overview: 22+ messages
2016-09-07  7:02 [PATCH 0/8] net/mlx5: various fixes Nelio Laranjeiro
2016-09-07  7:02 ` [PATCH 1/8] net/mlx5: fix inconsistent return value in Flow Director Nelio Laranjeiro
2016-09-07  7:02 ` [PATCH 2/8] net/mlx5: fix Rx VLAN offload capability report Nelio Laranjeiro
2016-09-07  7:02 ` [PATCH 3/8] net/mlx5: fix removing VLAN filter Nelio Laranjeiro
2016-09-07  7:02 ` [PATCH 4/8] net/mlx5: refactor allocation of flow director queues Nelio Laranjeiro
2016-09-07  7:02 ` [PATCH 5/8] net/mlx5: fix support for flow director drop mode Nelio Laranjeiro
2016-09-07  7:02 ` [PATCH 6/8] net/mlx5: force inline for completion function Nelio Laranjeiro
2016-09-07  7:02 ` [PATCH 7/8] net/mlx5: re-factorize functions Nelio Laranjeiro
2016-09-07  7:02 ` [PATCH 8/8] net/mlx5: fix inline logic Nelio Laranjeiro
2016-09-14 10:43   ` Ferruh Yigit
2016-09-14 11:07     ` Nélio Laranjeiro
2016-09-14 11:53 ` [PATCH V2 0/8] net/mlx5: various fixes Nelio Laranjeiro
2016-09-14 12:21   ` Nélio Laranjeiro
2016-09-19 15:30   ` Bruce Richardson
2016-09-14 11:53 ` [PATCH V2 1/8] net/mlx5: fix inconsistent return value in Flow Director Nelio Laranjeiro
2016-09-14 11:53 ` [PATCH V2 2/8] net/mlx5: fix Rx VLAN offload capability report Nelio Laranjeiro
2016-09-14 11:53 ` [PATCH V2 3/8] net/mlx5: fix removing VLAN filter Nelio Laranjeiro
2016-09-14 11:53 ` [PATCH V2 4/8] net/mlx5: refactor allocation of flow director queues Nelio Laranjeiro
2016-09-14 11:53 ` [PATCH V2 5/8] net/mlx5: fix support for flow director drop mode Nelio Laranjeiro
2016-09-14 11:53 ` [PATCH V2 6/8] net/mlx5: force inline for completion function Nelio Laranjeiro
2016-09-14 11:53 ` [PATCH V2 7/8] net/mlx5: re-factorize functions Nelio Laranjeiro
2016-09-14 11:53 ` [PATCH V2 8/8] net/mlx5: fix inline logic Nelio Laranjeiro
