* [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO
@ 2019-07-22  9:12 Matan Azrad
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 01/28] net/mlx5: remove redundant item from union Matan Azrad
                   ` (30 more replies)
  0 siblings, 31 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

Introduction:
LRO (Large Receive Offload) is intended to reduce host CPU overhead when processing Rx TCP packets.
LRO works by aggregating multiple incoming packets from a single stream into a larger buffer before they are passed higher up the networking stack, thus reducing the number of packets that have to be processed.

Use:
The MLX5 PMD queries the HCA capabilities on initialization to check whether LRO is supported and can be used.
LRO in the MLX5 PMD is intended for applications handling a relatively small number of flows.
LRO support can be enabled only per port.
In each LRO session, packets of the same flow are coalesced until one of the following occurs:
  *   Buffer size limit is exceeded.
  *   Session timeout is exceeded.
  *   Packet from a different flow is received on the same queue.

When an LRO session ends, the coalesced packet is passed to the PMD, which updates the header fields before passing the packet to the application.
For efficient memory utilization, the MPRQ (Multi-Packet RQ) mechanism is used.
Support for non-LRO flows is not impacted.

Existing API:
Offload capability DEV_RX_OFFLOAD_TCP_LRO indicates that the device supports LRO.
testpmd command-line option "--enable-lro" will be used to request LRO feature enable on application start.
Setting rx_offload "tcp_lro" on or off in testpmd will be used to request LRO feature enable or disable at runtime.
Offload flag PKT_RX_LRO will be used; set in an Rx mbuf, it indicates an LRO coalesced packet.
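
Not part of this series: a minimal application-side sketch of the above,
using only the standard ethdev/mbuf APIs named in this section (port and
queue setup, and most error handling, are elided):

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
enable_lro(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO))
		return -ENOTSUP; /* Device does not support LRO. */
	conf->rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
	return rte_eth_dev_configure(port_id, 1, 1, conf);
}

/* In the Rx path, coalesced packets carry the PKT_RX_LRO flag. */
static uint64_t
count_lro_packets(struct rte_mbuf **pkts, uint16_t n)
{
	uint64_t cnt = 0;
	uint16_t i;

	for (i = 0; i < n; i++)
		if (pkts[i]->ol_flags & PKT_RX_LRO)
			cnt++;
	return cnt;
}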

New API:
PMD configuration parameter lro_timeout_usec will be added.
This parameter can be used by the application to select the LRO session timeout (in microseconds).
If this value is not specified, the minimal value supported by the device will be used.
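
For example, with a hypothetical PCI address and the testpmd option
described above:

	testpmd -w 0000:03:00.0,lro_timeout_usec=32 -- --enable-lro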

Known limitations:
mbuf headroom is zero for any packet if LRO is configured on the port.
Keep CRC offload cannot be supported with LRO.
CQE compression is not supported with LRO.

Dekel Peled (23):
  net/mlx5: remove redundant item from union
  net/mlx5: add LRO APIs and initial settings
  net/mlx5: support LRO caps query using devx API
  net/mlx5: glue func for queue query using new API
  net/mlx5: glue function for action using new API
  net/mlx5: check conditions to enable LRO
  net/mlx5: support Tx interface query using new API
  net/mlx5: update Tx queue create for LRO
  net/mlx5: create advanced RxQ object using new API
  net/mlx5: modify advanced RxQ object using new API
  net/mlx5: create advanced Rx object using new API
  net/mlx5: create advanced RxQ table using new API
  net/mlx5: allocate door-bells using new API
  net/mlx5: rename RxQ verbs to general RxQ object
  net/mlx5: rename verbs indirection table to obj
  net/mlx5: rename hash RxQ verbs to general
  net/mlx5: update queue state modify function
  net/mlx5: store protection domain number on create
  net/mlx5: func to create Rx verbs completion queue
  net/mlx5: function to create Rx verbs work queue
  net/mlx5: create advanced RxQ using new API
  net/mlx5: support LRO with single RxQ object
  doc: update MLX5 doc and release notes with LRO

Matan Azrad (5):
  net/mlx5: replace the external mbuf shared memory
  net/mlx5: update LRO fields in completion entry
  net/mlx5: handle LRO packets in Rx queue
  net/mlx5: zero the LRO mbuf headroom
  net/mlx5: adjust the maximum LRO message size

 doc/guides/nics/features/mlx5.ini      |    1 +
 doc/guides/nics/mlx5.rst               |   14 +
 doc/guides/rel_notes/release_19_08.rst |    2 +-
 drivers/net/mlx5/Makefile              |    5 +
 drivers/net/mlx5/meson.build           |    2 +
 drivers/net/mlx5/mlx5.c                |  223 ++++++-
 drivers/net/mlx5/mlx5.h                |  160 ++++-
 drivers/net/mlx5/mlx5_devx_cmds.c      |  326 +++++++++
 drivers/net/mlx5/mlx5_ethdev.c         |   14 +-
 drivers/net/mlx5/mlx5_flow.h           |    6 +
 drivers/net/mlx5/mlx5_flow_dv.c        |   28 +-
 drivers/net/mlx5/mlx5_flow_verbs.c     |    3 +-
 drivers/net/mlx5/mlx5_glue.c           |   33 +
 drivers/net/mlx5/mlx5_glue.h           |    6 +-
 drivers/net/mlx5/mlx5_prm.h            |  379 ++++++++++-
 drivers/net/mlx5/mlx5_rxq.c            | 1135 ++++++++++++++++++++++----------
 drivers/net/mlx5/mlx5_rxtx.c           |  167 ++++-
 drivers/net/mlx5/mlx5_rxtx.h           |   80 ++-
 drivers/net/mlx5/mlx5_rxtx_vec.h       |    6 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h   |   16 +-
 drivers/net/mlx5/mlx5_trigger.c        |   12 +-
 drivers/net/mlx5/mlx5_txq.c            |   27 +-
 drivers/net/mlx5/mlx5_vlan.c           |   32 +-
 23 files changed, 2197 insertions(+), 480 deletions(-)

-- 
1.8.3.1



* [dpdk-dev] [PATCH 01/28] net/mlx5: remove redundant item from union
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:17   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 02/28] net/mlx5: add LRO APIs and initial settings Matan Azrad
                   ` (29 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

A variable of type struct ibv_cq_ex is declared in two unions but is
never used.
This patch removes the two redundant declarations.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 1 -
 drivers/net/mlx5/mlx5_txq.c | 1 -
 2 files changed, 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 39b8b7a..0535ce3 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -839,7 +839,6 @@ struct mlx5_rxq_ibv *
 			struct mlx5dv_wq_init_attr mlx5;
 #endif
 		} wq;
-		struct ibv_cq_ex cq_attr;
 	} attr;
 	unsigned int cqe_n;
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 2f3aa5b..dbad361 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -388,7 +388,6 @@ struct mlx5_txq_ibv *
 		struct ibv_qp_init_attr_ex init;
 		struct ibv_cq_init_attr_ex cq;
 		struct ibv_qp_attr mod;
-		struct ibv_cq_ex cq_attr;
 	} attr;
 	unsigned int cqe_n;
 	struct mlx5dv_qp qp = { .comp_mask = MLX5DV_QP_MASK_UAR_MMAP_OFFSET };
-- 
1.8.3.1



* [dpdk-dev] [PATCH 02/28] net/mlx5: add LRO APIs and initial settings
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 01/28] net/mlx5: remove redundant item from union Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:25   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 03/28] net/mlx5: support LRO caps query using devx API Matan Azrad
                   ` (28 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Add command-line argument to set LRO session timeout.
Add LRO settings struct in PMD configuration struct.
Add support of LRO offload in port configuration.
Add macros and function to check if LRO is supported and enabled.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c        |  6 ++++++
 drivers/net/mlx5/mlx5.h        | 16 ++++++++++++++++
 drivers/net/mlx5/mlx5_ethdev.c |  2 +-
 drivers/net/mlx5/mlx5_rxq.c    | 22 +++++++++++++++++++++-
 drivers/net/mlx5/mlx5_rxtx.h   |  3 ++-
 5 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 37d3c08..5bef431 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -138,6 +138,9 @@
 /* Device parameter to configure the maximum number of dump files per queue. */
 #define MLX5_MAX_DUMP_FILES_NUM "max_dump_files_num"
 
+/* Configure timeout of LRO session (in microseconds). */
+#define MLX5_LRO_TIMEOUT_USEC "lro_timeout_usec"
+
 #ifndef HAVE_IBV_MLX5_MOD_MPW
 #define MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED (1 << 2)
 #define MLX5DV_CONTEXT_FLAGS_ENHANCED_MPW (1 << 3)
@@ -1052,6 +1055,8 @@ struct mlx5_dev_spawn_data {
 		config->mr_ext_memseg_en = !!tmp;
 	} else if (strcmp(MLX5_MAX_DUMP_FILES_NUM, key) == 0) {
 		config->max_dump_files_num = tmp;
+	} else if (strcmp(MLX5_LRO_TIMEOUT_USEC, key) == 0) {
+		config->lro.timeout = tmp;
 	} else {
 		DRV_LOG(WARNING, "%s: unknown parameter", key);
 		rte_errno = EINVAL;
@@ -1100,6 +1105,7 @@ struct mlx5_dev_spawn_data {
 		MLX5_MR_EXT_MEMSEG_EN,
 		MLX5_REPRESENTOR,
 		MLX5_MAX_DUMP_FILES_NUM,
+		MLX5_LRO_TIMEOUT_USEC,
 		NULL,
 	};
 	struct rte_kvargs *kvlist;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7aad94d..4074a7e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -183,6 +183,21 @@ struct mlx5_hca_attr {
 /* Default PMD specific parameter value. */
 #define MLX5_ARG_UNSET (-1)
 
+#define MLX5_LRO_SUPPORTED(dev) \
+	(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
+
+#define MLX5_LRO_ENABLED(dev) \
+	((dev)->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+
+#define MLX5_FLOW_IPV4_LRO	(1 << 0)
+#define MLX5_FLOW_IPV6_LRO	(1 << 1)
+
+/* LRO configurations structure. */
+struct mlx5_lro_config {
+	uint32_t supported:1; /* Whether LRO is supported. */
+	uint32_t timeout; /* User configuration. */
+};
+
 /*
  * Device configuration structure.
  *
@@ -233,6 +248,7 @@ struct mlx5_dev_config {
 	int txq_inline_max; /* Max packet size for inlining with SEND. */
 	int txq_inline_mpw; /* Max packet size for inlining with eMPW. */
 	struct mlx5_hca_attr hca_attr; /* HCA attributes. */
+	struct mlx5_lro_config lro; /* LRO configuration. */
 };
 
 /**
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 6c9bcf1..1125e16 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -659,7 +659,7 @@ struct ethtool_link_settings {
 	info->max_tx_queues = max;
 	info->max_mac_addrs = MLX5_MAX_UC_MAC_ADDRESSES;
 	info->rx_queue_offload_capa = mlx5_get_rx_queue_offloads(dev);
-	info->rx_offload_capa = (mlx5_get_rx_port_offloads() |
+	info->rx_offload_capa = (mlx5_get_rx_port_offloads(dev) |
 				 info->rx_queue_offload_capa);
 	info->tx_offload_capa = mlx5_get_tx_port_offloads(dev);
 	if (mlx5_get_ifname(dev, &ifname) == 0)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 0535ce3..e68de50 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -124,6 +124,21 @@
 }
 
 /**
+ * Check whether LRO is supported and enabled for the device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   0 if disabled, 1 if enabled.
+ */
+inline int
+mlx5_lro_on(struct rte_eth_dev *dev)
+{
+	return (MLX5_LRO_SUPPORTED(dev) && MLX5_LRO_ENABLED(dev));
+}
+
+/**
  * Allocate RX queue elements for Multi-Packet RQ.
  *
  * @param rxq_ctrl
@@ -386,14 +401,19 @@
 /**
  * Returns the per-port supported offloads.
  *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
  * @return
  *   Supported Rx offloads.
  */
 uint64_t
-mlx5_get_rx_port_offloads(void)
+mlx5_get_rx_port_offloads(struct rte_eth_dev *dev)
 {
 	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
 
+	if (MLX5_LRO_SUPPORTED(dev))
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
 	return offloads;
 }
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index bc8a61a..2f688ac 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -323,8 +323,9 @@ struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
 int mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev);
 struct mlx5_hrxq *mlx5_hrxq_drop_new(struct rte_eth_dev *dev);
 void mlx5_hrxq_drop_release(struct rte_eth_dev *dev);
-uint64_t mlx5_get_rx_port_offloads(void);
+uint64_t mlx5_get_rx_port_offloads(struct rte_eth_dev *dev);
 uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
+int mlx5_lro_on(struct rte_eth_dev *dev);
 
 /* mlx5_txq.c */
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH 03/28] net/mlx5: support LRO caps query using devx API
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 01/28] net/mlx5: remove redundant item from union Matan Azrad
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 02/28] net/mlx5: add LRO APIs and initial settings Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:17   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 04/28] net/mlx5: glue func for queue query using new API Matan Azrad
                   ` (27 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Update function mlx5_devx_cmd_query_hca_attr() to query HCA
capabilities related to LRO.

Add relevant structs in drivers/net/mlx5/mlx5_prm.h.
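
Background note, not part of the patch: each "u8 name[N]" member in the
mlx5_ifc_*_bits structs describes an N-bit field of the device's
big-endian PRM layout, and the MLX5_GET()/MLX5_SET() accessors derive
the dword index and bit offset from the struct definition. Conceptually,
for a field narrower than 32 bits, the read side amounts to:

#include <stdint.h>
#include <rte_byteorder.h>

/* 'off' is the field's bit offset from the start of the structure,
 * 'bits' its width (< 32); such fields never cross a dword boundary.
 */
static inline uint32_t
prm_get_bits(const void *base, unsigned int off, unsigned int bits)
{
	uint32_t dw = rte_be_to_cpu_32(((const uint32_t *)base)[off / 32]);

	return (dw >> (32 - off % 32 - bits)) & ((1u << bits) - 1);
}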

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           |  8 ++++++
 drivers/net/mlx5/mlx5_devx_cmds.c | 14 ++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 58 ++++++++++++++++++++-------------------
 3 files changed, 52 insertions(+), 28 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 4074a7e..ff407f6 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -165,6 +165,9 @@ struct mlx5_devx_mkey_attr {
 	uint32_t pd;
 };
 
+/* HCA supports this number of time periods for LRO. */
+#define MLX5_LRO_NUM_SUPP_PERIODS 4
+
 /* HCA attributes. */
 struct mlx5_hca_attr {
 	uint32_t eswitch_manager:1;
@@ -175,6 +178,11 @@ struct mlx5_hca_attr {
 	uint32_t wqe_vlan_insert:1;
 	uint32_t wqe_inline_mode:2;
 	uint32_t vport_inline_mode:3;
+	uint32_t lro_cap:1;
+	uint32_t tunnel_lro_gre:1;
+	uint32_t tunnel_lro_vxlan:1;
+	uint32_t lro_max_msg_sz_mode:2;
+	uint32_t lro_timer_supported_periods[MLX5_LRO_NUM_SUPP_PERIODS];
 };
 
 /* Flow list . */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index 18f8ab6..1cba00f 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -360,6 +360,20 @@ struct mlx5_devx_obj *
 	hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
 	attr->wqe_vlan_insert = MLX5_GET(per_protocol_networking_offload_caps,
 					 hcattr, wqe_vlan_insert);
+	attr->lro_cap = MLX5_GET(per_protocol_networking_offload_caps, hcattr,
+				 lro_cap);
+	attr->tunnel_lro_gre = MLX5_GET(per_protocol_networking_offload_caps,
+					hcattr, tunnel_lro_gre);
+	attr->tunnel_lro_vxlan = MLX5_GET(per_protocol_networking_offload_caps,
+					  hcattr, tunnel_lro_vxlan);
+	attr->lro_max_msg_sz_mode = MLX5_GET
+					(per_protocol_networking_offload_caps,
+					 hcattr, lro_max_msg_sz_mode);
+	for (int i = 0 ; i < MLX5_LRO_NUM_SUPP_PERIODS ; i++) {
+		attr->lro_timer_supported_periods[i] =
+			MLX5_GET(per_protocol_networking_offload_caps, hcattr,
+				 lro_timer_supported_periods[i]);
+	}
 	attr->wqe_inline_mode = MLX5_GET(per_protocol_networking_offload_caps,
 					 hcattr, wqe_inline_mode);
 	if (attr->wqe_inline_mode != MLX5_CAP_INLINE_MODE_VPORT_CONTEXT)
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 3c2b3d8..4f20dea 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -1084,16 +1084,42 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8 reserved_at_61f[0x1e1];
 };
 
+struct mlx5_ifc_qos_cap_bits {
+	u8 packet_pacing[0x1];
+	u8 esw_scheduling[0x1];
+	u8 esw_bw_share[0x1];
+	u8 esw_rate_limit[0x1];
+	u8 reserved_at_4[0x1];
+	u8 packet_pacing_burst_bound[0x1];
+	u8 packet_pacing_typical_size[0x1];
+	u8 flow_meter_srtcm[0x1];
+	u8 reserved_at_8[0x8];
+	u8 log_max_flow_meter[0x8];
+	u8 flow_meter_reg_id[0x8];
+	u8 reserved_at_25[0x20];
+	u8 packet_pacing_max_rate[0x20];
+	u8 packet_pacing_min_rate[0x20];
+	u8 reserved_at_80[0x10];
+	u8 packet_pacing_rate_table_size[0x10];
+	u8 esw_element_type[0x10];
+	u8 esw_tsar_type[0x10];
+	u8 reserved_at_c0[0x10];
+	u8 max_qos_para_vport[0x10];
+	u8 max_tsar_bw_share[0x20];
+	u8 reserved_at_100[0x6e8];
+};
+
 struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
 	u8 csum_cap[0x1];
 	u8 vlan_cap[0x1];
 	u8 lro_cap[0x1];
 	u8 lro_psh_flag[0x1];
 	u8 lro_time_stamp[0x1];
-	u8 reserved_at_5[0x2];
+	u8 lro_max_msg_sz_mode[0x2];
 	u8 wqe_vlan_insert[0x1];
 	u8 self_lb_en_modifiable[0x1];
-	u8 reserved_at_9[0x2];
+	u8 self_lb_mc[0x1];
+	u8 self_lb_uc[0x1];
 	u8 max_lso_cap[0x5];
 	u8 multi_pkt_send_wqe[0x2];
 	u8 wqe_inline_mode[0x2];
@@ -1102,7 +1128,8 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
 	u8 scatter_fcs[0x1];
 	u8 enhanced_multi_pkt_send_wqe[0x1];
 	u8 tunnel_lso_const_out_ip_id[0x1];
-	u8 reserved_at_1c[0x2];
+	u8 tunnel_lro_gre[0x1];
+	u8 tunnel_lro_vxlan[0x1];
 	u8 tunnel_stateless_gre[0x1];
 	u8 tunnel_stateless_vxlan[0x1];
 	u8 swp[0x1];
@@ -1120,31 +1147,6 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
 	u8 reserved_at_200[0x600];
 };
 
-struct mlx5_ifc_qos_cap_bits {
-	u8 packet_pacing[0x1];
-	u8 esw_scheduling[0x1];
-	u8 esw_bw_share[0x1];
-	u8 esw_rate_limit[0x1];
-	u8 reserved_at_4[0x1];
-	u8 packet_pacing_burst_bound[0x1];
-	u8 packet_pacing_typical_size[0x1];
-	u8 flow_meter_srtcm[0x1];
-	u8 reserved_at_8[0x8];
-	u8 log_max_flow_meter[0x8];
-	u8 flow_meter_reg_id[0x8];
-	u8 reserved_at_25[0x20];
-	u8 packet_pacing_max_rate[0x20];
-	u8 packet_pacing_min_rate[0x20];
-	u8 reserved_at_80[0x10];
-	u8 packet_pacing_rate_table_size[0x10];
-	u8 esw_element_type[0x10];
-	u8 esw_tsar_type[0x10];
-	u8 reserved_at_c0[0x10];
-	u8 max_qos_para_vport[0x10];
-	u8 max_tsar_bw_share[0x20];
-	u8 reserved_at_100[0x6e8];
-};
-
 union mlx5_ifc_hca_cap_union_bits {
 	struct mlx5_ifc_cmd_hca_cap_bits cmd_hca_cap;
 	struct mlx5_ifc_per_protocol_networking_offload_caps_bits
-- 
1.8.3.1



* [dpdk-dev] [PATCH 04/28] net/mlx5: glue func for queue query using new API
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (2 preceding siblings ...)
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 03/28] net/mlx5: support LRO caps query using devx API Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:18   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 05/28] net/mlx5: glue function for action " Matan Azrad
                   ` (26 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Add function mlx5_glue_devx_qp_query().
Add glue function pointer devx_qp_query to run it.
Glue version updated to 19.08.0.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_glue.c | 19 +++++++++++++++++++
 drivers/net/mlx5/mlx5_glue.h |  3 +++
 2 files changed, 22 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_glue.c b/drivers/net/mlx5/mlx5_glue.c
index 942f89d..05474a0 100644
--- a/drivers/net/mlx5/mlx5_glue.c
+++ b/drivers/net/mlx5/mlx5_glue.c
@@ -934,6 +934,24 @@
 #endif
 }
 
+static int
+mlx5_glue_devx_qp_query(struct ibv_qp *qp,
+			const void *in, size_t inlen,
+			void *out, size_t outlen)
+{
+#ifdef HAVE_IBV_DEVX_ASYNC
+	return mlx5dv_devx_qp_query(qp, in, inlen, out, outlen);
+#else
+	(void)qp;
+	(void)in;
+	(void)inlen;
+	(void)out;
+	(void)outlen;
+	errno = ENOTSUP;
+	return errno;
+#endif
+}
+
 alignas(RTE_CACHE_LINE_SIZE)
 const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue){
 	.version = MLX5_GLUE_VERSION,
@@ -1021,4 +1039,5 @@
 	.devx_get_async_cmd_comp = mlx5_glue_devx_get_async_cmd_comp,
 	.devx_umem_reg = mlx5_glue_devx_umem_reg,
 	.devx_umem_dereg = mlx5_glue_devx_umem_dereg,
+	.devx_qp_query = mlx5_glue_devx_qp_query,
 };
diff --git a/drivers/net/mlx5/mlx5_glue.h b/drivers/net/mlx5/mlx5_glue.h
index 9facdb9..d5c7523 100644
--- a/drivers/net/mlx5/mlx5_glue.h
+++ b/drivers/net/mlx5/mlx5_glue.h
@@ -229,6 +229,9 @@ struct mlx5_glue {
 						  void *addr, size_t size,
 						  uint32_t access);
 	int (*devx_umem_dereg)(struct mlx5dv_devx_umem *dv_devx_umem);
+	int (*devx_qp_query)(struct ibv_qp *qp,
+			     const void *in, size_t inlen,
+			     void *out, size_t outlen);
 };
 
 const struct mlx5_glue *mlx5_glue;
-- 
1.8.3.1



* [dpdk-dev] [PATCH 05/28] net/mlx5: glue function for action using new API
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (3 preceding siblings ...)
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 04/28] net/mlx5: glue func for queue query using new API Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:18   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 06/28] net/mlx5: check conditions to enable LRO Matan Azrad
                   ` (25 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Add compile option HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR, and matching
dest_tir flag in device configuration structure.
Add glue function pointer dv_create_flow_action_dest_devx_tir, and
function mlx5_glue_dv_create_flow_action_dest_devx_tir(),
to invoke the API mlx5dv_dr_action_create_dest_devx_tir().

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/Makefile    |  5 +++++
 drivers/net/mlx5/meson.build |  2 ++
 drivers/net/mlx5/mlx5.c      |  3 +++
 drivers/net/mlx5/mlx5.h      |  1 +
 drivers/net/mlx5/mlx5_glue.c | 14 ++++++++++++++
 drivers/net/mlx5/mlx5_glue.h |  1 +
 6 files changed, 26 insertions(+)

diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
index 76d40b1..dbb2a4e 100644
--- a/drivers/net/mlx5/Makefile
+++ b/drivers/net/mlx5/Makefile
@@ -178,6 +178,11 @@ mlx5_autoconf.h.new: $(RTE_SDK)/buildtools/auto-config-h.sh
 		func mlx5dv_devx_obj_query_async \
 		$(AUTOCONF_OUTPUT)
 	$Q sh -- '$<' '$@' \
+		HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR \
+		infiniband/mlx5dv.h \
+		func mlx5dv_dr_action_create_dest_devx_tir \
+		$(AUTOCONF_OUTPUT)
+	$Q sh -- '$<' '$@' \
 		HAVE_ETHTOOL_LINK_MODE_25G \
 		/usr/include/linux/ethtool.h \
 		enum ETHTOOL_LINK_MODE_25000baseCR_Full_BIT \
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index ed42641..62b41ca 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -124,6 +124,8 @@ if build
 		'MLX5DV_FLOW_ACTION_COUNTERS_DEVX' ],
 		[ 'HAVE_IBV_DEVX_ASYNC', 'infiniband/mlx5dv.h',
 		'mlx5dv_devx_obj_query_async' ],
+		[ 'HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR', 'infiniband/mlx5dv.h',
+		'mlx5dv_dr_action_create_dest_devx_tir' ],
 		[ 'HAVE_MLX5DV_DR', 'infiniband/mlx5dv.h',
 		'MLX5DV_DR_DOMAIN_TYPE_NIC_RX' ],
 		[ 'HAVE_MLX5DV_DR_ESWITCH', 'infiniband/mlx5dv.h',
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 5bef431..792324f 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1423,6 +1423,9 @@ struct mlx5_dev_spawn_data {
 	if (!sh)
 		return NULL;
 	config.devx = sh->devx;
+#ifdef HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR
+	config.dest_tir = 1;
+#endif
 #ifdef HAVE_IBV_MLX5_MOD_SWP
 	dv_attr.comp_mask |= MLX5DV_CONTEXT_MASK_SWP;
 #endif
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index ff407f6..7a3d42c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -236,6 +236,7 @@ struct mlx5_dev_config {
 	unsigned int dv_flow_en:1; /* Enable DV flow. */
 	unsigned int swp:1; /* Tx generic tunnel checksum and TSO offload. */
 	unsigned int devx:1; /* Whether devx interface is available or not. */
+	unsigned int dest_tir:1; /* Whether advanced DR API is available. */
 	struct {
 		unsigned int enabled:1; /* Whether MPRQ is enabled. */
 		unsigned int stride_num_n; /* Number of strides. */
diff --git a/drivers/net/mlx5/mlx5_glue.c b/drivers/net/mlx5/mlx5_glue.c
index 05474a0..50c369a 100644
--- a/drivers/net/mlx5/mlx5_glue.c
+++ b/drivers/net/mlx5/mlx5_glue.c
@@ -628,6 +628,18 @@
 }
 
 static void *
+mlx5_glue_dv_create_flow_action_dest_devx_tir(void *tir)
+{
+#ifdef HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR
+	return mlx5dv_dr_action_create_dest_devx_tir(tir);
+#else
+	(void)tir;
+	errno = ENOTSUP;
+	return NULL;
+#endif
+}
+
+static void *
 mlx5_glue_dv_create_flow_action_modify_header
 					(struct ibv_context *ctx,
 					 enum mlx5dv_flow_table_type ft_type,
@@ -1020,6 +1032,8 @@
 		mlx5_glue_dv_create_flow_action_counter,
 	.dv_create_flow_action_dest_ibv_qp =
 		mlx5_glue_dv_create_flow_action_dest_ibv_qp,
+	.dv_create_flow_action_dest_devx_tir =
+		mlx5_glue_dv_create_flow_action_dest_devx_tir,
 	.dv_create_flow_action_modify_header =
 		mlx5_glue_dv_create_flow_action_modify_header,
 	.dv_create_flow_action_packet_reformat =
diff --git a/drivers/net/mlx5/mlx5_glue.h b/drivers/net/mlx5/mlx5_glue.h
index d5c7523..f8e2b9a 100644
--- a/drivers/net/mlx5/mlx5_glue.h
+++ b/drivers/net/mlx5/mlx5_glue.h
@@ -187,6 +187,7 @@ struct mlx5_glue {
 			  size_t num_actions, void *actions[]);
 	void *(*dv_create_flow_action_counter)(void *obj, uint32_t  offset);
 	void *(*dv_create_flow_action_dest_ibv_qp)(void *qp);
+	void *(*dv_create_flow_action_dest_devx_tir)(void *tir);
 	void *(*dv_create_flow_action_modify_header)
 		(struct ibv_context *ctx, enum mlx5dv_flow_table_type ft_type,
 		 void *domain, uint64_t flags, size_t actions_sz,
-- 
1.8.3.1



* [dpdk-dev] [PATCH 06/28] net/mlx5: check conditions to enable LRO
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (4 preceding siblings ...)
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 05/28] net/mlx5: glue function for action " Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:18   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 07/28] net/mlx5: support Tx interface query using new API Matan Azrad
                   ` (24 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Use DevX API to read device LRO capabilities.
Check if LRO is supported and can be enabled.
Check if MPRQ is supported and can be used.
Enable MPRQ for LRO use if not enabled by user.
Add a note to mlx5_mprq_enabled() to emphasize that LRO enables
MPRQ.
Disable CQE compression and CRC stripping if LRO is enabled.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c        | 49 +++++++++++++++++++++++++++---------------
 drivers/net/mlx5/mlx5_ethdev.c | 12 +++++++++++
 drivers/net/mlx5/mlx5_rxq.c    | 14 +++++++++++-
 3 files changed, 57 insertions(+), 18 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 792324f..8d0ebeb 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1683,6 +1683,38 @@ struct mlx5_dev_spawn_data {
 	} else if (config.cqe_pad) {
 		DRV_LOG(INFO, "Rx CQE padding is enabled");
 	}
+	if (config.devx) {
+		priv->counter_fallback = 0;
+		err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config.hca_attr);
+		if (err) {
+			err = -err;
+			goto error;
+		}
+		if (!config.hca_attr.flow_counters_dump)
+			priv->counter_fallback = 1;
+#ifndef HAVE_IBV_DEVX_ASYNC
+		priv->counter_fallback = 1;
+#endif
+		if (priv->counter_fallback)
+			DRV_LOG(INFO, "Use fall-back DV counter management\n");
+		/* Check for LRO support. */
+		if (config.dest_tir && mprq && config.hca_attr.lro_cap) {
+			/* TBD check tunnel lro caps. */
+			config.lro.supported = config.hca_attr.lro_cap;
+			DRV_LOG(DEBUG, "Device supports LRO");
+			/*
+			 * If LRO timeout is not configured by application,
+			 * use the minimal supported value.
+			 */
+			if (!config.lro.timeout)
+				config.lro.timeout =
+				config.hca_attr.lro_timer_supported_periods[0];
+			DRV_LOG(DEBUG, "LRO session timeout set to %d usec",
+				config.lro.timeout);
+			config.mprq.enabled = 1;
+			DRV_LOG(DEBUG, "Enable MPRQ for LRO use");
+		}
+	}
 	if (config.mprq.enabled && mprq) {
 		if (config.mprq.stride_num_n > mprq_max_stride_num_n ||
 		    config.mprq.stride_num_n < mprq_min_stride_num_n) {
@@ -1783,23 +1815,6 @@ struct mlx5_dev_spawn_data {
 	 * Verbs context returned by ibv_open_device().
 	 */
 	mlx5_link_update(eth_dev, 0);
-#ifdef HAVE_IBV_DEVX_OBJ
-	if (config.devx) {
-		priv->counter_fallback = 0;
-		err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config.hca_attr);
-		if (err) {
-			err = -err;
-			goto error;
-		}
-		if (!config.hca_attr.flow_counters_dump)
-			priv->counter_fallback = 1;
-#ifndef HAVE_IBV_DEVX_ASYNC
-		priv->counter_fallback = 1;
-#endif
-		if (priv->counter_fallback)
-			DRV_LOG(INFO, "Use fall-back DV counter management\n");
-	}
-#endif
 #ifdef HAVE_MLX5DV_DR_ESWITCH
 	if (!(config.hca_attr.eswitch_manager && config.dv_flow_en &&
 	      (switch_info->representor || switch_info->master)))
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 1125e16..72416db 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -482,6 +482,7 @@ struct ethtool_link_settings {
 	const uint8_t use_app_rss_key =
 		!!dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key;
 	int ret = 0;
+	unsigned int lro_on = mlx5_lro_on(dev);
 
 	if (use_app_rss_key &&
 	    (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len !=
@@ -525,6 +526,12 @@ struct ethtool_link_settings {
 			dev->data->port_id, priv->rxqs_n, rxqs_n);
 		priv->rxqs_n = rxqs_n;
 		/*
+		 * When using LRO, MPRQ is implicitly enabled.
+		 * Adjust threshold value to ensure MPRQ can be enabled.
+		 */
+		if (lro_on && priv->config.mprq.min_rxqs_num > priv->rxqs_n)
+			priv->config.mprq.min_rxqs_num = priv->rxqs_n;
+		/*
 		 * If the requested number of RX queues is not a power of two,
 		 * use the maximum indirection table size for better balancing.
 		 * The result is always rounded to the next power of two.
@@ -546,6 +553,11 @@ struct ethtool_link_settings {
 				j = 0;
 		}
 	}
+	if (lro_on && priv->config.cqe_comp) {
+		/* CQE compressing is not supported for LRO CQEs. */
+		DRV_LOG(WARNING, "Rx CQE compression isn't supported with LRO");
+		priv->config.cqe_comp = 0;
+	}
 	ret = mlx5_proc_priv_init(dev);
 	if (ret)
 		return ret;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e68de50..8567ee5 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -93,6 +93,7 @@
 
 /**
  * Check whether Multi-Packet RQ is enabled for the device.
+ * MPRQ can be enabled explicitly, or implicitly by enabling LRO.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -1431,7 +1432,18 @@ struct mlx5_rxq_ctrl *
 	tmpl->rxq.crc_present = 0;
 	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
 		if (config->hw_fcs_strip) {
-			tmpl->rxq.crc_present = 1;
+			/*
+			 * RQs used for LRO-enabled TIRs should not be
+			 * configured to scatter the FCS.
+			 */
+			if (mlx5_lro_on(dev))
+				DRV_LOG(WARNING,
+					"port %u CRC stripping has been "
+					"disabled but will still be performed "
+					"by hardware, because LRO is enabled",
+					dev->data->port_id);
+			else
+				tmpl->rxq.crc_present = 1;
 		} else {
 			DRV_LOG(WARNING,
 				"port %u CRC stripping has been disabled but will"
-- 
1.8.3.1



* [dpdk-dev] [PATCH 07/28] net/mlx5: support Tx interface query using new API
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (5 preceding siblings ...)
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 06/28] net/mlx5: check conditions to enable LRO Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:19   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 08/28] net/mlx5: update Tx queue create for LRO Matan Azrad
                   ` (23 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement function mlx5_devx_cmd_qp_query_tis_td(), to query the
QP TIS transport domain value.

Add related structs in mlx5_prm.h.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           |  2 ++
 drivers/net/mlx5/mlx5_devx_cmds.c | 34 ++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 34 ++++++++++++++++++++++++++++++++++
 3 files changed, 70 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7a3d42c..b78a1ec 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -742,4 +742,6 @@ int mlx5_devx_cmd_query_hca_attr(struct ibv_context *ctx,
 struct mlx5_devx_obj *mlx5_devx_cmd_mkey_create(struct ibv_context *ctx,
 					     struct mlx5_devx_mkey_attr *attr);
 int mlx5_devx_get_out_command_status(void *out);
+int mlx5_devx_cmd_qp_query_tis_td(struct ibv_qp *qp, uint32_t tis_num,
+				  uint32_t *tis_td);
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index 1cba00f..3d07fcf 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -390,3 +390,37 @@ struct mlx5_devx_obj *
 	rc = (rc > 0) ? -rc : rc;
 	return rc;
 }
+
+/**
+ * Query TIS transport domain from QP verbs object using DevX API.
+ *
+ * @param[in] qp
+ *   Pointer to verbs QP returned by ibv_create_qp.
+ * @param[in] tis_num
+ *   TIS number of TIS to query.
+ * @param[out] tis_td
+ *   Pointer to TIS transport domain variable, to be set by the routine.
+ *
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+int
+mlx5_devx_cmd_qp_query_tis_td(struct ibv_qp *qp, uint32_t tis_num,
+			      uint32_t *tis_td)
+{
+	uint32_t in[MLX5_ST_SZ_DW(query_tis_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(query_tis_out)] = {0};
+	int rc;
+	void *tis_ctx;
+
+	MLX5_SET(query_tis_in, in, opcode, MLX5_CMD_OP_QUERY_TIS);
+	MLX5_SET(query_tis_in, in, tisn, tis_num);
+	rc = mlx5_glue->devx_qp_query(qp, in, sizeof(in), out, sizeof(out));
+	if (rc) {
+		DRV_LOG(ERR, "Failed to query QP using DevX");
+		return -rc;
+	}
+	tis_ctx = MLX5_ADDR_OF(query_tis_out, out, tis_context);
+	*tis_td = MLX5_GET(tisc, tis_ctx, transport_domain);
+	return 0;
+}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 4f20dea..b5de0c3 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -627,6 +627,7 @@ enum {
 	MLX5_CMD_OP_QUERY_HCA_CAP = 0x100,
 	MLX5_CMD_OP_CREATE_MKEY = 0x200,
 	MLX5_CMD_OP_QUERY_NIC_VPORT_CONTEXT = 0x754,
+	MLX5_CMD_OP_QUERY_TIS = 0x915,
 	MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939,
 	MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b,
 };
@@ -1234,6 +1235,39 @@ struct mlx5_ifc_query_nic_vport_context_in_bits {
 	u8 reserved_at_68[0x18];
 };
 
+struct mlx5_ifc_tisc_bits {
+	u8 strict_lag_tx_port_affinity[0x1];
+	u8 reserved_at_1[0x3];
+	u8 lag_tx_port_affinity[0x04];
+	u8 reserved_at_8[0x4];
+	u8 prio[0x4];
+	u8 reserved_at_10[0x10];
+	u8 reserved_at_20[0x100];
+	u8 reserved_at_120[0x8];
+	u8 transport_domain[0x18];
+	u8 reserved_at_140[0x8];
+	u8 underlay_qpn[0x18];
+	u8 reserved_at_160[0x3a0];
+};
+
+struct mlx5_ifc_query_tis_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+	struct mlx5_ifc_tisc_bits tis_context;
+};
+
+struct mlx5_ifc_query_tis_in_bits {
+	u8 opcode[0x10];
+	u8 reserved_at_10[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0x8];
+	u8 tisn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH 08/28] net/mlx5: update Tx queue create for LRO
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (6 preceding siblings ...)
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 07/28] net/mlx5: support Tx interface query using new API Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:18   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 09/28] net/mlx5: create advanced RxQ object using new API Matan Azrad
                   ` (22 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Update function mlx5_txq_ibv_new() to query and store the TIS
transport domain value.
It is required later on the Rx side when creating a matching TIR.
Add a field in the mlx5 data structure to store the Transport Domain ID.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h     |  1 +
 drivers/net/mlx5/mlx5_txq.c | 26 ++++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b78a1ec..471e220 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -418,6 +418,7 @@ struct mlx5_ibv_shared {
 	uint32_t max_port; /* Maximal IB device port index. */
 	struct ibv_context *ctx; /* Verbs/DV context. */
 	struct ibv_pd *pd; /* Protection Domain. */
+	uint32_t tdn; /* Transport Domain number. */
 	char ibdev_name[IBV_SYSFS_NAME_MAX]; /* IB device name. */
 	char ibdev_path[IBV_SYSFS_PATH_MAX]; /* IB device path for secondary */
 	struct ibv_device_attr_ex device_attr; /* Device properties. */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index dbad361..fe3b4ec 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -396,6 +396,11 @@ struct mlx5_txq_ibv *
 	const int desc = 1 << txq_data->elts_n;
 	int ret = 0;
 
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/* If using DevX, need additional mask to read tisn value. */
+	if (priv->config.devx && !priv->sh->tdn)
+		qp.comp_mask |= MLX5DV_QP_MASK_RAW_QP_HANDLES;
+#endif
 	assert(txq_data);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_TX_QUEUE;
 	priv->verbs_alloc_ctx.obj = txq_ctrl;
@@ -542,6 +547,27 @@ struct mlx5_txq_ibv *
 	txq_data->wqe_pi = 0;
 	txq_data->wqe_comp = 0;
 	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/*
+	 * If using DevX need to query and store TIS transport domain value.
+	 * This is done once per port.
+	 * Will use this value on Rx, when creating matching TIR.
+	 */
+	if (priv->config.devx && !priv->sh->tdn) {
+		ret = mlx5_devx_cmd_qp_query_tis_td(tmpl.qp, qp.tisn,
+						    &priv->sh->tdn);
+		if (ret) {
+			DRV_LOG(ERR, "Fail to query port %u Tx queue %u QP TIS "
+				"transport domain", dev->data->port_id, idx);
+			rte_errno = EINVAL;
+			goto error;
+		} else {
+			DRV_LOG(DEBUG, "port %u Tx queue %u TIS number %d "
+				"transport domain %d", dev->data->port_id,
+				idx, qp.tisn, priv->sh->tdn);
+		}
+	}
+#endif
 	txq_ibv->qp = tmpl.qp;
 	txq_ibv->cq = tmpl.cq;
 	rte_atomic32_inc(&txq_ibv->refcnt);
-- 
1.8.3.1



* [dpdk-dev] [PATCH 09/28] net/mlx5: create advanced RxQ object using new API
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (7 preceding siblings ...)
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 08/28] net/mlx5: update Tx queue create for LRO Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:17   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 10/28] net/mlx5: modify " Matan Azrad
                   ` (21 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement function mlx5_devx_cmd_create_rq() to create an RQ object
using the DevX API.
Add related structs in mlx5.h and mlx5_prm.h.
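
To make the intended use concrete, a hypothetical caller sketch
(cq_id, pd_num and the umem IDs are placeholders obtained elsewhere
via DevX/verbs; the real Rx queue creation arrives later in the
series):

	struct mlx5_devx_create_rq_attr rq_attr = { 0 };
	struct mlx5_devx_obj *rq;

	rq_attr.state = MLX5_RQC_STATE_RST; /* Created in reset state. */
	rq_attr.mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE;
	rq_attr.cqn = cq_id; /* CQ serving this RQ. */
	rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
	rq_attr.wq_attr.pd = pd_num; /* Protection domain number. */
	rq_attr.wq_attr.log_wq_stride = 6; /* Example: 64B stride. */
	rq_attr.wq_attr.log_wq_sz = 10; /* Example: 1024 WQEs. */
	rq_attr.wq_attr.wq_umem_id = wq_umem_id; /* From devx_umem_reg(). */
	rq_attr.wq_attr.wq_umem_valid = 1;
	rq_attr.wq_attr.dbr_umem_id = dbr_umem_id; /* Door-bell record. */
	rq_attr.wq_attr.dbr_umem_valid = 1;
	rq = mlx5_devx_cmd_create_rq(ctx, &rq_attr, SOCKET_ID_ANY);
	if (!rq)
		return -rte_errno; /* Set by the helper on failure. */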

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           |  50 +++++++++++++++++
 drivers/net/mlx5/mlx5_devx_cmds.c | 102 +++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 110 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 262 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 471e220..7ad6687 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -260,6 +260,52 @@ struct mlx5_dev_config {
 	struct mlx5_lro_config lro; /* LRO configuration. */
 };
 
+struct mlx5_devx_wq_attr {
+	uint32_t wq_type:4;
+	uint32_t wq_signature:1;
+	uint32_t end_padding_mode:2;
+	uint32_t cd_slave:1;
+	uint32_t hds_skip_first_sge:1;
+	uint32_t log2_hds_buf_size:3;
+	uint32_t page_offset:5;
+	uint32_t lwm:16;
+	uint32_t pd:24;
+	uint32_t uar_page:24;
+	uint64_t dbr_addr;
+	uint32_t hw_counter;
+	uint32_t sw_counter;
+	uint32_t log_wq_stride:4;
+	uint32_t log_wq_pg_sz:5;
+	uint32_t log_wq_sz:5;
+	uint32_t dbr_umem_valid:1;
+	uint32_t wq_umem_valid:1;
+	uint32_t log_hairpin_num_packets:5;
+	uint32_t log_hairpin_data_sz:5;
+	uint32_t single_wqe_log_num_of_strides:4;
+	uint32_t two_byte_shift_en:1;
+	uint32_t single_stride_log_num_of_bytes:3;
+	uint32_t dbr_umem_id;
+	uint32_t wq_umem_id;
+	uint64_t wq_umem_offset;
+};
+
+/* Create RQ attributes structure, used by create RQ operation. */
+struct mlx5_devx_create_rq_attr {
+	uint32_t rlky:1;
+	uint32_t delay_drop_en:1;
+	uint32_t scatter_fcs:1;
+	uint32_t vsd:1;
+	uint32_t mem_rq_type:4;
+	uint32_t state:4;
+	uint32_t flush_in_error_en:1;
+	uint32_t hairpin:1;
+	uint32_t user_index:24;
+	uint32_t cqn:24;
+	uint32_t counter_set_id:8;
+	uint32_t rmpn:24;
+	struct mlx5_devx_wq_attr wq_attr;
+};
+
 /**
  * Type of object being allocated.
  */
@@ -745,4 +791,8 @@ struct mlx5_devx_obj *mlx5_devx_cmd_mkey_create(struct ibv_context *ctx,
 int mlx5_devx_get_out_command_status(void *out);
 int mlx5_devx_cmd_qp_query_tis_td(struct ibv_qp *qp, uint32_t tis_num,
 				  uint32_t *tis_td);
+struct mlx5_devx_obj *mlx5_devx_cmd_create_rq(struct ibv_context *ctx,
+				struct mlx5_devx_create_rq_attr *rq_attr,
+				int socket);
+
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index 3d07fcf..f68c94b 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -424,3 +424,105 @@ struct mlx5_devx_obj *
 	*tis_td = MLX5_GET(tisc, tis_ctx, transport_domain);
 	return 0;
 }
+
+/**
+ * Fill WQ data for DevX API command.
+ * Utility function for use when creating DevX objects containing a WQ.
+ *
+ * @param[in] wq_ctx
+ *   Pointer to WQ context to fill with data.
+ * @param [in] wq_attr
+ *   Pointer to WQ attributes structure to fill in WQ context.
+ */
+static void
+devx_cmd_fill_wq_data(void *wq_ctx, struct mlx5_devx_wq_attr *wq_attr)
+{
+	MLX5_SET(wq, wq_ctx, wq_type, wq_attr->wq_type);
+	MLX5_SET(wq, wq_ctx, wq_signature, wq_attr->wq_signature);
+	MLX5_SET(wq, wq_ctx, end_padding_mode, wq_attr->end_padding_mode);
+	MLX5_SET(wq, wq_ctx, cd_slave, wq_attr->cd_slave);
+	MLX5_SET(wq, wq_ctx, hds_skip_first_sge, wq_attr->hds_skip_first_sge);
+	MLX5_SET(wq, wq_ctx, log2_hds_buf_size, wq_attr->log2_hds_buf_size);
+	MLX5_SET(wq, wq_ctx, page_offset, wq_attr->page_offset);
+	MLX5_SET(wq, wq_ctx, lwm, wq_attr->lwm);
+	MLX5_SET(wq, wq_ctx, pd, wq_attr->pd);
+	MLX5_SET(wq, wq_ctx, uar_page, wq_attr->uar_page);
+	MLX5_SET64(wq, wq_ctx, dbr_addr, wq_attr->dbr_addr);
+	MLX5_SET(wq, wq_ctx, hw_counter, wq_attr->hw_counter);
+	MLX5_SET(wq, wq_ctx, sw_counter, wq_attr->sw_counter);
+	MLX5_SET(wq, wq_ctx, log_wq_stride, wq_attr->log_wq_stride);
+	MLX5_SET(wq, wq_ctx, log_wq_pg_sz, wq_attr->log_wq_pg_sz);
+	MLX5_SET(wq, wq_ctx, log_wq_sz, wq_attr->log_wq_sz);
+	MLX5_SET(wq, wq_ctx, dbr_umem_valid, wq_attr->dbr_umem_valid);
+	MLX5_SET(wq, wq_ctx, wq_umem_valid, wq_attr->wq_umem_valid);
+	MLX5_SET(wq, wq_ctx, log_hairpin_num_packets,
+		 wq_attr->log_hairpin_num_packets);
+	MLX5_SET(wq, wq_ctx, log_hairpin_data_sz, wq_attr->log_hairpin_data_sz);
+	MLX5_SET(wq, wq_ctx, single_wqe_log_num_of_strides,
+		 wq_attr->single_wqe_log_num_of_strides);
+	MLX5_SET(wq, wq_ctx, two_byte_shift_en, wq_attr->two_byte_shift_en);
+	MLX5_SET(wq, wq_ctx, single_stride_log_num_of_bytes,
+		 wq_attr->single_stride_log_num_of_bytes);
+	MLX5_SET(wq, wq_ctx, dbr_umem_id, wq_attr->dbr_umem_id);
+	MLX5_SET(wq, wq_ctx, wq_umem_id, wq_attr->wq_umem_id);
+	MLX5_SET64(wq, wq_ctx, wq_umem_offset, wq_attr->wq_umem_offset);
+}
+
+/**
+ * Create RQ using DevX API.
+ *
+ * @param[in] ctx
+ *   ibv_context returned from mlx5dv_open_device.
+ * @param [in] rq_attr
+ *   Pointer to create RQ attributes structure.
+ * @param [in] socket
+ *   CPU socket ID for allocations.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_rq(struct ibv_context *ctx,
+			struct mlx5_devx_create_rq_attr *rq_attr,
+			int socket)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_rq_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(create_rq_out)] = {0};
+	void *rq_ctx, *wq_ctx;
+	struct mlx5_devx_wq_attr *wq_attr;
+	struct mlx5_devx_obj *rq = NULL;
+
+	rq = rte_calloc_socket(__func__, 1, sizeof(*rq), 0, socket);
+	if (!rq) {
+		DRV_LOG(ERR, "Failed to allocate RQ data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(create_rq_in, in, opcode, MLX5_CMD_OP_CREATE_RQ);
+	rq_ctx = MLX5_ADDR_OF(create_rq_in, in, ctx);
+	MLX5_SET(rqc, rq_ctx, rlky, rq_attr->rlky);
+	MLX5_SET(rqc, rq_ctx, delay_drop_en, rq_attr->delay_drop_en);
+	MLX5_SET(rqc, rq_ctx, scatter_fcs, rq_attr->scatter_fcs);
+	MLX5_SET(rqc, rq_ctx, vsd, rq_attr->vsd);
+	MLX5_SET(rqc, rq_ctx, mem_rq_type, rq_attr->mem_rq_type);
+	MLX5_SET(rqc, rq_ctx, state, rq_attr->state);
+	MLX5_SET(rqc, rq_ctx, flush_in_error_en, rq_attr->flush_in_error_en);
+	MLX5_SET(rqc, rq_ctx, hairpin, rq_attr->hairpin);
+	MLX5_SET(rqc, rq_ctx, user_index, rq_attr->user_index);
+	MLX5_SET(rqc, rq_ctx, cqn, rq_attr->cqn);
+	MLX5_SET(rqc, rq_ctx, counter_set_id, rq_attr->counter_set_id);
+	MLX5_SET(rqc, rq_ctx, rmpn, rq_attr->rmpn);
+	wq_ctx = MLX5_ADDR_OF(rqc, rq_ctx, wq);
+	wq_attr = &rq_attr->wq_attr;
+	devx_cmd_fill_wq_data(wq_ctx, wq_attr);
+	rq->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in),
+						  out, sizeof(out));
+	if (!rq->obj) {
+		DRV_LOG(ERR, "Failed to create RQ using DevX");
+		rte_errno = errno;
+		rte_free(rq);
+		return NULL;
+	}
+	rq->id = MLX5_GET(create_rq_out, out, rqn);
+	return rq;
+}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index b5de0c3..fbf00a0 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -627,6 +627,7 @@ enum {
 	MLX5_CMD_OP_QUERY_HCA_CAP = 0x100,
 	MLX5_CMD_OP_CREATE_MKEY = 0x200,
 	MLX5_CMD_OP_QUERY_NIC_VPORT_CONTEXT = 0x754,
+	MLX5_CMD_OP_CREATE_RQ = 0x908,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
 	MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939,
 	MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b,
@@ -1268,6 +1269,115 @@ struct mlx5_ifc_query_tis_in_bits {
 	u8 reserved_at_60[0x20];
 };
 
+enum {
+	MLX5_WQ_TYPE_LINKED_LIST                = 0x0,
+	MLX5_WQ_TYPE_CYCLIC                     = 0x1,
+	MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ    = 0x2,
+	MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ         = 0x3,
+};
+
+enum {
+	MLX5_WQ_END_PAD_MODE_NONE  = 0x0,
+	MLX5_WQ_END_PAD_MODE_ALIGN = 0x1,
+};
+
+struct mlx5_ifc_wq_bits {
+	u8 wq_type[0x4];
+	u8 wq_signature[0x1];
+	u8 end_padding_mode[0x2];
+	u8 cd_slave[0x1];
+	u8 reserved_at_8[0x18];
+	u8 hds_skip_first_sge[0x1];
+	u8 log2_hds_buf_size[0x3];
+	u8 reserved_at_24[0x7];
+	u8 page_offset[0x5];
+	u8 lwm[0x10];
+	u8 reserved_at_40[0x8];
+	u8 pd[0x18];
+	u8 reserved_at_60[0x8];
+	u8 uar_page[0x18];
+	u8 dbr_addr[0x40];
+	u8 hw_counter[0x20];
+	u8 sw_counter[0x20];
+	u8 reserved_at_100[0xc];
+	u8 log_wq_stride[0x4];
+	u8 reserved_at_110[0x3];
+	u8 log_wq_pg_sz[0x5];
+	u8 reserved_at_118[0x3];
+	u8 log_wq_sz[0x5];
+	u8 dbr_umem_valid[0x1];
+	u8 wq_umem_valid[0x1];
+	u8 reserved_at_122[0x1];
+	u8 log_hairpin_num_packets[0x5];
+	u8 reserved_at_128[0x3];
+	u8 log_hairpin_data_sz[0x5];
+	u8 reserved_at_130[0x4];
+	u8 single_wqe_log_num_of_strides[0x4];
+	u8 two_byte_shift_en[0x1];
+	u8 reserved_at_139[0x4];
+	u8 single_stride_log_num_of_bytes[0x3];
+	u8 dbr_umem_id[0x20];
+	u8 wq_umem_id[0x20];
+	u8 wq_umem_offset[0x40];
+	u8 reserved_at_1c0[0x440];
+};
+
+enum {
+	MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE  = 0x0,
+	MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_RMP     = 0x1,
+};
+
+enum {
+	MLX5_RQC_STATE_RST  = 0x0,
+	MLX5_RQC_STATE_RDY  = 0x1,
+	MLX5_RQC_STATE_ERR  = 0x3,
+};
+
+struct mlx5_ifc_rqc_bits {
+	u8 rlky[0x1];
+	u8 delay_drop_en[0x1];
+	u8 scatter_fcs[0x1];
+	u8 vsd[0x1];
+	u8 mem_rq_type[0x4];
+	u8 state[0x4];
+	u8 reserved_at_c[0x1];
+	u8 flush_in_error_en[0x1];
+	u8 hairpin[0x1];
+	u8 reserved_at_f[0x11];
+	u8 reserved_at_20[0x8];
+	u8 user_index[0x18];
+	u8 reserved_at_40[0x8];
+	u8 cqn[0x18];
+	u8 counter_set_id[0x8];
+	u8 reserved_at_68[0x18];
+	u8 reserved_at_80[0x8];
+	u8 rmpn[0x18];
+	u8 reserved_at_a0[0x8];
+	u8 hairpin_peer_sq[0x18];
+	u8 reserved_at_c0[0x10];
+	u8 hairpin_peer_vhca[0x10];
+	u8 reserved_at_e0[0xa0];
+	struct mlx5_ifc_wq_bits wq; /* Not used in LRO RQ. */
+};
+
+struct mlx5_ifc_create_rq_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x8];
+	u8 rqn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_create_rq_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rqc_bits ctx;
+};
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH 10/28] net/mlx5: modify advanced RxQ object using new API
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (8 preceding siblings ...)
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 09/28] net/mlx5: create advanced RxQ object using new API Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:20   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 11/28] net/mlx5: create advanced Rx " Matan Azrad
                   ` (20 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement function mlx5_devx_cmd_modify_rq() to modify an RQ.
Add related structs in mlx5.h and mlx5_prm.h.
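
A hypothetical caller sketch of the typical state transition ('rq' as
returned by mlx5_devx_cmd_create_rq() from the previous patch):

	struct mlx5_devx_modify_rq_attr rq_attr = { 0 };

	rq_attr.rq_state = MLX5_RQC_STATE_RST; /* Current state. */
	rq_attr.state = MLX5_RQC_STATE_RDY; /* Requested state. */
	if (mlx5_devx_cmd_modify_rq(rq, &rq_attr))
		return -rte_errno;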

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           | 16 +++++++++++++
 drivers/net/mlx5/mlx5_devx_cmds.c | 50 +++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 29 +++++++++++++++++++++++
 3 files changed, 95 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7ad6687..fbd1311 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -306,6 +306,20 @@ struct mlx5_devx_create_rq_attr {
 	struct mlx5_devx_wq_attr wq_attr;
 };
 
+/* Modify RQ attributes structure, used by modify RQ operation. */
+struct mlx5_devx_modify_rq_attr {
+	uint32_t rqn:24;
+	uint32_t rq_state:4; /* Current RQ state. */
+	uint32_t state:4; /* Required RQ state. */
+	uint32_t scatter_fcs:1;
+	uint32_t vsd:1;
+	uint32_t counter_set_id:8;
+	uint32_t hairpin_peer_sq:24;
+	uint32_t hairpin_peer_vhca:16;
+	uint64_t modify_bitmask;
+	uint32_t lwm:16; /* Contained WQ lwm. */
+};
+
 /**
  * Type of object being allocated.
  */
@@ -794,5 +808,7 @@ int mlx5_devx_cmd_qp_query_tis_td(struct ibv_qp *qp, uint32_t tis_num,
 struct mlx5_devx_obj *mlx5_devx_cmd_create_rq(struct ibv_context *ctx,
 				struct mlx5_devx_create_rq_attr *rq_attr,
 				int socket);
+int mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
+			    struct mlx5_devx_modify_rq_attr *rq_attr);
 
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index f68c94b..e8953bb 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -526,3 +526,53 @@ struct mlx5_devx_obj *
 	rq->id = MLX5_GET(create_rq_out, out, rqn);
 	return rq;
 }
+
+/**
+ * Modify RQ using DevX API.
+ *
+ * @param[in] rq
+ *   Pointer to RQ object structure.
+ * @param [in] rq_attr
+ *   Pointer to modify RQ attributes structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
+			struct mlx5_devx_modify_rq_attr *rq_attr)
+{
+	uint32_t in[MLX5_ST_SZ_DW(modify_rq_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(modify_rq_out)] = {0};
+	void *rq_ctx, *wq_ctx;
+	int ret;
+
+	MLX5_SET(modify_rq_in, in, opcode, MLX5_CMD_OP_MODIFY_RQ);
+	MLX5_SET(modify_rq_in, in, rq_state, rq_attr->rq_state);
+	MLX5_SET(modify_rq_in, in, rqn, rq->id);
+	MLX5_SET64(modify_rq_in, in, modify_bitmask, rq_attr->modify_bitmask);
+	rq_ctx = MLX5_ADDR_OF(modify_rq_in, in, ctx);
+	MLX5_SET(rqc, rq_ctx, state, rq_attr->state);
+	if (rq_attr->modify_bitmask &
+			MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_SCATTER_FCS)
+		MLX5_SET(rqc, rq_ctx, scatter_fcs, rq_attr->scatter_fcs);
+	if (rq_attr->modify_bitmask & MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD)
+		MLX5_SET(rqc, rq_ctx, vsd, rq_attr->vsd);
+	if (rq_attr->modify_bitmask &
+			MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_RQ_COUNTER_SET_ID)
+		MLX5_SET(rqc, rq_ctx, counter_set_id, rq_attr->counter_set_id);
+	MLX5_SET(rqc, rq_ctx, hairpin_peer_sq, rq_attr->hairpin_peer_sq);
+	MLX5_SET(rqc, rq_ctx, hairpin_peer_vhca, rq_attr->hairpin_peer_vhca);
+	if (rq_attr->modify_bitmask & MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_WQ_LWM) {
+		wq_ctx = MLX5_ADDR_OF(rqc, rq_ctx, wq);
+		MLX5_SET(wq, wq_ctx, lwm, rq_attr->lwm);
+	}
+	ret = mlx5_glue->devx_obj_modify(rq->obj, in, sizeof(in),
+					 out, sizeof(out));
+	if (ret) {
+		DRV_LOG(ERR, "Failed to modify RQ using DevX");
+		rte_errno = errno;
+		return -errno;
+	}
+	return ret;
+}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index fbf00a0..7ec709b 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -628,6 +628,7 @@ enum {
 	MLX5_CMD_OP_CREATE_MKEY = 0x200,
 	MLX5_CMD_OP_QUERY_NIC_VPORT_CONTEXT = 0x754,
 	MLX5_CMD_OP_CREATE_RQ = 0x908,
+	MLX5_CMD_OP_MODIFY_RQ = 0x909,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
 	MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939,
 	MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b,
@@ -1378,6 +1379,34 @@ struct mlx5_ifc_create_rq_in_bits {
 	struct mlx5_ifc_rqc_bits ctx;
 };
 
+struct mlx5_ifc_modify_rq_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+};
+
+enum {
+	MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_WQ_LWM = 1ULL << 0,
+	MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD = 1ULL << 1,
+	MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_SCATTER_FCS = 1ULL << 2,
+	MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_RQ_COUNTER_SET_ID = 1ULL << 3,
+};
+
+struct mlx5_ifc_modify_rq_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 rq_state[0x4];
+	u8 reserved_at_44[0x4];
+	u8 rqn[0x18];
+	u8 reserved_at_60[0x20];
+	u8 modify_bitmask[0x40];
+	u8 reserved_at_c0[0x40];
+	struct mlx5_ifc_rqc_bits ctx;
+};
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH 11/28] net/mlx5: create advanced Rx object using new API
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (9 preceding siblings ...)
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 10/28] net/mlx5: modify " Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:20   ` Slava Ovsiienko
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 12/28] net/mlx5: create advanced RxQ table " Matan Azrad
                   ` (19 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement function mlx5_devx_cmd_create_tir() to create a TIR
object using the DevX API.
Add the related structs in mlx5.h and mlx5_prm.h.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           | 26 +++++++++++++
 drivers/net/mlx5/mlx5_devx_cmds.c | 72 ++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 81 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 179 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index fbd1311..183acfb 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -320,6 +320,30 @@ struct mlx5_devx_modify_rq_attr {
 	uint32_t lwm:16; /* Contained WQ lwm. */
 };
 
+struct mlx5_rx_hash_field_select {
+	uint32_t l3_prot_type:1;
+	uint32_t l4_prot_type:1;
+	uint32_t selected_fields:30;
+};
+
+/* TIR attributes structure, used by TIR operations. */
+struct mlx5_devx_tir_attr {
+	uint32_t disp_type:4;
+	uint32_t lro_timeout_period_usecs:16;
+	uint32_t lro_enable_mask:4;
+	uint32_t lro_max_msg_sz:8;
+	uint32_t inline_rqn:24;
+	uint32_t rx_hash_symmetric:1;
+	uint32_t tunneled_offload_en:1;
+	uint32_t indirect_table:24;
+	uint32_t rx_hash_fn:4;
+	uint32_t self_lb_block:2;
+	uint32_t transport_domain:24;
+	uint32_t rx_hash_toeplitz_key[10];
+	struct mlx5_rx_hash_field_select rx_hash_field_selector_outer;
+	struct mlx5_rx_hash_field_select rx_hash_field_selector_inner;
+};
+
 /**
  * Type of object being allocated.
  */
@@ -810,5 +834,7 @@ struct mlx5_devx_obj *mlx5_devx_cmd_create_rq(struct ibv_context *ctx,
 				int socket);
 int mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 			    struct mlx5_devx_modify_rq_attr *rq_attr);
+struct mlx5_devx_obj *mlx5_devx_cmd_create_tir(struct ibv_context *ctx,
+					struct mlx5_devx_tir_attr *tir_attr);
 
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index e8953bb..5faa2a0 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -576,3 +576,75 @@ struct mlx5_devx_obj *
 	}
 	return ret;
 }
+
+/**
+ * Create TIR using DevX API.
+ *
+ * @param[in] ctx
+ *   ibv_context returned from mlx5dv_open_device.
+ * @param [in] tir_attr
+ *   Pointer to TIR attributes structure.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_tir(struct ibv_context *ctx,
+			 struct mlx5_devx_tir_attr *tir_attr)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_tir_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(create_tir_out)] = {0};
+	void *tir_ctx, *outer, *inner;
+	struct mlx5_devx_obj *tir = NULL;
+	int i;
+
+	tir = rte_calloc(__func__, 1, sizeof(*tir), 0);
+	if (!tir) {
+		DRV_LOG(ERR, "Failed to allocate TIR data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(create_tir_in, in, opcode, MLX5_CMD_OP_CREATE_TIR);
+	tir_ctx = MLX5_ADDR_OF(create_tir_in, in, ctx);
+	MLX5_SET(tirc, tir_ctx, disp_type, tir_attr->disp_type);
+	MLX5_SET(tirc, tir_ctx, lro_timeout_period_usecs,
+		 tir_attr->lro_timeout_period_usecs);
+	MLX5_SET(tirc, tir_ctx, lro_enable_mask, tir_attr->lro_enable_mask);
+	MLX5_SET(tirc, tir_ctx, lro_max_msg_sz, tir_attr->lro_max_msg_sz);
+	MLX5_SET(tirc, tir_ctx, inline_rqn, tir_attr->inline_rqn);
+	MLX5_SET(tirc, tir_ctx, rx_hash_symmetric, tir_attr->rx_hash_symmetric);
+	MLX5_SET(tirc, tir_ctx, tunneled_offload_en,
+		 tir_attr->tunneled_offload_en);
+	MLX5_SET(tirc, tir_ctx, indirect_table, tir_attr->indirect_table);
+	MLX5_SET(tirc, tir_ctx, rx_hash_fn, tir_attr->rx_hash_fn);
+	MLX5_SET(tirc, tir_ctx, self_lb_block, tir_attr->self_lb_block);
+	MLX5_SET(tirc, tir_ctx, transport_domain, tir_attr->transport_domain);
+	for (i = 0; i < 10; i++) {
+		MLX5_SET(tirc, tir_ctx, rx_hash_toeplitz_key[i],
+			 tir_attr->rx_hash_toeplitz_key[i]);
+	}
+	outer = MLX5_ADDR_OF(tirc, tir_ctx, rx_hash_field_selector_outer);
+	MLX5_SET(rx_hash_field_select, outer, l3_prot_type,
+		 tir_attr->rx_hash_field_selector_outer.l3_prot_type);
+	MLX5_SET(rx_hash_field_select, outer, l4_prot_type,
+		 tir_attr->rx_hash_field_selector_outer.l4_prot_type);
+	MLX5_SET(rx_hash_field_select, outer, selected_fields,
+		 tir_attr->rx_hash_field_selector_outer.selected_fields);
+	inner = MLX5_ADDR_OF(tirc, tir_ctx, rx_hash_field_selector_inner);
+	MLX5_SET(rx_hash_field_select, inner, l3_prot_type,
+		 tir_attr->rx_hash_field_selector_inner.l3_prot_type);
+	MLX5_SET(rx_hash_field_select, inner, l4_prot_type,
+		 tir_attr->rx_hash_field_selector_inner.l4_prot_type);
+	MLX5_SET(rx_hash_field_select, inner, selected_fields,
+		 tir_attr->rx_hash_field_selector_inner.selected_fields);
+	tir->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in),
+						   out, sizeof(out));
+	if (!tir->obj) {
+		DRV_LOG(ERR, "Failed to create TIR using DevX");
+		rte_errno = errno;
+		rte_free(tir);
+		return NULL;
+	}
+	tir->id = MLX5_GET(create_tir_out, out, tirn);
+	return tir;
+}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 7ec709b..970dee0 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -627,6 +627,7 @@ enum {
 	MLX5_CMD_OP_QUERY_HCA_CAP = 0x100,
 	MLX5_CMD_OP_CREATE_MKEY = 0x200,
 	MLX5_CMD_OP_QUERY_NIC_VPORT_CONTEXT = 0x754,
+	MLX5_CMD_OP_CREATE_TIR = 0x900,
 	MLX5_CMD_OP_CREATE_RQ = 0x908,
 	MLX5_CMD_OP_MODIFY_RQ = 0x909,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
@@ -1407,6 +1408,86 @@ struct mlx5_ifc_modify_rq_in_bits {
 	struct mlx5_ifc_rqc_bits ctx;
 };
 
+enum {
+	MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_SRC_IP     = 0x0,
+	MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_DST_IP     = 0x1,
+	MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_SPORT   = 0x2,
+	MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT   = 0x3,
+	MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_IPSEC_SPI  = 0x4,
+};
+
+struct mlx5_ifc_rx_hash_field_select_bits {
+	u8 l3_prot_type[0x1];
+	u8 l4_prot_type[0x1];
+	u8 selected_fields[0x1e];
+};
+
+enum {
+	MLX5_TIRC_DISP_TYPE_DIRECT    = 0x0,
+	MLX5_TIRC_DISP_TYPE_INDIRECT  = 0x1,
+};
+
+enum {
+	MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO  = 0x1,
+	MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO  = 0x2,
+};
+
+enum {
+	MLX5_RX_HASH_FN_NONE           = 0x0,
+	MLX5_RX_HASH_FN_INVERTED_XOR8  = 0x1,
+	MLX5_RX_HASH_FN_TOEPLITZ       = 0x2,
+};
+
+enum {
+	MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST    = 0x1,
+	MLX5_TIRC_SELF_LB_BLOCK_BLOCK_MULTICAST  = 0x2,
+};
+
+struct mlx5_ifc_tirc_bits {
+	u8 reserved_at_0[0x20];
+	u8 disp_type[0x4];
+	u8 reserved_at_24[0x1c];
+	u8 reserved_at_40[0x40];
+	u8 reserved_at_80[0x4];
+	u8 lro_timeout_period_usecs[0x10];
+	u8 lro_enable_mask[0x4];
+	u8 lro_max_msg_sz[0x8];
+	u8 reserved_at_a0[0x40];
+	u8 reserved_at_e0[0x8];
+	u8 inline_rqn[0x18];
+	u8 rx_hash_symmetric[0x1];
+	u8 reserved_at_101[0x1];
+	u8 tunneled_offload_en[0x1];
+	u8 reserved_at_103[0x5];
+	u8 indirect_table[0x18];
+	u8 rx_hash_fn[0x4];
+	u8 reserved_at_124[0x2];
+	u8 self_lb_block[0x2];
+	u8 transport_domain[0x18];
+	u8 rx_hash_toeplitz_key[10][0x20];
+	struct mlx5_ifc_rx_hash_field_select_bits rx_hash_field_selector_outer;
+	struct mlx5_ifc_rx_hash_field_select_bits rx_hash_field_selector_inner;
+	u8 reserved_at_2c0[0x4c0];
+};
+
+struct mlx5_ifc_create_tir_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x8];
+	u8 tirn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_create_tir_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_tirc_bits ctx;
+};
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread
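
A sketch of how the new attributes can describe an indirect, LRO-enabled
TIR (illustrative only; rqtn, td and ctx are assumed to come from an
existing RQT object, the transport domain and the DevX-capable device
context, and the lro_max_msg_sz units are device-specific):

	struct mlx5_devx_tir_attr tir_attr = {
		.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT,
		.lro_enable_mask = MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
				   MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO,
		.lro_timeout_period_usecs = 32, /* Session timeout, usecs. */
		.lro_max_msg_sz = 0xff, /* Max coalesced size, device units. */
		.rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ,
		.indirect_table = rqtn, /* RQT number. */
		.transport_domain = td,
	};
	struct mlx5_devx_obj *tir = mlx5_devx_cmd_create_tir(ctx, &tir_attr);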

* [dpdk-dev] [PATCH 12/28] net/mlx5: create advanced RxQ table using new API
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (10 preceding siblings ...)
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 11/28] net/mlx5: create advanced Rx " Matan Azrad
@ 2019-07-22  9:12 ` Matan Azrad
  2019-07-22  9:21   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 13/28] net/mlx5: allocate door-bells " Matan Azrad
                   ` (18 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:12 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement function mlx5_devx_cmd_create_rqt() to create an RQT
object using the DevX API.
Add the related structs in mlx5.h and mlx5_prm.h.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           |  9 +++++++
 drivers/net/mlx5/mlx5_devx_cmds.c | 54 +++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 40 +++++++++++++++++++++++++++++
 3 files changed, 103 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 183acfb..8d72b2d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -344,6 +344,13 @@ struct mlx5_devx_tir_attr {
 	struct mlx5_rx_hash_field_select rx_hash_field_selector_inner;
 };
 
+/* RQT attributes structure, used by RQT operations. */
+struct mlx5_devx_rqt_attr {
+	uint32_t rqt_max_size:16;
+	uint32_t rqt_actual_size:16;
+	uint32_t rq_list[];
+};
+
 /**
  * Type of object being allocated.
  */
@@ -836,5 +843,7 @@ int mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 			    struct mlx5_devx_modify_rq_attr *rq_attr);
 struct mlx5_devx_obj *mlx5_devx_cmd_create_tir(struct ibv_context *ctx,
 					struct mlx5_devx_tir_attr *tir_attr);
+struct mlx5_devx_obj *mlx5_devx_cmd_create_rqt(struct ibv_context *ctx,
+					struct mlx5_devx_rqt_attr *rqt_attr);
 
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index 5faa2a0..acfe1de 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -648,3 +648,57 @@ struct mlx5_devx_obj *
 	tir->id = MLX5_GET(create_tir_out, out, tirn);
 	return tir;
 }
+
+/**
+ * Create RQT using DevX API.
+ *
+ * @param[in] ctx
+ *   ibv_context returned from mlx5dv_open_device.
+ * @param [in] rqt_attr
+ *   Pointer to RQT attributes structure.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_rqt(struct ibv_context *ctx,
+			 struct mlx5_devx_rqt_attr *rqt_attr)
+{
+	uint32_t *in = NULL;
+	uint32_t inlen = MLX5_ST_SZ_BYTES(create_rqt_in) +
+			 rqt_attr->rqt_actual_size * sizeof(uint32_t);
+	uint32_t out[MLX5_ST_SZ_DW(create_rqt_out)] = {0};
+	void *rqt_ctx;
+	struct mlx5_devx_obj *rqt = NULL;
+	int i;
+
+	in = rte_calloc(__func__, 1, inlen, 0);
+	if (!in) {
+		DRV_LOG(ERR, "Failed to allocate RQT IN data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	rqt = rte_calloc(__func__, 1, sizeof(*rqt), 0);
+	if (!rqt) {
+		DRV_LOG(ERR, "Failed to allocate RQT data");
+		rte_errno = ENOMEM;
+		rte_free(in);
+		return NULL;
+	}
+	MLX5_SET(create_rqt_in, in, opcode, MLX5_CMD_OP_CREATE_RQT);
+	rqt_ctx = MLX5_ADDR_OF(create_rqt_in, in, rqt_context);
+	MLX5_SET(rqtc, rqt_ctx, rqt_max_size, rqt_attr->rqt_max_size);
+	MLX5_SET(rqtc, rqt_ctx, rqt_actual_size, rqt_attr->rqt_actual_size);
+	for (i = 0; i < rqt_attr->rqt_actual_size; i++)
+		MLX5_SET(rqtc, rqt_ctx, rq_num[i], rqt_attr->rq_list[i]);
+	rqt->obj = mlx5_glue->devx_obj_create(ctx, in, inlen, out, sizeof(out));
+	rte_free(in);
+	if (!rqt->obj) {
+		DRV_LOG(ERR, "Failed to create RQT using DevX");
+		rte_errno = errno;
+		rte_free(rqt);
+		return NULL;
+	}
+	rqt->id = MLX5_GET(create_rqt_out, out, rqtn);
+	return rqt;
+}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 970dee0..b0e281f 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -631,6 +631,7 @@ enum {
 	MLX5_CMD_OP_CREATE_RQ = 0x908,
 	MLX5_CMD_OP_MODIFY_RQ = 0x909,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
+	MLX5_CMD_OP_CREATE_RQT = 0x916,
 	MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939,
 	MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b,
 };
@@ -1488,6 +1489,45 @@ struct mlx5_ifc_create_tir_in_bits {
 	struct mlx5_ifc_tirc_bits ctx;
 };
 
+struct mlx5_ifc_rq_num_bits {
+	u8 reserved_at_0[0x8];
+	u8 rq_num[0x18];
+};
+
+struct mlx5_ifc_rqtc_bits {
+	u8 reserved_at_0[0xa0];
+	u8 reserved_at_a0[0x10];
+	u8 rqt_max_size[0x10];
+	u8 reserved_at_c0[0x10];
+	u8 rqt_actual_size[0x10];
+	u8 reserved_at_e0[0x6a0];
+	struct mlx5_ifc_rq_num_bits rq_num[];
+};
+
+struct mlx5_ifc_create_rqt_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x8];
+	u8 rqtn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+#ifdef PEDANTIC
+#pragma GCC diagnostic ignored "-Wpedantic"
+#endif
+struct mlx5_ifc_create_rqt_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rqtc_bits rqt_context;
+};
+#ifdef PEDANTIC
+#pragma GCC diagnostic error "-Wpedantic"
+#endif
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread
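
Because rq_list[] is a flexible array member, callers size the attributes
structure for the number of RQs before invoking the command. A minimal
sketch (ctx, n and the rq_ids[] array of RQ numbers are assumed inputs):

	struct mlx5_devx_rqt_attr *rqt_attr;
	struct mlx5_devx_obj *rqt;
	uint32_t i;

	rqt_attr = rte_calloc(__func__, 1, sizeof(*rqt_attr) +
			      n * sizeof(uint32_t), 0);
	if (!rqt_attr)
		return NULL;
	rqt_attr->rqt_max_size = n;
	rqt_attr->rqt_actual_size = n;
	for (i = 0; i < n; i++)
		rqt_attr->rq_list[i] = rq_ids[i]; /* RQ numbers (rq->id). */
	rqt = mlx5_devx_cmd_create_rqt(ctx, rqt_attr);
	rte_free(rqt_attr);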

* [dpdk-dev] [PATCH 13/28] net/mlx5: allocate door-bells using new API
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (11 preceding siblings ...)
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 12/28] net/mlx5: create advanced RxQ table " Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:20   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 14/28] net/mlx5: rename RxQ verbs to general RxQ object Matan Azrad
                   ` (17 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

When using DevX API, memory for door-bell records should be allocated
by PMD and registered using DevX API.

This patch implements the utility functions to support it:
- Add struct mlx5_devx_dbr_page, containing door-bells page data.
- Add list of struct mlx5_devx_dbr_page door-bell pages to device
  private data.
- Implement function mlx5_alloc_dbr_page() to allocate a page for
  door-bell records, and register it using the DevX API.
- Implement function mlx5_get_dbr() to acquire a door-bell record
  from the door-bells page, allocating a new page if needed.
- Implement function mlx5_release_dbr() to release a door-bell
  record that is no longer needed, freeing the containing page if
  it becomes empty.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c      | 119 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h      |  21 ++++++++
 drivers/net/mlx5/mlx5_rxtx.h |   1 +
 3 files changed, 141 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 8d0ebeb..6b878da 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1312,6 +1312,125 @@ struct mlx5_dev_spawn_data {
 }
 
 /**
+ * Allocate page of door-bells and register it using DevX API.
+ *
+ * @param [in] dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   Pointer to new page on success, NULL otherwise.
+ */
+static struct mlx5_devx_dbr_page *
+mlx5_alloc_dbr_page(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_devx_dbr_page *page;
+
+	/* Allocate space for door-bell page and management data. */
+	page = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_devx_dbr_page),
+				 RTE_CACHE_LINE_SIZE, dev->device->numa_node);
+	if (!page) {
+		DRV_LOG(ERR, "port %u cannot allocate dbr page",
+			dev->data->port_id);
+		return NULL;
+	}
+	/* Register allocated memory. */
+	page->umem = mlx5_glue->devx_umem_reg(priv->sh->ctx, page->dbrs,
+					      MLX5_DBR_PAGE_SIZE, 0);
+	if (!page->umem) {
+		DRV_LOG(ERR, "port %u cannot umem reg dbr page",
+			dev->data->port_id);
+		rte_free(page);
+		return NULL;
+	}
+	return page;
+}
+
+/**
+ * Find the next available door-bell, allocate new page if needed.
+ *
+ * @param [in] dev
+ *   Pointer to Ethernet device.
+ * @param [out] dbr_page
+ *   Door-bell page containing the page data.
+ *
+ * @return
+ *   Door-bell address offset on success, a negative error value otherwise.
+ */
+int64_t
+mlx5_get_dbr(struct rte_eth_dev *dev, struct mlx5_devx_dbr_page **dbr_page)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_devx_dbr_page *page = NULL;
+	uint32_t i, j;
+
+	LIST_FOREACH(page, &priv->dbrpgs, next)
+		if (page->dbr_count < MLX5_DBR_PER_PAGE)
+			break;
+	if (!page) { /* No page with free door-bell exists. */
+		page = mlx5_alloc_dbr_page(dev);
+		if (!page) /* Failed to allocate new page. */
+			return (-1);
+		LIST_INSERT_HEAD(&priv->dbrpgs, page, next);
+	}
+	/* Loop to find bitmap part with clear bit. */
+	for (i = 0;
+	     i < MLX5_DBR_BITMAP_SIZE && page->dbr_bitmap[i] == UINT64_MAX;
+	     i++)
+		; /* Empty. */
+	/* Find the first clear bit. */
+	j = rte_bsf64(~page->dbr_bitmap[i]);
+	assert(i < (MLX5_DBR_PER_PAGE / 64));
+	page->dbr_bitmap[i] |= (1ULL << j);
+	page->dbr_count++;
+	*dbr_page = page;
+	return (((i * 64) + j) * sizeof(uint64_t));
+}
+
+/**
+ * Release a door-bell record.
+ *
+ * @param [in] dev
+ *   Pointer to Ethernet device.
+ * @param [in] umem_id
+ *   UMEM ID of page containing the door-bell record to release.
+ * @param [in] offset
+ *   Offset of door-bell record in page.
+ *
+ * @return
+ *   0 on success, a negative error value otherwise.
+ */
+int32_t
+mlx5_release_dbr(struct rte_eth_dev *dev, uint32_t umem_id, uint64_t offset)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_devx_dbr_page *page = NULL;
+	int ret = 0;
+
+	LIST_FOREACH(page, &priv->dbrpgs, next)
+		/* Find the page this address belongs to. */
+		if (page->umem->umem_id == umem_id)
+			break;
+	if (!page)
+		return -EINVAL;
+	page->dbr_count--;
+	if (!page->dbr_count) {
+		/* Page not used, free it and remove from list. */
+		LIST_REMOVE(page, next);
+		if (page->umem)
+			ret = -mlx5_glue->devx_umem_dereg(page->umem);
+		rte_free(page);
+	} else {
+		/* Mark in bitmap that this door-bell is not in use. */
+		int i = offset / MLX5_DBR_SIZE / 64;
+		int j = offset / MLX5_DBR_SIZE % 64;
+
+		page->dbr_bitmap[i] &= ~(1ULL << j);
+	}
+	return ret;
+}
+
+/**
  * Spawn an Ethernet device from Verbs information.
  *
  * @param dpdk_dev
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 8d72b2d..1543aaf 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -36,6 +36,7 @@
 #include "mlx5_mr.h"
 #include "mlx5_autoconf.h"
 #include "mlx5_defs.h"
+#include "mlx5_glue.h"
 
 enum {
 	PCI_VENDOR_ID_MELLANOX = 0x15b3,
@@ -498,6 +499,21 @@ struct mlx5_flow_tbl_resource {
 #define MLX5_MAX_TABLES_FDB 32
 #define MLX5_GROUP_FACTOR 1
 
+#define MLX5_DBR_PAGE_SIZE 4096 /* Must be >= 512. */
+#define MLX5_DBR_SIZE 8
+#define MLX5_DBR_PER_PAGE (MLX5_DBR_PAGE_SIZE / MLX5_DBR_SIZE)
+#define MLX5_DBR_BITMAP_SIZE (MLX5_DBR_PER_PAGE / 64)
+
+struct mlx5_devx_dbr_page {
+	/* Door-bell records, must be first member in structure. */
+	uint8_t dbrs[MLX5_DBR_PAGE_SIZE];
+	LIST_ENTRY(mlx5_devx_dbr_page) next; /* Pointer to the next element. */
+	struct mlx5dv_devx_umem *umem;
+	uint32_t dbr_count; /* Number of door-bell records in use. */
+	/* 1 bit marks matching door-bell is in use. */
+	uint64_t dbr_bitmap[MLX5_DBR_BITMAP_SIZE];
+};
+
 /*
  * Shared Infiniband device context for Master/Representors
  * which belong to same IB device with multiple IB ports.
@@ -617,6 +633,7 @@ struct mlx5_priv {
 	int nl_socket_rdma; /* Netlink socket (NETLINK_RDMA). */
 	int nl_socket_route; /* Netlink socket (NETLINK_ROUTE). */
 	uint32_t nl_sn; /* Netlink message sequence number. */
+	LIST_HEAD(dbrpage, mlx5_devx_dbr_page) dbrpgs; /* Door-bell pages. */
 #ifndef RTE_ARCH_64
 	rte_spinlock_t uar_lock_cq; /* CQs share a common distinct UAR */
 	rte_spinlock_t uar_lock[MLX5_UAR_PAGE_NUM_MAX];
@@ -631,6 +648,10 @@ struct mlx5_priv {
 
 int mlx5_getenv_int(const char *);
 int mlx5_proc_priv_init(struct rte_eth_dev *dev);
+int64_t mlx5_get_dbr(struct rte_eth_dev *dev,
+		     struct mlx5_devx_dbr_page **dbr_page);
+int32_t mlx5_release_dbr(struct rte_eth_dev *dev, uint32_t umem_id,
+			 uint64_t offset);
 
 /* mlx5_ethdev.c */
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 2f688ac..436b453 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -29,6 +29,7 @@
 #include <rte_spinlock.h>
 #include <rte_io.h>
 #include <rte_bus_pci.h>
+#include <rte_malloc.h>
 
 #include "mlx5_utils.h"
 #include "mlx5.h"
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread
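
The helpers are intended to be used in get/release pairs around the
lifetime of a DevX queue. A sketch of the expected pairing (names other
than the three helpers are illustrative):

	struct mlx5_devx_dbr_page *dbr_page = NULL;
	int64_t dbr_offset;
	uint32_t *dbr;

	dbr_offset = mlx5_get_dbr(dev, &dbr_page);
	if (dbr_offset < 0)
		return -1; /* No free record and page allocation failed. */
	/* The record address is the page base plus the returned byte offset. */
	dbr = (uint32_t *)(dbr_page->dbrs + dbr_offset);
	/* Hand dbr (and dbr_page->umem) to the DevX queue being created. */
	/* On teardown, release by UMEM id and the same byte offset. */
	mlx5_release_dbr(dev, dbr_page->umem->umem_id, dbr_offset);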

* [dpdk-dev] [PATCH 14/28] net/mlx5: rename RxQ verbs to general RxQ object
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (12 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 13/28] net/mlx5: allocate door-bells " Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:22   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 15/28] net/mlx5: rename verbs indirection table to obj Matan Azrad
                   ` (16 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Prepare for the introduction of the DevX RxQ object.
The RxQ object is currently created using Verbs only.
The next patches will add the option to create the RxQ object using
DevX.
This patch renames rxq_ibv to rxq_obj wherever relevant, and adds the
DevX items to the relevant structs.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c         |   4 +-
 drivers/net/mlx5/mlx5.h         |   4 +-
 drivers/net/mlx5/mlx5_rxq.c     | 142 ++++++++++++++++++++--------------------
 drivers/net/mlx5/mlx5_rxtx.c    |   2 +-
 drivers/net/mlx5/mlx5_rxtx.h    |  24 +++++--
 drivers/net/mlx5/mlx5_trigger.c |   6 +-
 drivers/net/mlx5/mlx5_vlan.c    |   4 +-
 7 files changed, 98 insertions(+), 88 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 6b878da..7d267a6 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -823,9 +823,9 @@ struct mlx5_dev_spawn_data {
 	if (ret)
 		DRV_LOG(WARNING, "port %u some indirection table still remain",
 			dev->data->port_id);
-	ret = mlx5_rxq_ibv_verify(dev);
+	ret = mlx5_rxq_obj_verify(dev);
 	if (ret)
-		DRV_LOG(WARNING, "port %u some Verbs Rx queue still remain",
+		DRV_LOG(WARNING, "port %u some Rx queue objects still remain",
 			dev->data->port_id);
 	ret = mlx5_rxq_verify(dev);
 	if (ret)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 1543aaf..fcbaaae 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -375,7 +375,7 @@ struct mlx5_verbs_alloc_ctx {
 /* Flow drop context necessary due to Verbs API. */
 struct mlx5_drop {
 	struct mlx5_hrxq *hrxq; /* Hash Rx queue queue. */
-	struct mlx5_rxq_ibv *rxq; /* Verbs Rx queue. */
+	struct mlx5_rxq_obj *rxq; /* Rx queue object. */
 };
 
 #define MLX5_COUNTERS_PER_POOL 512
@@ -612,7 +612,7 @@ struct mlx5_priv {
 	struct mlx5_flows flows; /* RTE Flow rules. */
 	struct mlx5_flows ctrl_flows; /* Control flow rules. */
 	LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */
-	LIST_HEAD(rxqibv, mlx5_rxq_ibv) rxqsibv; /* Verbs Rx queues. */
+	LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues. */
 	LIST_HEAD(hrxq, mlx5_hrxq) hrxqs; /* Verbs Hash Rx queues. */
 	LIST_HEAD(txq, mlx5_txq_ctrl) txqsctrl; /* DPDK Tx queues. */
 	LIST_HEAD(txqibv, mlx5_txq_ibv) txqsibv; /* Verbs Tx queues. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 8567ee5..20a4695 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -533,7 +533,7 @@
 }
 
 /**
- * Get an Rx queue Verbs object.
+ * Get an Rx queue Verbs/DevX object.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -541,10 +541,10 @@
  *   Queue index in DPDK Rx queue array
  *
  * @return
- *   The Verbs object if it exists.
+ *   The Verbs/DevX object if it exists.
  */
-static struct mlx5_rxq_ibv *
-mlx5_rxq_ibv_get(struct rte_eth_dev *dev, uint16_t idx)
+static struct mlx5_rxq_obj *
+mlx5_rxq_obj_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
@@ -555,35 +555,35 @@
 	if (!rxq_data)
 		return NULL;
 	rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-	if (rxq_ctrl->ibv)
-		rte_atomic32_inc(&rxq_ctrl->ibv->refcnt);
-	return rxq_ctrl->ibv;
+	if (rxq_ctrl->obj)
+		rte_atomic32_inc(&rxq_ctrl->obj->refcnt);
+	return rxq_ctrl->obj;
 }
 
 /**
- * Release an Rx verbs queue object.
+ * Release an Rx verbs/DevX queue object.
  *
- * @param rxq_ibv
- *   Verbs Rx queue object.
+ * @param rxq_obj
+ *   Verbs/DevX Rx queue object.
  *
  * @return
  *   1 while a reference on it exists, 0 when freed.
  */
 static int
-mlx5_rxq_ibv_release(struct mlx5_rxq_ibv *rxq_ibv)
+mlx5_rxq_obj_release(struct mlx5_rxq_obj *rxq_obj)
 {
-	assert(rxq_ibv);
-	assert(rxq_ibv->wq);
-	assert(rxq_ibv->cq);
-	if (rte_atomic32_dec_and_test(&rxq_ibv->refcnt)) {
-		rxq_free_elts(rxq_ibv->rxq_ctrl);
-		claim_zero(mlx5_glue->destroy_wq(rxq_ibv->wq));
-		claim_zero(mlx5_glue->destroy_cq(rxq_ibv->cq));
-		if (rxq_ibv->channel)
+	assert(rxq_obj);
+	assert(rxq_obj->wq);
+	assert(rxq_obj->cq);
+	if (rte_atomic32_dec_and_test(&rxq_obj->refcnt)) {
+		rxq_free_elts(rxq_obj->rxq_ctrl);
+		claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+		claim_zero(mlx5_glue->destroy_cq(rxq_obj->cq));
+		if (rxq_obj->channel)
 			claim_zero(mlx5_glue->destroy_comp_channel
-				   (rxq_ibv->channel));
-		LIST_REMOVE(rxq_ibv, next);
-		rte_free(rxq_ibv);
+				   (rxq_obj->channel));
+		LIST_REMOVE(rxq_obj, next);
+		rte_free(rxq_obj);
 		return 0;
 	}
 	return 1;
@@ -622,14 +622,14 @@
 	}
 	intr_handle->type = RTE_INTR_HANDLE_EXT;
 	for (i = 0; i != n; ++i) {
-		/* This rxq ibv must not be released in this function. */
-		struct mlx5_rxq_ibv *rxq_ibv = mlx5_rxq_ibv_get(dev, i);
+		/* This rxq obj must not be released in this function. */
+		struct mlx5_rxq_obj *rxq_obj = mlx5_rxq_obj_get(dev, i);
 		int fd;
 		int flags;
 		int rc;
 
 		/* Skip queues that cannot request interrupts. */
-		if (!rxq_ibv || !rxq_ibv->channel) {
+		if (!rxq_obj || !rxq_obj->channel) {
 			/* Use invalid intr_vec[] index to disable entry. */
 			intr_handle->intr_vec[i] =
 				RTE_INTR_VEC_RXTX_OFFSET +
@@ -646,7 +646,7 @@
 			rte_errno = ENOMEM;
 			return -rte_errno;
 		}
-		fd = rxq_ibv->channel->fd;
+		fd = rxq_obj->channel->fd;
 		flags = fcntl(fd, F_GETFL);
 		rc = fcntl(fd, F_SETFL, flags | O_NONBLOCK);
 		if (rc < 0) {
@@ -702,8 +702,8 @@
 		 */
 		rxq_data = (*priv->rxqs)[i];
 		rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-		if (rxq_ctrl->ibv)
-			mlx5_rxq_ibv_release(rxq_ctrl->ibv);
+		if (rxq_ctrl->obj)
+			mlx5_rxq_obj_release(rxq_ctrl->obj);
 	}
 free:
 	rte_intr_free_epoll_fd(intr_handle);
@@ -763,15 +763,15 @@
 	}
 	rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	if (rxq_ctrl->irq) {
-		struct mlx5_rxq_ibv *rxq_ibv;
+		struct mlx5_rxq_obj *rxq_obj;
 
-		rxq_ibv = mlx5_rxq_ibv_get(dev, rx_queue_id);
-		if (!rxq_ibv) {
+		rxq_obj = mlx5_rxq_obj_get(dev, rx_queue_id);
+		if (!rxq_obj) {
 			rte_errno = EINVAL;
 			return -rte_errno;
 		}
 		mlx5_arm_cq(rxq_data, rxq_data->cq_arm_sn);
-		mlx5_rxq_ibv_release(rxq_ibv);
+		mlx5_rxq_obj_release(rxq_obj);
 	}
 	return 0;
 }
@@ -793,7 +793,7 @@
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_data *rxq_data;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
-	struct mlx5_rxq_ibv *rxq_ibv = NULL;
+	struct mlx5_rxq_obj *rxq_obj = NULL;
 	struct ibv_cq *ev_cq;
 	void *ev_ctx;
 	int ret;
@@ -806,24 +806,24 @@
 	rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	if (!rxq_ctrl->irq)
 		return 0;
-	rxq_ibv = mlx5_rxq_ibv_get(dev, rx_queue_id);
-	if (!rxq_ibv) {
+	rxq_obj = mlx5_rxq_obj_get(dev, rx_queue_id);
+	if (!rxq_obj) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	ret = mlx5_glue->get_cq_event(rxq_ibv->channel, &ev_cq, &ev_ctx);
-	if (ret || ev_cq != rxq_ibv->cq) {
+	ret = mlx5_glue->get_cq_event(rxq_obj->channel, &ev_cq, &ev_ctx);
+	if (ret || ev_cq != rxq_obj->cq) {
 		rte_errno = EINVAL;
 		goto exit;
 	}
 	rxq_data->cq_arm_sn++;
-	mlx5_glue->ack_cq_events(rxq_ibv->cq, 1);
-	mlx5_rxq_ibv_release(rxq_ibv);
+	mlx5_glue->ack_cq_events(rxq_obj->cq, 1);
+	mlx5_rxq_obj_release(rxq_obj);
 	return 0;
 exit:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	if (rxq_ibv)
-		mlx5_rxq_ibv_release(rxq_ibv);
+	if (rxq_obj)
+		mlx5_rxq_obj_release(rxq_obj);
 	DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d",
 		dev->data->port_id, rx_queue_id);
 	rte_errno = ret; /* Restore rte_errno. */
@@ -831,7 +831,7 @@
 }
 
 /**
- * Create the Rx queue Verbs object.
+ * Create the Rx queue Verbs/DevX object.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -839,10 +839,10 @@
  *   Queue index in DPDK Rx queue array
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
-struct mlx5_rxq_ibv *
-mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
+struct mlx5_rxq_obj *
+mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
@@ -863,7 +863,7 @@ struct mlx5_rxq_ibv *
 	} attr;
 	unsigned int cqe_n;
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
-	struct mlx5_rxq_ibv *tmpl = NULL;
+	struct mlx5_rxq_obj *tmpl = NULL;
 	struct mlx5dv_cq cq_info;
 	struct mlx5dv_rwq rwq;
 	int ret = 0;
@@ -1062,7 +1062,7 @@ struct mlx5_rxq_ibv *
 	DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
 		idx, (void *)&tmpl);
 	rte_atomic32_inc(&tmpl->refcnt);
-	LIST_INSERT_HEAD(&priv->rxqsibv, tmpl, next);
+	LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
 	return tmpl;
 error:
@@ -1083,24 +1083,24 @@ struct mlx5_rxq_ibv *
 }
 
 /**
- * Verify the Verbs Rx queue list is empty
+ * Verify the Rx queue objects list is empty
  *
  * @param dev
  *   Pointer to Ethernet device.
  *
  * @return
- *   The number of object not released.
+ *   The number of objects not released.
  */
 int
-mlx5_rxq_ibv_verify(struct rte_eth_dev *dev)
+mlx5_rxq_obj_verify(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int ret = 0;
-	struct mlx5_rxq_ibv *rxq_ibv;
+	struct mlx5_rxq_obj *rxq_obj;
 
-	LIST_FOREACH(rxq_ibv, &priv->rxqsibv, next) {
-		DRV_LOG(DEBUG, "port %u Verbs Rx queue %u still referenced",
-			dev->data->port_id, rxq_ibv->rxq_ctrl->rxq.idx);
+	LIST_FOREACH(rxq_obj, &priv->rxqsobj, next) {
+		DRV_LOG(DEBUG, "port %u Rx queue %u still referenced",
+			dev->data->port_id, rxq_obj->rxq_ctrl->rxq.idx);
 		++ret;
 	}
 	return ret;
@@ -1502,7 +1502,7 @@ struct mlx5_rxq_ctrl *
 		rxq_ctrl = container_of((*priv->rxqs)[idx],
 					struct mlx5_rxq_ctrl,
 					rxq);
-		mlx5_rxq_ibv_get(dev, idx);
+		mlx5_rxq_obj_get(dev, idx);
 		rte_atomic32_inc(&rxq_ctrl->refcnt);
 	}
 	return rxq_ctrl;
@@ -1529,8 +1529,8 @@ struct mlx5_rxq_ctrl *
 		return 0;
 	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
 	assert(rxq_ctrl->priv);
-	if (rxq_ctrl->ibv && !mlx5_rxq_ibv_release(rxq_ctrl->ibv))
-		rxq_ctrl->ibv = NULL;
+	if (rxq_ctrl->obj && !mlx5_rxq_obj_release(rxq_ctrl->obj))
+		rxq_ctrl->obj = NULL;
 	if (rte_atomic32_dec_and_test(&rxq_ctrl->refcnt)) {
 		mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 		LIST_REMOVE(rxq_ctrl, next);
@@ -1602,7 +1602,7 @@ struct mlx5_rxq_ctrl *
 
 		if (!rxq)
 			goto error;
-		wq[i] = rxq->ibv->wq;
+		wq[i] = rxq->obj->wq;
 		ind_tbl->queues[i] = queues[i];
 	}
 	ind_tbl->queues_n = queues_n;
@@ -1953,22 +1953,22 @@ struct mlx5_hrxq *
 }
 
 /**
- * Create a drop Rx queue Verbs object.
+ * Create a drop Rx queue Verbs/DevX object.
  *
  * @param dev
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
-static struct mlx5_rxq_ibv *
-mlx5_rxq_ibv_drop_new(struct rte_eth_dev *dev)
+static struct mlx5_rxq_obj *
+mlx5_rxq_obj_drop_new(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct ibv_context *ctx = priv->sh->ctx;
 	struct ibv_cq *cq;
 	struct ibv_wq *wq = NULL;
-	struct mlx5_rxq_ibv *rxq;
+	struct mlx5_rxq_obj *rxq;
 
 	if (priv->drop_queue.rxq)
 		return priv->drop_queue.rxq;
@@ -2013,19 +2013,19 @@ struct mlx5_hrxq *
 }
 
 /**
- * Release a drop Rx queue Verbs object.
+ * Release a drop Rx queue Verbs/DevX object.
  *
  * @param dev
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
 static void
-mlx5_rxq_ibv_drop_release(struct rte_eth_dev *dev)
+mlx5_rxq_obj_drop_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ibv *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
 
 	if (rxq->wq)
 		claim_zero(mlx5_glue->destroy_wq(rxq->wq));
@@ -2049,10 +2049,10 @@ struct mlx5_hrxq *
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_ind_table_ibv *ind_tbl;
-	struct mlx5_rxq_ibv *rxq;
+	struct mlx5_rxq_obj *rxq;
 	struct mlx5_ind_table_ibv tmpl;
 
-	rxq = mlx5_rxq_ibv_drop_new(dev);
+	rxq = mlx5_rxq_obj_drop_new(dev);
 	if (!rxq)
 		return NULL;
 	tmpl.ind_table = mlx5_glue->create_rwq_ind_table
@@ -2077,7 +2077,7 @@ struct mlx5_hrxq *
 	ind_tbl->ind_table = tmpl.ind_table;
 	return ind_tbl;
 error:
-	mlx5_rxq_ibv_drop_release(dev);
+	mlx5_rxq_obj_drop_release(dev);
 	return NULL;
 }
 
@@ -2094,7 +2094,7 @@ struct mlx5_hrxq *
 	struct mlx5_ind_table_ibv *ind_tbl = priv->drop_queue.hrxq->ind_table;
 
 	claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
-	mlx5_rxq_ibv_drop_release(dev);
+	mlx5_rxq_obj_drop_release(dev);
 	rte_free(ind_tbl);
 	priv->drop_queue.hrxq->ind_table = NULL;
 }
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 35d60f4..9bfb002 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -815,7 +815,7 @@ enum mlx5_txcmp_code {
 		struct mlx5_rxq_ctrl *rxq_ctrl =
 			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-		ret = mlx5_glue->modify_wq(rxq_ctrl->ibv->wq, &mod);
+		ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change Rx WQ state to %u  - %s\n",
 					sm->state, strerror(errno));
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 436b453..eb20a07 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -141,13 +141,23 @@ struct mlx5_rxq_data {
 	uint32_t tunnel; /* Tunnel information. */
 } __rte_cache_aligned;
 
-/* Verbs Rx queue elements. */
-struct mlx5_rxq_ibv {
-	LIST_ENTRY(mlx5_rxq_ibv) next; /* Pointer to the next element. */
+enum mlx5_rxq_obj_type {
+	MLX5_RXQ_OBJ_TYPE_IBV,		/* mlx5_rxq_obj with ibv_wq. */
+	MLX5_RXQ_OBJ_TYPE_DEVX_RQ,	/* mlx5_rxq_obj with mlx5_devx_rq. */
+};
+
+/* Verbs/DevX Rx queue elements. */
+struct mlx5_rxq_obj {
+	LIST_ENTRY(mlx5_rxq_obj) next; /* Pointer to the next element. */
 	rte_atomic32_t refcnt; /* Reference counter. */
 	struct mlx5_rxq_ctrl *rxq_ctrl; /* Back pointer to parent. */
 	struct ibv_cq *cq; /* Completion Queue. */
-	struct ibv_wq *wq; /* Work Queue. */
+	enum mlx5_rxq_obj_type type;
+	RTE_STD_C11
+	union {
+		struct ibv_wq *wq; /* Work Queue. */
+		struct mlx5_devx_obj *rq; /* DevX object for Rx Queue. */
+	};
 	struct ibv_comp_channel *channel;
 };
 
@@ -156,7 +166,7 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
 	rte_atomic32_t refcnt; /* Reference counter. */
-	struct mlx5_rxq_ibv *ibv; /* Verbs elements. */
+	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
 	unsigned int socket; /* CPU socket ID for allocations. */
 	unsigned int irq:1; /* Whether IRQ is enabled. */
@@ -300,8 +310,8 @@ int mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
 int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
-struct mlx5_rxq_ibv *mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx);
-int mlx5_rxq_ibv_verify(struct rte_eth_dev *dev);
+struct mlx5_rxq_obj *mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx);
+int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
 				   uint16_t desc, unsigned int socket,
 				   const struct rte_eth_rxconf *conf,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 864c985..54353ee 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -123,10 +123,10 @@
 		ret = rxq_alloc_elts(rxq_ctrl);
 		if (ret)
 			goto error;
-		rxq_ctrl->ibv = mlx5_rxq_ibv_new(dev, i);
-		if (!rxq_ctrl->ibv)
+		rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i);
+		if (!rxq_ctrl->obj)
 			goto error;
-		rxq_ctrl->wqn = rxq_ctrl->ibv->wq->wq_num;
+		rxq_ctrl->wqn = rxq_ctrl->obj->wq->wq_num;
 	}
 	return 0;
 error:
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 4004930..67518c2 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -127,7 +127,7 @@
 	}
 	DRV_LOG(DEBUG, "port %u set VLAN offloads 0x%x for port %uqueue %d",
 		dev->data->port_id, vlan_offloads, rxq->port_id, queue);
-	if (!rxq_ctrl->ibv) {
+	if (!rxq_ctrl->obj) {
 		/* Update related bits in RX queue. */
 		rxq->vlan_strip = !!on;
 		return;
@@ -137,7 +137,7 @@
 		.flags_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING,
 		.flags = vlan_offloads,
 	};
-	ret = mlx5_glue->modify_wq(rxq_ctrl->ibv->wq, &mod);
+	ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
 	if (ret) {
 		DRV_LOG(ERR, "port %u failed to modified stripping mode: %s",
 			dev->data->port_id, strerror(rte_errno));
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread
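
With the tagged union in place, teardown paths can dispatch on the new
type field once DevX RQs are added. A sketch, assuming
mlx5_devx_cmd_destroy() is the generic DevX object destroy helper
introduced earlier in this series:

	if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_IBV)
		claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
	else /* MLX5_RXQ_OBJ_TYPE_DEVX_RQ. */
		claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));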

* [dpdk-dev] [PATCH 15/28] net/mlx5: rename verbs indirection table to obj
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (13 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 14/28] net/mlx5: rename RxQ verbs to general RxQ object Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:22   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 16/28] net/mlx5: rename hash RxQ verbs to general Matan Azrad
                   ` (15 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Prepare for the introduction of the DevX RQT object.
The Rx indirection table object is currently created using Verbs only.
The next patches will add the option to create an RQT object using
DevX.
This patch renames ind_table_ibv to ind_table_obj wherever relevant,
and adds the DevX items to the relevant structs.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c      |  2 +-
 drivers/net/mlx5/mlx5.h      |  4 +--
 drivers/net/mlx5/mlx5_rxq.c  | 64 ++++++++++++++++++++++----------------------
 drivers/net/mlx5/mlx5_rxtx.h | 20 ++++++++++----
 4 files changed, 50 insertions(+), 40 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 7d267a6..23ee887 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -819,7 +819,7 @@ struct mlx5_dev_spawn_data {
 	if (ret)
 		DRV_LOG(WARNING, "port %u some hash Rx queue still remain",
 			dev->data->port_id);
-	ret = mlx5_ind_table_ibv_verify(dev);
+	ret = mlx5_ind_table_obj_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some indirection table still remain",
 			dev->data->port_id);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index fcbaaae..955e28f 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -616,8 +616,8 @@ struct mlx5_priv {
 	LIST_HEAD(hrxq, mlx5_hrxq) hrxqs; /* Verbs Hash Rx queues. */
 	LIST_HEAD(txq, mlx5_txq_ctrl) txqsctrl; /* DPDK Tx queues. */
 	LIST_HEAD(txqibv, mlx5_txq_ibv) txqsibv; /* Verbs Tx queues. */
-	/* Verbs Indirection tables. */
-	LIST_HEAD(ind_tables, mlx5_ind_table_ibv) ind_tbls;
+	/* Indirection tables. */
+	LIST_HEAD(ind_tables, mlx5_ind_table_obj) ind_tbls;
 	/* Pointer to next element. */
 	rte_atomic32_t refcnt; /**< Reference counter. */
 	struct ibv_flow_action *verbs_action;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 20a4695..507a1ab 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1576,14 +1576,14 @@ struct mlx5_rxq_ctrl *
  *   Number of queues in the array.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
-static struct mlx5_ind_table_ibv *
-mlx5_ind_table_ibv_new(struct rte_eth_dev *dev, const uint16_t *queues,
+static struct mlx5_ind_table_obj *
+mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
 		       uint32_t queues_n)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 	const unsigned int wq_n = rte_is_power_of_2(queues_n) ?
 		log2above(queues_n) :
 		log2above(priv->config.ind_table_max_size);
@@ -1642,12 +1642,12 @@ struct mlx5_rxq_ctrl *
  * @return
  *   An indirection table if found.
  */
-static struct mlx5_ind_table_ibv *
-mlx5_ind_table_ibv_get(struct rte_eth_dev *dev, const uint16_t *queues,
+static struct mlx5_ind_table_obj *
+mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues,
 		       uint32_t queues_n)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 
 	LIST_FOREACH(ind_tbl, &priv->ind_tbls, next) {
 		if ((ind_tbl->queues_n == queues_n) &&
@@ -1678,8 +1678,8 @@ struct mlx5_rxq_ctrl *
  *   1 while a reference on it exists, 0 when freed.
  */
 static int
-mlx5_ind_table_ibv_release(struct rte_eth_dev *dev,
-			   struct mlx5_ind_table_ibv *ind_tbl)
+mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
+			   struct mlx5_ind_table_obj *ind_tbl)
 {
 	unsigned int i;
 
@@ -1706,15 +1706,15 @@ struct mlx5_rxq_ctrl *
  *   The number of object not released.
  */
 int
-mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev)
+mlx5_ind_table_obj_verify(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 	int ret = 0;
 
 	LIST_FOREACH(ind_tbl, &priv->ind_tbls, next) {
 		DRV_LOG(DEBUG,
-			"port %u Verbs indirection table %p still referenced",
+			"port %u indirection table obj %p still referenced",
 			dev->data->port_id, (void *)ind_tbl);
 		++ret;
 	}
@@ -1752,7 +1752,7 @@ struct mlx5_hrxq *
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_hrxq *hrxq;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 	struct ibv_qp *qp;
 #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
 	struct mlx5dv_qp_init_attr qp_init_attr;
@@ -1760,9 +1760,9 @@ struct mlx5_hrxq *
 	int err;
 
 	queues_n = hash_fields ? queues_n : 1;
-	ind_tbl = mlx5_ind_table_ibv_get(dev, queues, queues_n);
+	ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
 	if (!ind_tbl)
-		ind_tbl = mlx5_ind_table_ibv_new(dev, queues, queues_n);
+		ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
 	if (!ind_tbl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -1844,7 +1844,7 @@ struct mlx5_hrxq *
 	return hrxq;
 error:
 	err = rte_errno; /* Save rte_errno before cleanup. */
-	mlx5_ind_table_ibv_release(dev, ind_tbl);
+	mlx5_ind_table_obj_release(dev, ind_tbl);
 	if (qp)
 		claim_zero(mlx5_glue->destroy_qp(qp));
 	rte_errno = err; /* Restore rte_errno. */
@@ -1878,7 +1878,7 @@ struct mlx5_hrxq *
 
 	queues_n = hash_fields ? queues_n : 1;
 	LIST_FOREACH(hrxq, &priv->hrxqs, next) {
-		struct mlx5_ind_table_ibv *ind_tbl;
+		struct mlx5_ind_table_obj *ind_tbl;
 
 		if (hrxq->rss_key_len != rss_key_len)
 			continue;
@@ -1886,11 +1886,11 @@ struct mlx5_hrxq *
 			continue;
 		if (hrxq->hash_fields != hash_fields)
 			continue;
-		ind_tbl = mlx5_ind_table_ibv_get(dev, queues, queues_n);
+		ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
 		if (!ind_tbl)
 			continue;
 		if (ind_tbl != hrxq->ind_table) {
-			mlx5_ind_table_ibv_release(dev, ind_tbl);
+			mlx5_ind_table_obj_release(dev, ind_tbl);
 			continue;
 		}
 		rte_atomic32_inc(&hrxq->refcnt);
@@ -1918,12 +1918,12 @@ struct mlx5_hrxq *
 		mlx5_glue->destroy_flow_action(hrxq->action);
 #endif
 		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
-		mlx5_ind_table_ibv_release(dev, hrxq->ind_table);
+		mlx5_ind_table_obj_release(dev, hrxq->ind_table);
 		LIST_REMOVE(hrxq, next);
 		rte_free(hrxq);
 		return 0;
 	}
-	claim_nonzero(mlx5_ind_table_ibv_release(dev, hrxq->ind_table));
+	claim_nonzero(mlx5_ind_table_obj_release(dev, hrxq->ind_table));
 	return 1;
 }
 
@@ -2042,15 +2042,15 @@ struct mlx5_hrxq *
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
-static struct mlx5_ind_table_ibv *
-mlx5_ind_table_ibv_drop_new(struct rte_eth_dev *dev)
+static struct mlx5_ind_table_obj *
+mlx5_ind_table_obj_drop_new(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 	struct mlx5_rxq_obj *rxq;
-	struct mlx5_ind_table_ibv tmpl;
+	struct mlx5_ind_table_obj tmpl;
 
 	rxq = mlx5_rxq_obj_drop_new(dev);
 	if (!rxq)
@@ -2088,10 +2088,10 @@ struct mlx5_hrxq *
  *   Pointer to Ethernet device.
  */
 static void
-mlx5_ind_table_ibv_drop_release(struct rte_eth_dev *dev)
+mlx5_ind_table_obj_drop_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl = priv->drop_queue.hrxq->ind_table;
+	struct mlx5_ind_table_obj *ind_tbl = priv->drop_queue.hrxq->ind_table;
 
 	claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
 	mlx5_rxq_obj_drop_release(dev);
@@ -2112,7 +2112,7 @@ struct mlx5_hrxq *
 mlx5_hrxq_drop_new(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 	struct ibv_qp *qp;
 	struct mlx5_hrxq *hrxq;
 
@@ -2120,7 +2120,7 @@ struct mlx5_hrxq *
 		rte_atomic32_inc(&priv->drop_queue.hrxq->refcnt);
 		return priv->drop_queue.hrxq;
 	}
-	ind_tbl = mlx5_ind_table_ibv_drop_new(dev);
+	ind_tbl = mlx5_ind_table_obj_drop_new(dev);
 	if (!ind_tbl)
 		return NULL;
 	qp = mlx5_glue->create_qp_ex(priv->sh->ctx,
@@ -2168,7 +2168,7 @@ struct mlx5_hrxq *
 	return hrxq;
 error:
 	if (ind_tbl)
-		mlx5_ind_table_ibv_drop_release(dev);
+		mlx5_ind_table_obj_drop_release(dev);
 	return NULL;
 }
 
@@ -2189,7 +2189,7 @@ struct mlx5_hrxq *
 		mlx5_glue->destroy_flow_action(hrxq->action);
 #endif
 		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
-		mlx5_ind_table_ibv_drop_release(dev);
+		mlx5_ind_table_obj_drop_release(dev);
 		rte_free(hrxq);
 		priv->drop_queue.hrxq = NULL;
 	}
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index eb20a07..0e7b428 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -176,11 +176,21 @@ struct mlx5_rxq_ctrl {
 	uint16_t dump_file_n; /* Number of dump files. */
 };
 
+enum mlx5_ind_tbl_type {
+	MLX5_IND_TBL_TYPE_IBV,
+	MLX5_IND_TBL_TYPE_DEVX,
+};
+
 /* Indirection table. */
-struct mlx5_ind_table_ibv {
-	LIST_ENTRY(mlx5_ind_table_ibv) next; /* Pointer to the next element. */
+struct mlx5_ind_table_obj {
+	LIST_ENTRY(mlx5_ind_table_obj) next; /* Pointer to the next element. */
 	rte_atomic32_t refcnt; /* Reference counter. */
-	struct ibv_rwq_ind_table *ind_table; /**< Indirection table. */
+	enum mlx5_ind_tbl_type type;
+	RTE_STD_C11
+	union {
+		struct ibv_rwq_ind_table *ind_table; /**< Indirection table. */
+		struct mlx5_devx_obj *rqt; /* DevX RQT object. */
+	};
 	uint32_t queues_n; /**< Number of queues in the list. */
 	uint16_t queues[]; /**< Queue list. */
 };
@@ -189,7 +199,7 @@ struct mlx5_ind_table_ibv {
 struct mlx5_hrxq {
 	LIST_ENTRY(mlx5_hrxq) next; /* Pointer to the next element. */
 	rte_atomic32_t refcnt; /* Reference counter. */
-	struct mlx5_ind_table_ibv *ind_table; /* Indirection table. */
+	struct mlx5_ind_table_obj *ind_table; /* Indirection table. */
 	struct ibv_qp *qp; /* Verbs queue pair. */
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	void *action; /* DV QP action pointer. */
@@ -320,7 +330,7 @@ struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
 int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
-int mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev);
+int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_hrxq *mlx5_hrxq_new(struct rte_eth_dev *dev,
 				const uint8_t *rss_key, uint32_t rss_key_len,
 				uint64_t hash_fields,
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread
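
The indirection table follows the same pattern: release dispatches on
the type field, destroying either the Verbs table or the DevX RQT
(sketch; mlx5_devx_cmd_destroy() assumed as above):

	if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV)
		claim_zero(mlx5_glue->destroy_rwq_ind_table
			   (ind_tbl->ind_table));
	else /* MLX5_IND_TBL_TYPE_DEVX. */
		claim_zero(mlx5_devx_cmd_destroy(ind_tbl->rqt));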

* [dpdk-dev] [PATCH 16/28] net/mlx5: rename hash RxQ verbs to general
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (14 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 15/28] net/mlx5: rename verbs indirection table to obj Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:22   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 17/28] net/mlx5: update queue state modify function Matan Azrad
                   ` (14 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Prepare for introducing the use of the DevX TIR object.
The hash Rx queue is currently created using a Verbs QP only.
The next patches will add the option to create it with a TIR object
using DevX.
This patch renames hrxq_ibv to hrxq wherever relevant, and adds
the DevX items to the relevant structs.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c      | 2 +-
 drivers/net/mlx5/mlx5_rxq.c  | 8 ++++----
 drivers/net/mlx5/mlx5_rxtx.h | 8 ++++++--
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 23ee887..776198f 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -815,7 +815,7 @@ struct mlx5_dev_spawn_data {
 		mlx5_free_shared_ibctx(priv->sh);
 		priv->sh = NULL;
 	}
-	ret = mlx5_hrxq_ibv_verify(dev);
+	ret = mlx5_hrxq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some hash Rx queue still remain",
 			dev->data->port_id);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 507a1ab..d3bc3ee 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1741,7 +1741,7 @@ struct mlx5_rxq_ctrl *
  *   Tunnel type.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
 struct mlx5_hrxq *
 mlx5_hrxq_new(struct rte_eth_dev *dev,
@@ -1937,7 +1937,7 @@ struct mlx5_hrxq *
  *   The number of object not released.
  */
 int
-mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev)
+mlx5_hrxq_verify(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_hrxq *hrxq;
@@ -1945,7 +1945,7 @@ struct mlx5_hrxq *
 
 	LIST_FOREACH(hrxq, &priv->hrxqs, next) {
 		DRV_LOG(DEBUG,
-			"port %u Verbs hash Rx queue %p still referenced",
+			"port %u hash Rx queue %p still referenced",
 			dev->data->port_id, (void *)hrxq);
 		++ret;
 	}
@@ -2106,7 +2106,7 @@ struct mlx5_hrxq *
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
 struct mlx5_hrxq *
 mlx5_hrxq_drop_new(struct rte_eth_dev *dev)
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 0e7b428..f4f5c0d 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -200,7 +200,11 @@ struct mlx5_hrxq {
 	LIST_ENTRY(mlx5_hrxq) next; /* Pointer to the next element. */
 	rte_atomic32_t refcnt; /* Reference counter. */
 	struct mlx5_ind_table_obj *ind_table; /* Indirection table. */
-	struct ibv_qp *qp; /* Verbs queue pair. */
+	RTE_STD_C11
+	union {
+		struct ibv_qp *qp; /* Verbs queue pair. */
+		struct mlx5_devx_obj *tir; /* DevX TIR object. */
+	};
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	void *action; /* DV QP action pointer. */
 #endif
@@ -341,7 +345,7 @@ struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
 				uint64_t hash_fields,
 				const uint16_t *queues, uint32_t queues_n);
 int mlx5_hrxq_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hxrq);
-int mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev);
+int mlx5_hrxq_verify(struct rte_eth_dev *dev);
 struct mlx5_hrxq *mlx5_hrxq_drop_new(struct rte_eth_dev *dev);
 void mlx5_hrxq_drop_release(struct rte_eth_dev *dev);
 uint64_t mlx5_get_rx_port_offloads(struct rte_eth_dev *dev);
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread
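
For hash Rx queues the union member also determines how the DV flow
action is built. A sketch under the assumption that the glue wrappers
dv_create_flow_action_dest_ibv_qp() and
dv_create_flow_action_dest_devx_tir() from earlier in this series are
available:

#ifdef HAVE_IBV_FLOW_DV_SUPPORT
	hrxq->action = (hrxq->ind_table->type == MLX5_IND_TBL_TYPE_IBV) ?
		mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp) :
		mlx5_glue->dv_create_flow_action_dest_devx_tir
			(hrxq->tir->obj);
	if (!hrxq->action)
		goto error;
#endif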

* [dpdk-dev] [PATCH 17/28] net/mlx5: update queue state modify function
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (15 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 16/28] net/mlx5: rename hash RxQ verbs to general Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:22   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 18/28] net/mlx5: store protection domain number on create Matan Azrad
                   ` (13 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Function mlx5_queue_state_modify_primary() was implemented to handle
state changes for queues created using the Verbs API.

This patch updates function mlx5_queue_state_modify_primary() to
support state changes of RQ objects created using the DevX API.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 9bfb002..584da3e 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -788,7 +788,7 @@ enum mlx5_txcmp_code {
 }
 
 /**
- * Modify a Verbs queue state.
+ * Modify a Verbs/DevX queue state.
  * This must be called from the primary process.
  *
  * @param dev
@@ -807,15 +807,34 @@ enum mlx5_txcmp_code {
 	struct mlx5_priv *priv = dev->data->dev_private;
 
 	if (sm->is_wq) {
-		struct ibv_wq_attr mod = {
-			.attr_mask = IBV_WQ_ATTR_STATE,
-			.wq_state = sm->state,
-		};
 		struct mlx5_rxq_data *rxq = (*priv->rxqs)[sm->queue_id];
 		struct mlx5_rxq_ctrl *rxq_ctrl =
 			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-		ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
+		if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
+			struct ibv_wq_attr mod = {
+				.attr_mask = IBV_WQ_ATTR_STATE,
+				.wq_state = sm->state,
+			};
+
+			ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
+		} else { /* rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ. */
+			struct mlx5_devx_modify_rq_attr rq_attr;
+
+			memset(&rq_attr, 0, sizeof(rq_attr));
+			if (sm->state == IBV_WQS_RESET) {
+				rq_attr.rq_state = MLX5_RQC_STATE_ERR;
+				rq_attr.state = MLX5_RQC_STATE_RST;
+			} else if (sm->state == IBV_WQS_RDY) {
+				rq_attr.rq_state = MLX5_RQC_STATE_RST;
+				rq_attr.state = MLX5_RQC_STATE_RDY;
+			} else if (sm->state == IBV_WQS_ERR) {
+				rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+				rq_attr.state = MLX5_RQC_STATE_ERR;
+			}
+			ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq,
+						      &rq_attr);
+		}
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change Rx WQ state to %u  - %s\n",
 					sm->state, strerror(errno));
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 18/28] net/mlx5: store protection domain number on create
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (16 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 17/28] net/mlx5: update queue state modify function Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:21   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 19/28] net/mlx5: func to create Rx verbs completion queue Matan Azrad
                   ` (12 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Function mlx5_alloc_shared_ibctx() allocates a Protection Domain using
the Verbs API, as part of the shared IB device context.
This patch adds reading and storing of the pdn value from the created
PD object, using the DV API.
The pdn value is required when creating a WQ using the DevX API.

This patch also updates function flow_dv_create_counter_stat_mem_mng(),
which uses the pdn value as well.
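
A minimal usage sketch of the new helper, assuming it is called from
mlx5.c where it is defined (i.e. with HAVE_IBV_FLOW_DV_SUPPORT set) and
that sh->pd was already allocated:

	uint32_t pdn = 0;

	if (mlx5_get_pdn(sh->pd, &pdn)) {
		DRV_LOG(ERR, "Fail to extract pdn from PD");
		goto error;
	}
	sh->pdn = pdn;
	/* The cached value is later consumed directly, e.g.: */
	mkey_attr.pd = sh->pdn;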

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c         | 38 ++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h         |  1 +
 drivers/net/mlx5/mlx5_flow_dv.c |  7 +------
 3 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 776198f..b085176 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -270,6 +270,37 @@ struct mlx5_dev_spawn_data {
 }
 
 /**
+ * Extract pdn of PD object using DV API.
+ *
+ * @param[in] pd
+ *   Pointer to the verbs PD object.
+ * @param[out] pdn
+ *   Pointer to the PD object number variable.
+ *
+ * @return
+ *   0 on success, error value otherwise.
+ */
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+static int
+mlx5_get_pdn(struct ibv_pd *pd __rte_unused, uint32_t *pdn __rte_unused)
+{
+	struct mlx5dv_obj obj;
+	struct mlx5dv_pd pd_info;
+	int ret = 0;
+
+	obj.pd.in = pd;
+	obj.pd.out = &pd_info;
+	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD);
+	if (ret) {
+		DRV_LOG(DEBUG, "Fail to get PD object info");
+		return ret;
+	}
+	*pdn = pd_info.pdn;
+	return 0;
+}
+#endif /* HAVE_IBV_FLOW_DV_SUPPORT */
+
+/**
  * Allocate shared IB device context. If there is multiport device the
  * master and representors will share this context, if there is single
  * port dedicated IB device, the context will be used by only given
@@ -357,6 +388,13 @@ struct mlx5_dev_spawn_data {
 		err = ENOMEM;
 		goto error;
 	}
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	err = mlx5_get_pdn(sh->pd, &sh->pdn);
+	if (err) {
+		DRV_LOG(ERR, "Fail to extract pdn from PD");
+		goto error;
+	}
+#endif /* HAVE_IBV_FLOW_DV_SUPPORT */
 	/*
 	 * Once the device is added to the list of memory event
 	 * callback, its global MR cache table cannot be expanded
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 955e28f..1dc8b7c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -525,6 +525,7 @@ struct mlx5_ibv_shared {
 	uint32_t max_port; /* Maximal IB device port index. */
 	struct ibv_context *ctx; /* Verbs/DV context. */
 	struct ibv_pd *pd; /* Protection Domain. */
+	uint32_t pdn; /* Protection Domain number. */
 	uint32_t tdn; /* Transport Domain number. */
 	char ibdev_name[IBV_SYSFS_NAME_MAX]; /* IB device name. */
 	char ibdev_path[IBV_SYSFS_PATH_MAX]; /* IB device path for secondary */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index afaa19c..36696c8 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2319,8 +2319,6 @@ struct field_modify_info modify_tcp[] = {
 {
 	struct mlx5_ibv_shared *sh = ((struct mlx5_priv *)
 					(dev->data->dev_private))->sh;
-	struct mlx5dv_pd dv_pd;
-	struct mlx5dv_obj dv_obj;
 	struct mlx5_devx_mkey_attr mkey_attr;
 	struct mlx5_counter_stats_mem_mng *mem_mng;
 	volatile struct flow_counter_stats *raw_data;
@@ -2344,13 +2342,10 @@ struct field_modify_info modify_tcp[] = {
 		rte_free(mem);
 		return NULL;
 	}
-	dv_obj.pd.in = sh->pd;
-	dv_obj.pd.out = &dv_pd;
-	mlx5_glue->dv_init_obj(&dv_obj, MLX5DV_OBJ_PD);
 	mkey_attr.addr = (uintptr_t)mem;
 	mkey_attr.size = size;
 	mkey_attr.umem_id = mem_mng->umem->umem_id;
-	mkey_attr.pd = dv_pd.pdn;
+	mkey_attr.pd = sh->pdn;
 	mem_mng->dm = mlx5_devx_cmd_mkey_create(sh->ctx, &mkey_attr);
 	if (!mem_mng->dm) {
 		mlx5_glue->devx_umem_dereg(mem_mng->umem);
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 19/28] net/mlx5: func to create Rx verbs completion queue
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (17 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 18/28] net/mlx5: store protection domain number on create Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:23   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 20/28] net/mlx5: function to create Rx verbs work queue Matan Azrad
                   ` (11 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

The Verbs CQ for an RxQ is created inside function mlx5_rxq_obj_new().
This patch moves the CQ creation to the dedicated function
mlx5_ibv_cq_new().
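
After this refactor the caller in mlx5_rxq_obj_new() reduces to the
fragment below (taken from the patch; the CQE count arithmetic is
unchanged, an MPRQ queue still needs one CQE per stride):

	if (mprq_en)
		cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
	else
		cqe_n = wqe_n - 1;
	tmpl->cq = mlx5_ibv_cq_new(dev, priv, rxq_data, cqe_n, tmpl);
	if (!tmpl->cq) {
		DRV_LOG(ERR, "port %u Rx queue %u CQ creation failure",
			dev->data->port_id, idx);
		rte_errno = ENOMEM;
		goto error;
	}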

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 159 +++++++++++++++++++++++++-------------------
 1 file changed, 92 insertions(+), 67 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d3bc3ee..e5015bb 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -831,6 +831,75 @@
 }
 
 /**
+ * Create a CQ Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param priv
+ *   Pointer to device private data.
+ * @param rxq_data
+ *   Pointer to Rx queue data.
+ * @param cqe_n
+ *   Number of CQEs in CQ.
+ * @param rxq_obj
+ *   Pointer to Rx queue object data.
+ *
+ * @return
+ *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_cq *
+mlx5_ibv_cq_new(struct rte_eth_dev *dev, struct mlx5_priv *priv,
+		struct mlx5_rxq_data *rxq_data,
+		unsigned int cqe_n, struct mlx5_rxq_obj *rxq_obj)
+{
+	struct {
+		struct ibv_cq_init_attr_ex ibv;
+		struct mlx5dv_cq_init_attr mlx5;
+	} cq_attr;
+
+	cq_attr.ibv = (struct ibv_cq_init_attr_ex){
+		.cqe = cqe_n,
+		.channel = rxq_obj->channel,
+		.comp_mask = 0,
+	};
+	cq_attr.mlx5 = (struct mlx5dv_cq_init_attr){
+		.comp_mask = 0,
+	};
+	if (priv->config.cqe_comp && !rxq_data->hw_timestamp) {
+		cq_attr.mlx5.comp_mask |=
+				MLX5DV_CQ_INIT_ATTR_MASK_COMPRESSED_CQE;
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+		cq_attr.mlx5.cqe_comp_res_format =
+				mlx5_rxq_mprq_enabled(rxq_data) ?
+				MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX :
+				MLX5DV_CQE_RES_FORMAT_HASH;
+#else
+		cq_attr.mlx5.cqe_comp_res_format = MLX5DV_CQE_RES_FORMAT_HASH;
+#endif
+		/*
+		 * For vectorized Rx, it must not be doubled in order to
+		 * make cq_ci and rq_ci aligned.
+		 */
+		if (mlx5_rxq_check_vec_support(rxq_data) < 0)
+			cq_attr.ibv.cqe *= 2;
+	} else if (priv->config.cqe_comp && rxq_data->hw_timestamp) {
+		DRV_LOG(DEBUG,
+			"port %u Rx CQE compression is disabled for HW"
+			" timestamp",
+			dev->data->port_id);
+	}
+#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
+	if (priv->config.cqe_pad) {
+		cq_attr.mlx5.comp_mask |= MLX5DV_CQ_INIT_ATTR_MASK_FLAGS;
+		cq_attr.mlx5.flags |= MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD;
+	}
+#endif
+	return mlx5_glue->cq_ex_to_cq(mlx5_glue->dv_create_cq(priv->sh->ctx,
+							      &cq_attr.ibv,
+							      &cq_attr.mlx5));
+}
+
+/**
  * Create the Rx queue Verbs/DevX object.
  *
  * @param dev
@@ -849,18 +918,12 @@ struct mlx5_rxq_obj *
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	struct ibv_wq_attr mod;
-	union {
-		struct {
-			struct ibv_cq_init_attr_ex ibv;
-			struct mlx5dv_cq_init_attr mlx5;
-		} cq;
-		struct {
-			struct ibv_wq_init_attr ibv;
+	struct {
+		struct ibv_wq_init_attr ibv;
 #ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-			struct mlx5dv_wq_init_attr mlx5;
+		struct mlx5dv_wq_init_attr mlx5;
 #endif
-		} wq;
-	} attr;
+	} wq_attr;
 	unsigned int cqe_n;
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
 	struct mlx5_rxq_obj *tmpl = NULL;
@@ -872,7 +935,7 @@ struct mlx5_rxq_obj *
 	const int mprq_en = mlx5_rxq_mprq_enabled(rxq_data);
 
 	assert(rxq_data);
-	assert(!rxq_ctrl->ibv);
+	assert(!rxq_ctrl->obj);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
 	priv->verbs_alloc_ctx.obj = rxq_ctrl;
 	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
@@ -898,46 +961,8 @@ struct mlx5_rxq_obj *
 		cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
 	else
 		cqe_n = wqe_n  - 1;
-	attr.cq.ibv = (struct ibv_cq_init_attr_ex){
-		.cqe = cqe_n,
-		.channel = tmpl->channel,
-		.comp_mask = 0,
-	};
-	attr.cq.mlx5 = (struct mlx5dv_cq_init_attr){
-		.comp_mask = 0,
-	};
-	if (config->cqe_comp && !rxq_data->hw_timestamp) {
-		attr.cq.mlx5.comp_mask |=
-			MLX5DV_CQ_INIT_ATTR_MASK_COMPRESSED_CQE;
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-		attr.cq.mlx5.cqe_comp_res_format =
-			mprq_en ? MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX :
-				  MLX5DV_CQE_RES_FORMAT_HASH;
-#else
-		attr.cq.mlx5.cqe_comp_res_format = MLX5DV_CQE_RES_FORMAT_HASH;
-#endif
-		/*
-		 * For vectorized Rx, it must not be doubled in order to
-		 * make cq_ci and rq_ci aligned.
-		 */
-		if (mlx5_rxq_check_vec_support(rxq_data) < 0)
-			attr.cq.ibv.cqe *= 2;
-	} else if (config->cqe_comp && rxq_data->hw_timestamp) {
-		DRV_LOG(DEBUG,
-			"port %u Rx CQE compression is disabled for HW"
-			" timestamp",
-			dev->data->port_id);
-	}
-#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
-	if (config->cqe_pad) {
-		attr.cq.mlx5.comp_mask |= MLX5DV_CQ_INIT_ATTR_MASK_FLAGS;
-		attr.cq.mlx5.flags |= MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD;
-	}
-#endif
-	tmpl->cq = mlx5_glue->cq_ex_to_cq
-		(mlx5_glue->dv_create_cq(priv->sh->ctx, &attr.cq.ibv,
-					 &attr.cq.mlx5));
-	if (tmpl->cq == NULL) {
+	tmpl->cq = mlx5_ibv_cq_new(dev, priv, rxq_data, cqe_n, tmpl);
+	if (!tmpl->cq) {
 		DRV_LOG(ERR, "port %u Rx queue %u CQ creation failure",
 			dev->data->port_id, idx);
 		rte_errno = ENOMEM;
@@ -947,7 +972,7 @@ struct mlx5_rxq_obj *
 		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
 	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
 		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
-	attr.wq.ibv = (struct ibv_wq_init_attr){
+	wq_attr.ibv = (struct ibv_wq_init_attr){
 		.wq_context = NULL, /* Could be useful in the future. */
 		.wq_type = IBV_WQT_RQ,
 		/* Max number of outstanding WRs. */
@@ -965,37 +990,37 @@ struct mlx5_rxq_obj *
 	};
 	/* By default, FCS (CRC) is stripped by hardware. */
 	if (rxq_data->crc_present) {
-		attr.wq.ibv.create_flags |= IBV_WQ_FLAGS_SCATTER_FCS;
-		attr.wq.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_SCATTER_FCS;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
 	}
 	if (config->hw_padding) {
 #if defined(HAVE_IBV_WQ_FLAG_RX_END_PADDING)
-		attr.wq.ibv.create_flags |= IBV_WQ_FLAG_RX_END_PADDING;
-		attr.wq.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAG_RX_END_PADDING;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
 #elif defined(HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING)
-		attr.wq.ibv.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
-		attr.wq.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
 #endif
 	}
 #ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-	attr.wq.mlx5 = (struct mlx5dv_wq_init_attr){
+	wq_attr.mlx5 = (struct mlx5dv_wq_init_attr){
 		.comp_mask = 0,
 	};
 	if (mprq_en) {
 		struct mlx5dv_striding_rq_init_attr *mprq_attr =
-			&attr.wq.mlx5.striding_rq_attrs;
+			&wq_attr.mlx5.striding_rq_attrs;
 
-		attr.wq.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
+		wq_attr.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
 		*mprq_attr = (struct mlx5dv_striding_rq_init_attr){
 			.single_stride_log_num_of_bytes = rxq_data->strd_sz_n,
 			.single_wqe_log_num_of_strides = rxq_data->strd_num_n,
 			.two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT,
 		};
 	}
-	tmpl->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &attr.wq.ibv,
-					   &attr.wq.mlx5);
+	tmpl->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &wq_attr.ibv,
+					   &wq_attr.mlx5);
 #else
-	tmpl->wq = mlx5_glue->create_wq(priv->sh->ctx, &attr.wq.ibv);
+	tmpl->wq = mlx5_glue->create_wq(priv->sh->ctx, &wq_attr.ibv);
 #endif
 	if (tmpl->wq == NULL) {
 		DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure",
@@ -1007,14 +1032,14 @@ struct mlx5_rxq_obj *
 	 * Make sure number of WRs*SGEs match expectations since a queue
 	 * cannot allocate more than "desc" buffers.
 	 */
-	if (attr.wq.ibv.max_wr != (wqe_n >> rxq_data->sges_n) ||
-	    attr.wq.ibv.max_sge != (1u << rxq_data->sges_n)) {
+	if (wq_attr.ibv.max_wr != (wqe_n >> rxq_data->sges_n) ||
+	    wq_attr.ibv.max_sge != (1u << rxq_data->sges_n)) {
 		DRV_LOG(ERR,
 			"port %u Rx queue %u requested %u*%u but got %u*%u"
 			" WRs*SGEs",
 			dev->data->port_id, idx,
 			wqe_n >> rxq_data->sges_n, (1 << rxq_data->sges_n),
-			attr.wq.ibv.max_wr, attr.wq.ibv.max_sge);
+			wq_attr.ibv.max_wr, wq_attr.ibv.max_sge);
 		rte_errno = EINVAL;
 		goto error;
 	}
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 20/28] net/mlx5: function to create Rx verbs work queue
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (18 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 19/28] net/mlx5: func to create Rx verbs completion queue Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:21   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 21/28] net/mlx5: create advanced RxQ using new API Matan Azrad
                   ` (10 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

The Verbs WQ for an RxQ is created inside function mlx5_rxq_obj_new().
This patch moves the Verbs WQ creation to the dedicated function
mlx5_ibv_wq_new().
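
The caller side in mlx5_rxq_obj_new() then becomes the fragment below
(taken from the patch); note the WRs*SGEs validation now happens inside
the helper, which destroys the WQ itself on a mismatch:

	tmpl->wq = mlx5_ibv_wq_new(dev, priv, rxq_data, idx, wqe_n, tmpl);
	if (!tmpl->wq) {
		DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure",
			dev->data->port_id, idx);
		rte_errno = ENOMEM;
		goto error;
	}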

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 178 +++++++++++++++++++++++++-------------------
 1 file changed, 103 insertions(+), 75 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e5015bb..9d859df 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -900,6 +900,106 @@
 }
 
 /**
+ * Create a WQ Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param priv
+ *   Pointer to device private data.
+ * @param rxq_data
+ *   Pointer to Rx queue data.
+ * @param idx
+ *   Queue index in DPDK Rx queue array
+ * @param wqe_n
+ *   Number of WQEs in WQ.
+ * @param rxq_obj
+ *   Pointer to Rx queue object data.
+ *
+ * @return
+ *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_wq *
+mlx5_ibv_wq_new(struct rte_eth_dev *dev, struct mlx5_priv *priv,
+		struct mlx5_rxq_data *rxq_data, uint16_t idx,
+		unsigned int wqe_n, struct mlx5_rxq_obj *rxq_obj)
+{
+	struct {
+		struct ibv_wq_init_attr ibv;
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+		struct mlx5dv_wq_init_attr mlx5;
+#endif
+	} wq_attr;
+
+	wq_attr.ibv = (struct ibv_wq_init_attr){
+		.wq_context = NULL, /* Could be useful in the future. */
+		.wq_type = IBV_WQT_RQ,
+		/* Max number of outstanding WRs. */
+		.max_wr = wqe_n >> rxq_data->sges_n,
+		/* Max number of scatter/gather elements in a WR. */
+		.max_sge = 1 << rxq_data->sges_n,
+		.pd = priv->sh->pd,
+		.cq = rxq_obj->cq,
+		.comp_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING | 0,
+		.create_flags = (rxq_data->vlan_strip ?
+				 IBV_WQ_FLAGS_CVLAN_STRIPPING : 0),
+	};
+	/* By default, FCS (CRC) is stripped by hardware. */
+	if (rxq_data->crc_present) {
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_SCATTER_FCS;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+	}
+	if (priv->config.hw_padding) {
+#if defined(HAVE_IBV_WQ_FLAG_RX_END_PADDING)
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAG_RX_END_PADDING;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+#elif defined(HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING)
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+#endif
+	}
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+	wq_attr.mlx5 = (struct mlx5dv_wq_init_attr){
+		.comp_mask = 0,
+	};
+	if (mlx5_rxq_mprq_enabled(rxq_data)) {
+		struct mlx5dv_striding_rq_init_attr *mprq_attr =
+						&wq_attr.mlx5.striding_rq_attrs;
+
+		wq_attr.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
+		*mprq_attr = (struct mlx5dv_striding_rq_init_attr){
+			.single_stride_log_num_of_bytes = rxq_data->strd_sz_n,
+			.single_wqe_log_num_of_strides = rxq_data->strd_num_n,
+			.two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT,
+		};
+	}
+	rxq_obj->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &wq_attr.ibv,
+					      &wq_attr.mlx5);
+#else
+	rxq_obj->wq = mlx5_glue->create_wq(priv->sh->ctx, &wq_attr.ibv);
+#endif
+	if (rxq_obj->wq) {
+		/*
+		 * Make sure number of WRs*SGEs match expectations since a queue
+		 * cannot allocate more than "desc" buffers.
+		 */
+		if (wq_attr.ibv.max_wr != (wqe_n >> rxq_data->sges_n) ||
+		    wq_attr.ibv.max_sge != (1u << rxq_data->sges_n)) {
+			DRV_LOG(ERR,
+				"port %u Rx queue %u requested %u*%u but got"
+				" %u*%u WRs*SGEs",
+				dev->data->port_id, idx,
+				wqe_n >> rxq_data->sges_n,
+				(1 << rxq_data->sges_n),
+				wq_attr.ibv.max_wr, wq_attr.ibv.max_sge);
+			claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+			rxq_obj->wq = NULL;
+			rte_errno = EINVAL;
+		}
+	}
+	return rxq_obj->wq;
+}
+
+/**
  * Create the Rx queue Verbs/DevX object.
  *
  * @param dev
@@ -918,12 +1018,6 @@ struct mlx5_rxq_obj *
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	struct ibv_wq_attr mod;
-	struct {
-		struct ibv_wq_init_attr ibv;
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-		struct mlx5dv_wq_init_attr mlx5;
-#endif
-	} wq_attr;
 	unsigned int cqe_n;
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
 	struct mlx5_rxq_obj *tmpl = NULL;
@@ -931,8 +1025,6 @@ struct mlx5_rxq_obj *
 	struct mlx5dv_rwq rwq;
 	int ret = 0;
 	struct mlx5dv_obj obj;
-	struct mlx5_dev_config *config = &priv->config;
-	const int mprq_en = mlx5_rxq_mprq_enabled(rxq_data);
 
 	assert(rxq_data);
 	assert(!rxq_ctrl->obj);
@@ -957,7 +1049,7 @@ struct mlx5_rxq_obj *
 			goto error;
 		}
 	}
-	if (mprq_en)
+	if (mlx5_rxq_mprq_enabled(rxq_data))
 		cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
 	else
 		cqe_n = wqe_n  - 1;
@@ -972,77 +1064,13 @@ struct mlx5_rxq_obj *
 		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
 	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
 		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
-	wq_attr.ibv = (struct ibv_wq_init_attr){
-		.wq_context = NULL, /* Could be useful in the future. */
-		.wq_type = IBV_WQT_RQ,
-		/* Max number of outstanding WRs. */
-		.max_wr = wqe_n >> rxq_data->sges_n,
-		/* Max number of scatter/gather elements in a WR. */
-		.max_sge = 1 << rxq_data->sges_n,
-		.pd = priv->sh->pd,
-		.cq = tmpl->cq,
-		.comp_mask =
-			IBV_WQ_FLAGS_CVLAN_STRIPPING |
-			0,
-		.create_flags = (rxq_data->vlan_strip ?
-				 IBV_WQ_FLAGS_CVLAN_STRIPPING :
-				 0),
-	};
-	/* By default, FCS (CRC) is stripped by hardware. */
-	if (rxq_data->crc_present) {
-		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_SCATTER_FCS;
-		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
-	}
-	if (config->hw_padding) {
-#if defined(HAVE_IBV_WQ_FLAG_RX_END_PADDING)
-		wq_attr.ibv.create_flags |= IBV_WQ_FLAG_RX_END_PADDING;
-		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
-#elif defined(HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING)
-		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
-		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
-#endif
-	}
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-	wq_attr.mlx5 = (struct mlx5dv_wq_init_attr){
-		.comp_mask = 0,
-	};
-	if (mprq_en) {
-		struct mlx5dv_striding_rq_init_attr *mprq_attr =
-			&wq_attr.mlx5.striding_rq_attrs;
-
-		wq_attr.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
-		*mprq_attr = (struct mlx5dv_striding_rq_init_attr){
-			.single_stride_log_num_of_bytes = rxq_data->strd_sz_n,
-			.single_wqe_log_num_of_strides = rxq_data->strd_num_n,
-			.two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT,
-		};
-	}
-	tmpl->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &wq_attr.ibv,
-					   &wq_attr.mlx5);
-#else
-	tmpl->wq = mlx5_glue->create_wq(priv->sh->ctx, &wq_attr.ibv);
-#endif
-	if (tmpl->wq == NULL) {
+	tmpl->wq = mlx5_ibv_wq_new(dev, priv, rxq_data, idx, wqe_n, tmpl);
+	if (!tmpl->wq) {
 		DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure",
 			dev->data->port_id, idx);
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	/*
-	 * Make sure number of WRs*SGEs match expectations since a queue
-	 * cannot allocate more than "desc" buffers.
-	 */
-	if (wq_attr.ibv.max_wr != (wqe_n >> rxq_data->sges_n) ||
-	    wq_attr.ibv.max_sge != (1u << rxq_data->sges_n)) {
-		DRV_LOG(ERR,
-			"port %u Rx queue %u requested %u*%u but got %u*%u"
-			" WRs*SGEs",
-			dev->data->port_id, idx,
-			wqe_n >> rxq_data->sges_n, (1 << rxq_data->sges_n),
-			wq_attr.ibv.max_wr, wq_attr.ibv.max_sge);
-		rte_errno = EINVAL;
-		goto error;
-	}
 	/* Change queue state to ready. */
 	mod = (struct ibv_wq_attr){
 		.attr_mask = IBV_WQ_ATTR_STATE,
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 21/28] net/mlx5: create advanced RxQ using new API
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (19 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 20/28] net/mlx5: function to create Rx verbs work queue Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:21   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 22/28] net/mlx5: support LRO with single RxQ object Matan Azrad
                   ` (9 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Function mlx5_rxq_obj_new(), previously called mlx5_rxq_ibv_new(),
supports creating Rx queue objects using Verbs.
This patch expands the relevant functions to support creating either
Verbs or DevX Rx queue objects:
Function mlx5_rxq_obj_new() is updated to create an RQ object using
DevX.
Function mlx5_ind_table_obj_new() is updated to create an RQT object
using DevX.
Function mlx5_hrxq_new() is updated to create a TIR object using DevX.
New utility functions are added to perform specific operations:
mlx5_devx_rq_new(), mlx5_devx_wq_attr_fill() and
mlx5_devx_create_rq_attr_fill().
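
As a worked example of the WQ sizing logic in mlx5_devx_rq_new(),
consider a non-MPRQ queue with elts_n = 10 and sges_n = 0, and assume
sizeof(struct mlx5_wqe_data_seg) == 16:

	wqe_n        = 1 << 10 = 1024
	wqe_size     = 16 * RTE_MAX(1 << 0, 1) = 16 bytes
	log_wqe_size = log2above(16) = 4
	wq_size      = 1024 * (1 << 4) = 16384 bytes

i.e. a 16KB ring is allocated with rte_calloc_socket() and registered
with mlx5_glue->devx_umem_reg() so the HW can access it, before the RQ
itself is created with mlx5_devx_cmd_create_rq(). Callers now select
the object type explicitly, e.g. mlx5_rxq_obj_new(dev, i,
MLX5_RXQ_OBJ_TYPE_DEVX_RQ) in mlx5_trigger.c.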

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_glue.h    |   2 +-
 drivers/net/mlx5/mlx5_rxq.c     | 550 +++++++++++++++++++++++++++++++---------
 drivers/net/mlx5/mlx5_rxtx.h    |   9 +-
 drivers/net/mlx5/mlx5_trigger.c |   3 +-
 drivers/net/mlx5/mlx5_vlan.c    |  30 ++-
 5 files changed, 456 insertions(+), 138 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_glue.h b/drivers/net/mlx5/mlx5_glue.h
index f8e2b9a..6b5dadf 100644
--- a/drivers/net/mlx5/mlx5_glue.h
+++ b/drivers/net/mlx5/mlx5_glue.h
@@ -61,7 +61,7 @@
 
 #ifndef HAVE_IBV_DEVX_OBJ
 struct mlx5dv_devx_obj;
-struct mlx5dv_devx_umem;
+struct mlx5dv_devx_umem { uint32_t umem_id; };
 #endif
 
 #ifndef HAVE_IBV_DEVX_ASYNC
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 9d859df..9712db4 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -561,6 +561,23 @@
 }
 
 /**
+ * Release the resources allocated for an RQ DevX object.
+ *
+ * @param rxq_ctrl
+ *   DevX Rx queue object.
+ */
+static void
+rxq_release_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+{
+	if (rxq_ctrl->rxq.wqes) {
+		rte_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
+		rxq_ctrl->rxq.wqes = NULL;
+	}
+	if (rxq_ctrl->wq_umem)
+		mlx5_glue->devx_umem_dereg(rxq_ctrl->wq_umem);
+}
+
+/**
  * Release an Rx verbs/DevX queue object.
  *
  * @param rxq_obj
@@ -573,11 +590,17 @@
 mlx5_rxq_obj_release(struct mlx5_rxq_obj *rxq_obj)
 {
 	assert(rxq_obj);
-	assert(rxq_obj->wq);
+	if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_IBV)
+		assert(rxq_obj->wq);
 	assert(rxq_obj->cq);
 	if (rte_atomic32_dec_and_test(&rxq_obj->refcnt)) {
 		rxq_free_elts(rxq_obj->rxq_ctrl);
-		claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+		if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
+			claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+		} else if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
+			claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
+			rxq_release_rq_resources(rxq_obj->rxq_ctrl);
+		}
 		claim_zero(mlx5_glue->destroy_cq(rxq_obj->cq));
 		if (rxq_obj->channel)
 			claim_zero(mlx5_glue->destroy_comp_channel
@@ -1000,18 +1023,147 @@
 }
 
 /**
+ * Fill common fields of create RQ attributes structure.
+ *
+ * @param rxq_data
+ *   Pointer to Rx queue data.
+ * @param cqn
+ *   CQ number to use with this RQ.
+ * @param rq_attr
+ *   RQ attributes structure to fill.
+ */
+static void
+mlx5_devx_create_rq_attr_fill(struct mlx5_rxq_data *rxq_data, uint32_t cqn,
+			      struct mlx5_devx_create_rq_attr *rq_attr)
+{
+	rq_attr->state = MLX5_RQC_STATE_RST;
+	rq_attr->vsd = (rxq_data->vlan_strip) ? 0 : 1;
+	rq_attr->cqn = cqn;
+	rq_attr->scatter_fcs = (rxq_data->crc_present) ? 1 : 0;
+}
+
+/**
+ * Fill common fields of DevX WQ attributes structure.
+ *
+ * @param priv
+ *   Pointer to device private data.
+ * @param rxq_ctrl
+ *   Pointer to Rx queue control structure.
+ * @param wq_attr
+ *   WQ attributes structure to fill.
+ */
+static void
+mlx5_devx_wq_attr_fill(struct mlx5_priv *priv, struct mlx5_rxq_ctrl *rxq_ctrl,
+		       struct mlx5_devx_wq_attr *wq_attr)
+{
+	wq_attr->end_padding_mode = priv->config.cqe_pad ?
+					MLX5_WQ_END_PAD_MODE_ALIGN :
+					MLX5_WQ_END_PAD_MODE_NONE;
+	wq_attr->pd = priv->sh->pdn;
+	wq_attr->dbr_addr = rxq_ctrl->dbr_offset;
+	wq_attr->dbr_umem_id = rxq_ctrl->dbr_umem_id;
+	wq_attr->dbr_umem_valid = 1;
+	wq_attr->wq_umem_id = rxq_ctrl->wq_umem->umem_id;
+	wq_attr->wq_umem_valid = 1;
+}
+
+/**
+ * Create a RQ object using DevX.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Rx queue array
+ * @param cqn
+ *   CQ number to use with this RQ.
+ *
+ * @return
+ *   The DevX object initialised, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_devx_obj *
+mlx5_devx_rq_new(struct rte_eth_dev *dev, uint16_t idx, uint32_t cqn)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
+	struct mlx5_rxq_ctrl *rxq_ctrl =
+		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_devx_create_rq_attr rq_attr;
+	uint32_t wqe_n = 1 << rxq_data->elts_n;
+	uint32_t wq_size = 0;
+	uint32_t wqe_size = 0;
+	uint32_t log_wqe_size = 0;
+	void *buf = NULL;
+	struct mlx5_devx_obj *rq;
+
+	memset(&rq_attr, 0, sizeof(rq_attr));
+	/* Fill RQ attributes. */
+	rq_attr.mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE;
+	rq_attr.flush_in_error_en = 1;
+	mlx5_devx_create_rq_attr_fill(rxq_data, cqn, &rq_attr);
+	/* Fill WQ attributes for this RQ. */
+	if (mlx5_rxq_mprq_enabled(rxq_data)) {
+		rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
+		/*
+		 * Number of strides in each WQE:
+		 * 512*2^single_wqe_log_num_of_strides.
+		 */
+		rq_attr.wq_attr.single_wqe_log_num_of_strides =
+				rxq_data->strd_num_n -
+				MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES;
+		/* Stride size = (2^single_stride_log_num_of_bytes)*64B. */
+		rq_attr.wq_attr.single_stride_log_num_of_bytes =
+				rxq_data->strd_sz_n -
+				MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES;
+		wqe_size = sizeof(struct mlx5_wqe_mprq);
+	} else {
+		int max_sge = 0;
+		int num_scatter = 0;
+
+		rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
+		max_sge = 1 << rxq_data->sges_n;
+		num_scatter = RTE_MAX(max_sge, 1);
+		wqe_size = sizeof(struct mlx5_wqe_data_seg) * num_scatter;
+	}
+	log_wqe_size = log2above(wqe_size);
+	rq_attr.wq_attr.log_wq_stride = log_wqe_size;
+	rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n;
+	/* Calculate and allocate WQ memory space. */
+	wqe_size = 1 << log_wqe_size; /* Round up to power of two. */
+	wq_size = wqe_n * wqe_size;
+	buf = rte_calloc_socket(__func__, 1, wq_size, RTE_CACHE_LINE_SIZE,
+				rxq_ctrl->socket);
+	if (!buf)
+		return NULL;
+	rxq_data->wqes = buf;
+	rxq_ctrl->wq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
+						     buf, wq_size, 0);
+	if (!rxq_ctrl->wq_umem) {
+		rte_free(buf);
+		return NULL;
+	}
+	mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
+	rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &rq_attr, rxq_ctrl->socket);
+	if (!rq)
+		rxq_release_rq_resources(rxq_ctrl);
+	return rq;
+}
+
+/**
  * Create the Rx queue Verbs/DevX object.
  *
  * @param dev
  *   Pointer to Ethernet device.
  * @param idx
  *   Queue index in DPDK Rx queue array
+ * @param type
+ *   Type of Rx queue object to create.
  *
  * @return
  *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
 struct mlx5_rxq_obj *
-mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
+		 enum mlx5_rxq_obj_type type)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
@@ -1039,6 +1191,7 @@ struct mlx5_rxq_obj *
 		rte_errno = ENOMEM;
 		goto error;
 	}
+	tmpl->type = type;
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq) {
 		tmpl->channel = mlx5_glue->create_comp_channel(priv->sh->ctx);
@@ -1060,35 +1213,9 @@ struct mlx5_rxq_obj *
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
-	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
-	tmpl->wq = mlx5_ibv_wq_new(dev, priv, rxq_data, idx, wqe_n, tmpl);
-	if (!tmpl->wq) {
-		DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure",
-			dev->data->port_id, idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Change queue state to ready. */
-	mod = (struct ibv_wq_attr){
-		.attr_mask = IBV_WQ_ATTR_STATE,
-		.wq_state = IBV_WQS_RDY,
-	};
-	ret = mlx5_glue->modify_wq(tmpl->wq, &mod);
-	if (ret) {
-		DRV_LOG(ERR,
-			"port %u Rx queue %u WQ state to IBV_WQS_RDY failed",
-			dev->data->port_id, idx);
-		rte_errno = ret;
-		goto error;
-	}
 	obj.cq.in = tmpl->cq;
 	obj.cq.out = &cq_info;
-	obj.rwq.in = tmpl->wq;
-	obj.rwq.out = &rwq;
-	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ | MLX5DV_OBJ_RWQ);
+	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ);
 	if (ret) {
 		rte_errno = ret;
 		goto error;
@@ -1101,9 +1228,76 @@ struct mlx5_rxq_obj *
 		rte_errno = EINVAL;
 		goto error;
 	}
+	DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
+		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
+	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
+		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
+	/* Allocate door-bell for types created with DevX. */
+	if (tmpl->type != MLX5_RXQ_OBJ_TYPE_IBV) {
+		struct mlx5_devx_dbr_page *dbr_page;
+		int64_t dbr_offset;
+
+		dbr_offset = mlx5_get_dbr(dev, &dbr_page);
+		if (dbr_offset < 0)
+			goto error;
+		rxq_ctrl->dbr_offset = dbr_offset;
+		rxq_ctrl->dbr_umem_id = dbr_page->umem->umem_id;
+		rxq_data->rq_db = (uint32_t *)(uintptr_t)RTE_PTR_ADD
+							(dbr_page->dbrs,
+							 rxq_ctrl->dbr_offset);
+	}
+	if (tmpl->type == MLX5_RXQ_OBJ_TYPE_IBV) {
+		tmpl->wq = mlx5_ibv_wq_new(dev, priv, rxq_data, idx, wqe_n,
+					   tmpl);
+		if (!tmpl->wq) {
+			DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure",
+				dev->data->port_id, idx);
+			rte_errno = ENOMEM;
+			goto error;
+		}
+		/* Change queue state to ready. */
+		mod = (struct ibv_wq_attr){
+			.attr_mask = IBV_WQ_ATTR_STATE,
+			.wq_state = IBV_WQS_RDY,
+		};
+		ret = mlx5_glue->modify_wq(tmpl->wq, &mod);
+		if (ret) {
+			DRV_LOG(ERR,
+				"port %u Rx queue %u WQ state to IBV_WQS_RDY"
+				" failed", dev->data->port_id, idx);
+			rte_errno = ret;
+			goto error;
+		}
+		obj.rwq.in = tmpl->wq;
+		obj.rwq.out = &rwq;
+		ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_RWQ);
+		if (ret) {
+			rte_errno = ret;
+			goto error;
+		}
+		rxq_data->wqes = rwq.buf;
+		rxq_data->rq_db = rwq.dbrec;
+	} else if (tmpl->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
+		struct mlx5_devx_modify_rq_attr rq_attr;
+
+		memset(&rq_attr, 0, sizeof(rq_attr));
+		tmpl->rq = mlx5_devx_rq_new(dev, idx, cq_info.cqn);
+		if (!tmpl->rq) {
+			DRV_LOG(ERR, "port %u Rx queue %u RQ creation failure",
+				dev->data->port_id, idx);
+			rte_errno = ENOMEM;
+			goto error;
+		}
+		/* Change queue state to ready. */
+		rq_attr.rq_state = MLX5_RQC_STATE_RST;
+		rq_attr.state = MLX5_RQC_STATE_RDY;
+		ret = mlx5_devx_cmd_modify_rq(tmpl->rq, &rq_attr);
+		if (ret)
+			goto error;
+	}
 	/* Fill the rings. */
-	rxq_data->wqes = rwq.buf;
-	rxq_data->rq_db = rwq.dbrec;
 	rxq_data->cqe_n = log2above(cq_info.cqe_cnt);
 	rxq_data->cq_db = cq_info.dbrec;
 	rxq_data->cqes = (volatile struct mlx5_cqe (*)[])(uintptr_t)cq_info.buf;
@@ -1121,8 +1315,10 @@ struct mlx5_rxq_obj *
 error:
 	if (tmpl) {
 		ret = rte_errno; /* Save rte_errno before cleanup. */
-		if (tmpl->wq)
+		if (tmpl->type == MLX5_RXQ_OBJ_TYPE_IBV && tmpl->wq)
 			claim_zero(mlx5_glue->destroy_wq(tmpl->wq));
+		else if (tmpl->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ && tmpl->rq)
+			claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
 		if (tmpl->cq)
 			claim_zero(mlx5_glue->destroy_cq(tmpl->cq));
 		if (tmpl->channel)
@@ -1131,6 +1327,8 @@ struct mlx5_rxq_obj *
 		rte_free(tmpl);
 		rte_errno = ret; /* Restore rte_errno. */
 	}
+	if (type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ)
+		rxq_release_rq_resources(rxq_ctrl);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
 	return NULL;
 }
@@ -1585,6 +1783,8 @@ struct mlx5_rxq_ctrl *
 	if (rxq_ctrl->obj && !mlx5_rxq_obj_release(rxq_ctrl->obj))
 		rxq_ctrl->obj = NULL;
 	if (rte_atomic32_dec_and_test(&rxq_ctrl->refcnt)) {
+		claim_zero(mlx5_release_dbr(dev, rxq_ctrl->dbr_umem_id,
+					    rxq_ctrl->dbr_offset));
 		mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 		LIST_REMOVE(rxq_ctrl, next);
 		rte_free(rxq_ctrl);
@@ -1633,16 +1833,11 @@ struct mlx5_rxq_ctrl *
  */
 static struct mlx5_ind_table_obj *
 mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
-		       uint32_t queues_n)
+		       uint32_t queues_n, enum mlx5_ind_tbl_type type)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_ind_table_obj *ind_tbl;
-	const unsigned int wq_n = rte_is_power_of_2(queues_n) ?
-		log2above(queues_n) :
-		log2above(priv->config.ind_table_max_size);
-	struct ibv_wq *wq[1 << wq_n];
-	unsigned int i;
-	unsigned int j;
+	unsigned int i = 0, j = 0, k = 0;
 
 	ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl) +
 			     queues_n * sizeof(uint16_t), 0);
@@ -1650,33 +1845,75 @@ struct mlx5_rxq_ctrl *
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev, queues[i]);
+	ind_tbl->type = type;
+	if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV) {
+		const unsigned int wq_n = rte_is_power_of_2(queues_n) ?
+			log2above(queues_n) :
+			log2above(priv->config.ind_table_max_size);
+		struct ibv_wq *wq[1 << wq_n];
+
+		for (i = 0; i != queues_n; ++i) {
+			struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev,
+								 queues[i]);
+			if (!rxq)
+				goto error;
+			wq[i] = rxq->obj->wq;
+			ind_tbl->queues[i] = queues[i];
+		}
+		ind_tbl->queues_n = queues_n;
+		/* Finalise indirection table. */
+		k = i; /* Retain value of i for use in error case. */
+		for (j = 0; k != (unsigned int)(1 << wq_n); ++k, ++j)
+			wq[k] = wq[j];
+		ind_tbl->ind_table = mlx5_glue->create_rwq_ind_table
+			(priv->sh->ctx,
+			 &(struct ibv_rwq_ind_table_init_attr){
+				.log_ind_tbl_size = wq_n,
+				.ind_tbl = wq,
+				.comp_mask = 0,
+			});
+		if (!ind_tbl->ind_table) {
+			rte_errno = errno;
+			goto error;
+		}
+	} else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
+		struct mlx5_devx_rqt_attr *rqt_attr = NULL;
 
-		if (!rxq)
+		rqt_attr = rte_calloc(__func__, 1, sizeof(*rqt_attr) +
+				      queues_n * sizeof(uint16_t), 0);
+		if (!rqt_attr) {
+			DRV_LOG(ERR, "port %u cannot allocate RQT resources",
+				dev->data->port_id);
+			rte_errno = ENOMEM;
 			goto error;
-		wq[i] = rxq->obj->wq;
-		ind_tbl->queues[i] = queues[i];
-	}
-	ind_tbl->queues_n = queues_n;
-	/* Finalise indirection table. */
-	for (j = 0; i != (unsigned int)(1 << wq_n); ++i, ++j)
-		wq[i] = wq[j];
-	ind_tbl->ind_table = mlx5_glue->create_rwq_ind_table
-		(priv->sh->ctx,
-		 &(struct ibv_rwq_ind_table_init_attr){
-			.log_ind_tbl_size = wq_n,
-			.ind_tbl = wq,
-			.comp_mask = 0,
-		 });
-	if (!ind_tbl->ind_table) {
-		rte_errno = errno;
-		goto error;
+		}
+		rqt_attr->rqt_max_size = priv->config.ind_table_max_size;
+		rqt_attr->rqt_actual_size = queues_n;
+		for (i = 0; i != queues_n; ++i) {
+			struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev,
+								 queues[i]);
+			if (!rxq)
+				goto error;
+			rqt_attr->rq_list[i] = rxq->obj->rq->id;
+			ind_tbl->queues[i] = queues[i];
+		}
+		ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->ctx,
+							rqt_attr);
+		rte_free(rqt_attr);
+		if (!ind_tbl->rqt) {
+			DRV_LOG(ERR, "port %u cannot create DevX RQT",
+				dev->data->port_id);
+			rte_errno = errno;
+			goto error;
+		}
+		ind_tbl->queues_n = queues_n;
 	}
 	rte_atomic32_inc(&ind_tbl->refcnt);
 	LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next);
 	return ind_tbl;
 error:
+	for (j = 0; j < i; j++)
+		mlx5_rxq_release(dev, ind_tbl->queues[j]);
 	rte_free(ind_tbl);
 	DEBUG("port %u cannot create indirection table", dev->data->port_id);
 	return NULL;
@@ -1736,9 +1973,13 @@ struct mlx5_rxq_ctrl *
 {
 	unsigned int i;
 
-	if (rte_atomic32_dec_and_test(&ind_tbl->refcnt))
-		claim_zero(mlx5_glue->destroy_rwq_ind_table
-			   (ind_tbl->ind_table));
+	if (rte_atomic32_dec_and_test(&ind_tbl->refcnt)) {
+		if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV)
+			claim_zero(mlx5_glue->destroy_rwq_ind_table
+							(ind_tbl->ind_table));
+		else if (ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX)
+			claim_zero(mlx5_devx_cmd_destroy(ind_tbl->rqt));
+	}
 	for (i = 0; i != ind_tbl->queues_n; ++i)
 		claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
 	if (!rte_atomic32_read(&ind_tbl->refcnt)) {
@@ -1805,93 +2046,145 @@ struct mlx5_hrxq *
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_hrxq *hrxq;
-	struct mlx5_ind_table_obj *ind_tbl;
 	struct ibv_qp *qp;
-#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-	struct mlx5dv_qp_init_attr qp_init_attr;
-#endif
+	struct mlx5_ind_table_obj *ind_tbl;
 	int err;
+	struct mlx5_devx_obj *tir;
 
 	queues_n = hash_fields ? queues_n : 1;
 	ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
-	if (!ind_tbl)
-		ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
+	if (!ind_tbl) {
+		struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[queues[0]];
+		struct mlx5_rxq_ctrl *rxq_ctrl =
+			container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+		enum mlx5_ind_tbl_type type;
+
+		type = rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV ?
+				MLX5_IND_TBL_TYPE_IBV : MLX5_IND_TBL_TYPE_DEVX;
+		ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n, type);
+	}
 	if (!ind_tbl) {
 		rte_errno = ENOMEM;
 		return NULL;
 	}
+	if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV) {
 #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-	memset(&qp_init_attr, 0, sizeof(qp_init_attr));
-	if (tunnel) {
-		qp_init_attr.comp_mask =
+		struct mlx5dv_qp_init_attr qp_init_attr;
+
+		memset(&qp_init_attr, 0, sizeof(qp_init_attr));
+		if (tunnel) {
+			qp_init_attr.comp_mask =
 				MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
-		qp_init_attr.create_flags = MLX5DV_QP_CREATE_TUNNEL_OFFLOADS;
-	}
+			qp_init_attr.create_flags =
+				MLX5DV_QP_CREATE_TUNNEL_OFFLOADS;
+		}
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	if (dev->data->dev_conf.lpbk_mode) {
-		/* Allow packet sent from NIC loop back w/o source MAC check. */
-		qp_init_attr.comp_mask |=
+		if (dev->data->dev_conf.lpbk_mode) {
+			/*
+			 * Allow packet sent from NIC loop back
+			 * w/o source MAC check.
+			 */
+			qp_init_attr.comp_mask |=
 				MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
-		qp_init_attr.create_flags |=
+			qp_init_attr.create_flags |=
 				MLX5DV_QP_CREATE_TIR_ALLOW_SELF_LOOPBACK_UC;
-	}
+		}
 #endif
-	qp = mlx5_glue->dv_create_qp
-		(priv->sh->ctx,
-		 &(struct ibv_qp_init_attr_ex){
-			.qp_type = IBV_QPT_RAW_PACKET,
-			.comp_mask =
-				IBV_QP_INIT_ATTR_PD |
-				IBV_QP_INIT_ATTR_IND_TABLE |
-				IBV_QP_INIT_ATTR_RX_HASH,
-			.rx_hash_conf = (struct ibv_rx_hash_conf){
-				.rx_hash_function = IBV_RX_HASH_FUNC_TOEPLITZ,
-				.rx_hash_key_len = rss_key_len,
-				.rx_hash_key = (void *)(uintptr_t)rss_key,
-				.rx_hash_fields_mask = hash_fields,
-			},
-			.rwq_ind_tbl = ind_tbl->ind_table,
-			.pd = priv->sh->pd,
-		 },
-		 &qp_init_attr);
+		qp = mlx5_glue->dv_create_qp
+			(priv->sh->ctx,
+			 &(struct ibv_qp_init_attr_ex){
+				.qp_type = IBV_QPT_RAW_PACKET,
+				.comp_mask =
+					IBV_QP_INIT_ATTR_PD |
+					IBV_QP_INIT_ATTR_IND_TABLE |
+					IBV_QP_INIT_ATTR_RX_HASH,
+				.rx_hash_conf = (struct ibv_rx_hash_conf){
+					.rx_hash_function =
+						IBV_RX_HASH_FUNC_TOEPLITZ,
+					.rx_hash_key_len = rss_key_len,
+					.rx_hash_key =
+						(void *)(uintptr_t)rss_key,
+					.rx_hash_fields_mask = hash_fields,
+				},
+				.rwq_ind_tbl = ind_tbl->ind_table,
+				.pd = priv->sh->pd,
+			  },
+			  &qp_init_attr);
 #else
-	qp = mlx5_glue->create_qp_ex
-		(priv->sh->ctx,
-		 &(struct ibv_qp_init_attr_ex){
-			.qp_type = IBV_QPT_RAW_PACKET,
-			.comp_mask =
-				IBV_QP_INIT_ATTR_PD |
-				IBV_QP_INIT_ATTR_IND_TABLE |
-				IBV_QP_INIT_ATTR_RX_HASH,
-			.rx_hash_conf = (struct ibv_rx_hash_conf){
-				.rx_hash_function = IBV_RX_HASH_FUNC_TOEPLITZ,
-				.rx_hash_key_len = rss_key_len,
-				.rx_hash_key = (void *)(uintptr_t)rss_key,
-				.rx_hash_fields_mask = hash_fields,
-			},
-			.rwq_ind_tbl = ind_tbl->ind_table,
-			.pd = priv->sh->pd,
-		 });
+		qp = mlx5_glue->create_qp_ex
+			(priv->sh->ctx,
+			 &(struct ibv_qp_init_attr_ex){
+				.qp_type = IBV_QPT_RAW_PACKET,
+				.comp_mask =
+					IBV_QP_INIT_ATTR_PD |
+					IBV_QP_INIT_ATTR_IND_TABLE |
+					IBV_QP_INIT_ATTR_RX_HASH,
+				.rx_hash_conf = (struct ibv_rx_hash_conf){
+					.rx_hash_function =
+						IBV_RX_HASH_FUNC_TOEPLITZ,
+					.rx_hash_key_len = rss_key_len,
+					.rx_hash_key =
+						(void *)(uintptr_t)rss_key,
+					.rx_hash_fields_mask = hash_fields,
+				},
+				.rwq_ind_tbl = ind_tbl->ind_table,
+				.pd = priv->sh->pd,
+			 });
 #endif
-	if (!qp) {
-		rte_errno = errno;
-		goto error;
+		if (!qp) {
+			rte_errno = errno;
+			goto error;
+		}
+	} else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
+		struct mlx5_devx_tir_attr tir_attr;
+
+		memset(&tir_attr, 0, sizeof(tir_attr));
+		tir_attr.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
+		tir_attr.rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
+		memcpy(&tir_attr.rx_hash_field_selector_outer, &hash_fields,
+		       sizeof(uint64_t));
+		tir_attr.transport_domain = priv->sh->tdn;
+		memcpy(tir_attr.rx_hash_toeplitz_key, rss_key, rss_key_len);
+		tir_attr.indirect_table = ind_tbl->rqt->id;
+		if (dev->data->dev_conf.lpbk_mode)
+			tir_attr.self_lb_block =
+					MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+		tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
+		if (!tir) {
+			DRV_LOG(ERR, "port %u cannot create DevX TIR",
+				dev->data->port_id);
+			rte_errno = errno;
+			goto error;
+		}
 	}
 	hrxq = rte_calloc(__func__, 1, sizeof(*hrxq) + rss_key_len, 0);
 	if (!hrxq)
 		goto error;
 	hrxq->ind_table = ind_tbl;
-	hrxq->qp = qp;
+	if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV) {
+		hrxq->qp = qp;
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+		hrxq->action =
+			mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp);
+		if (!hrxq->action) {
+			rte_errno = errno;
+			goto error;
+		}
+#endif
+	} else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
+		hrxq->tir = tir;
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+		hrxq->action = mlx5_glue->dv_create_flow_action_dest_devx_tir
+							(hrxq->tir->obj);
+		if (!hrxq->action) {
+			rte_errno = errno;
+			goto error;
+		}
+#endif
+	}
 	hrxq->rss_key_len = rss_key_len;
 	hrxq->hash_fields = hash_fields;
 	memcpy(hrxq->rss_key, rss_key, rss_key_len);
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	hrxq->action = mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp);
-	if (!hrxq->action) {
-		rte_errno = errno;
-		goto error;
-	}
-#endif
 	rte_atomic32_inc(&hrxq->refcnt);
 	LIST_INSERT_HEAD(&priv->hrxqs, hrxq, next);
 	return hrxq;
@@ -1900,6 +2193,8 @@ struct mlx5_hrxq *
 	mlx5_ind_table_obj_release(dev, ind_tbl);
 	if (qp)
 		claim_zero(mlx5_glue->destroy_qp(qp));
+	else if (tir)
+		claim_zero(mlx5_devx_cmd_destroy(tir));
 	rte_errno = err; /* Restore rte_errno. */
 	return NULL;
 }
@@ -1970,7 +2265,10 @@ struct mlx5_hrxq *
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 		mlx5_glue->destroy_flow_action(hrxq->action);
 #endif
-		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
+		if (hrxq->ind_table->type == MLX5_IND_TBL_TYPE_IBV)
+			claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
+		else /* hrxq->ind_table->type == MLX5_IND_TBL_TYPE_DEVX */
+			claim_zero(mlx5_devx_cmd_destroy(hrxq->tir));
 		mlx5_ind_table_obj_release(dev, hrxq->ind_table);
 		LIST_REMOVE(hrxq, next);
 		rte_free(hrxq);
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index f4f5c0d..bd4ae80 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -80,6 +80,9 @@ struct mlx5_mprq_buf {
 /* Get pointer to the first stride. */
 #define mlx5_mprq_buf_addr(ptr) ((ptr) + 1)
 
+#define MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES 6
+#define MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES 9
+
 enum mlx5_rxq_err_state {
 	MLX5_RXQ_ERR_STATE_NO_ERROR = 0,
 	MLX5_RXQ_ERR_STATE_NEED_RESET,
@@ -174,6 +177,9 @@ struct mlx5_rxq_ctrl {
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
 	uint16_t dump_file_n; /* Number of dump files. */
+	uint32_t dbr_umem_id; /* Storing door-bell information, */
+	uint64_t dbr_offset;  /* needed when freeing door-bell. */
+	struct mlx5dv_devx_umem *wq_umem; /* WQ buffer registration info. */
 };
 
 enum mlx5_ind_tbl_type {
@@ -324,7 +330,8 @@ int mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
 int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
-struct mlx5_rxq_obj *mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_obj *mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
+				      enum mlx5_rxq_obj_type type);
 int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
 				   uint16_t desc, unsigned int socket,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 54353ee..acd2902 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -123,7 +123,8 @@
 		ret = rxq_alloc_elts(rxq_ctrl);
 		if (ret)
 			goto error;
-		rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i);
+		rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i,
+						 MLX5_RXQ_OBJ_TYPE_DEVX_RQ);
 		if (!rxq_ctrl->obj)
 			goto error;
 		rxq_ctrl->wqn = rxq_ctrl->obj->wq->wq_num;
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 67518c2..5f6554a 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -111,7 +111,7 @@
 	uint16_t vlan_offloads =
 		(on ? IBV_WQ_FLAGS_CVLAN_STRIPPING : 0) |
 		0;
-	int ret;
+	int ret = 0;
 
 	/* Validate hw support */
 	if (!priv->config.hw_vlan_strip) {
@@ -132,15 +132,27 @@
 		rxq->vlan_strip = !!on;
 		return;
 	}
-	mod = (struct ibv_wq_attr){
-		.attr_mask = IBV_WQ_ATTR_FLAGS,
-		.flags_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING,
-		.flags = vlan_offloads,
-	};
-	ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
+	if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
+		mod = (struct ibv_wq_attr){
+			.attr_mask = IBV_WQ_ATTR_FLAGS,
+			.flags_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING,
+			.flags = vlan_offloads,
+		};
+		ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
+	} else if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
+		struct mlx5_devx_modify_rq_attr rq_attr;
+
+		memset(&rq_attr, 0, sizeof(rq_attr));
+		rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+		rq_attr.state = MLX5_RQC_STATE_RDY;
+		rq_attr.vsd = (on ? 0 : 1);
+		rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD;
+		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
+	}
 	if (ret) {
-		DRV_LOG(ERR, "port %u failed to modified stripping mode: %s",
-			dev->data->port_id, strerror(rte_errno));
+		DRV_LOG(ERR, "port %u failed to modify object %d stripping "
+			"mode: %s", dev->data->port_id,
+			rxq_ctrl->obj->type, strerror(rte_errno));
 		return;
 	}
 	/* Update related bits in RX queue. */
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 22/28] net/mlx5: support LRO with single RxQ object
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (20 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 21/28] net/mlx5: create advanced RxQ using new API Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:22   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 23/28] net/mlx5: replace the external mbuf shared memory Matan Azrad
                   ` (8 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement LRO support using a single RQ object per DPDK RxQ.
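
The per-flow LRO decision added in mlx5_flow_dv.c reduces to the mask
test below (a condensed sketch of the hrxq creation path; the layer
masks and flag values are the ones defined in this patch):

	int lro = 0;

	if (mlx5_lro_on(dev)) {
		if ((dev_flow->layers & MLX5_FLOW_LAYER_IPV4_LRO) ==
		    MLX5_FLOW_LAYER_IPV4_LRO)
			lro = MLX5_FLOW_IPV4_LRO;
		else if ((dev_flow->layers & MLX5_FLOW_LAYER_IPV6_LRO) ==
			 MLX5_FLOW_LAYER_IPV6_LRO)
			lro = MLX5_FLOW_IPV6_LRO;
	}

A non-zero lro value is then propagated into the DevX TIR attributes
(lro_enable_mask, lro_max_msg_sz, lro_timeout_period_usecs), and at
port start the Rx object type is chosen accordingly:
MLX5_RXQ_OBJ_TYPE_DEVX_RQ when LRO is on, MLX5_RXQ_OBJ_TYPE_IBV
otherwise.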

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_flow.h       |  6 ++++++
 drivers/net/mlx5/mlx5_flow_dv.c    | 21 +++++++++++++++++++--
 drivers/net/mlx5/mlx5_flow_verbs.c |  3 ++-
 drivers/net/mlx5/mlx5_rxq.c        | 14 +++++++++++---
 drivers/net/mlx5/mlx5_rxtx.h       |  2 +-
 drivers/net/mlx5/mlx5_trigger.c    | 11 ++++++++---
 6 files changed, 47 insertions(+), 10 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index f3c563e..3f96bec 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -70,6 +70,12 @@
 	(MLX5_FLOW_LAYER_OUTER_L2 | MLX5_FLOW_LAYER_OUTER_L3 | \
 	 MLX5_FLOW_LAYER_OUTER_L4)
 
+/* LRO support mask, i.e. flow contains IPv4/IPv6 and TCP. */
+#define MLX5_FLOW_LAYER_IPV4_LRO \
+	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L4_TCP)
+#define MLX5_FLOW_LAYER_IPV6_LRO \
+	(MLX5_FLOW_LAYER_OUTER_L3_IPV6 | MLX5_FLOW_LAYER_OUTER_L4_TCP)
+
 /* Tunnel Masks. */
 #define MLX5_FLOW_LAYER_TUNNEL \
 	(MLX5_FLOW_LAYER_VXLAN | MLX5_FLOW_LAYER_VXLAN_GPE | \
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 36696c8..7240d3b 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -62,6 +62,9 @@
 	uint32_t attr;
 };
 
+#define MLX5_FLOW_IPV4_LRO (1 << 0)
+#define MLX5_FLOW_IPV6_LRO (1 << 1)
+
 /**
  * Initialize flow attributes structure according to flow items' types.
  *
@@ -5161,13 +5164,27 @@ struct field_modify_info modify_tcp[] = {
 					     dv->hash_fields,
 					     (*flow->queue),
 					     flow->rss.queue_num);
-			if (!hrxq)
+			if (!hrxq) {
+				int lro = 0;
+
+				if (mlx5_lro_on(dev)) {
+					if ((dev_flow->layers &
+					     MLX5_FLOW_LAYER_IPV4_LRO)
+					    == MLX5_FLOW_LAYER_IPV4_LRO)
+						lro = MLX5_FLOW_IPV4_LRO;
+					else if ((dev_flow->layers &
+						  MLX5_FLOW_LAYER_IPV6_LRO)
+						 == MLX5_FLOW_LAYER_IPV6_LRO)
+						lro = MLX5_FLOW_IPV6_LRO;
+				}
 				hrxq = mlx5_hrxq_new
 					(dev, flow->key, MLX5_RSS_HASH_KEY_LEN,
 					 dv->hash_fields, (*flow->queue),
 					 flow->rss.queue_num,
 					 !!(dev_flow->layers &
-					    MLX5_FLOW_LAYER_TUNNEL));
+					    MLX5_FLOW_LAYER_TUNNEL), lro);
+			}
+
 			if (!hrxq) {
 				rte_flow_error_set
 					(error, rte_errno,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index b3395b8..bcec3b4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1669,7 +1669,8 @@
 						     (*flow->queue),
 						     flow->rss.queue_num,
 						     !!(dev_flow->layers &
-						      MLX5_FLOW_LAYER_TUNNEL));
+							MLX5_FLOW_LAYER_TUNNEL),
+						     0);
 			if (!hrxq) {
 				rte_flow_error_set
 					(error, rte_errno,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 9712db4..c03ecf8 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2033,6 +2033,8 @@ struct mlx5_rxq_ctrl *
  *   Number of queues.
  * @param tunnel
  *   Tunnel type.
+ * @param lro
+ *   Flow rule is relevant for LRO, i.e. contains IPv4/IPv6 and TCP.
  *
  * @return
  *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
@@ -2042,14 +2044,14 @@ struct mlx5_hrxq *
 	      const uint8_t *rss_key, uint32_t rss_key_len,
 	      uint64_t hash_fields,
 	      const uint16_t *queues, uint32_t queues_n,
-	      int tunnel __rte_unused)
+	      int tunnel __rte_unused, int lro)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_hrxq *hrxq;
-	struct ibv_qp *qp;
+	struct ibv_qp *qp = NULL;
 	struct mlx5_ind_table_obj *ind_tbl;
 	int err;
-	struct mlx5_devx_obj *tir;
+	struct mlx5_devx_obj *tir = NULL;
 
 	queues_n = hash_fields ? queues_n : 1;
 	ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
@@ -2149,6 +2151,12 @@ struct mlx5_hrxq *
 		if (dev->data->dev_conf.lpbk_mode)
 			tir_attr.self_lb_block =
 					MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+		if (lro) {
+			tir_attr.lro_timeout_period_usecs =
+					priv->config.lro.timeout;
+			tir_attr.lro_max_msg_sz = 0xff;
+			tir_attr.lro_enable_mask = lro;
+		}
 		tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
 		if (!tir) {
 			DRV_LOG(ERR, "port %u cannot create DevX TIR",
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index bd4ae80..ed5f637 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -346,7 +346,7 @@ struct mlx5_hrxq *mlx5_hrxq_new(struct rte_eth_dev *dev,
 				const uint8_t *rss_key, uint32_t rss_key_len,
 				uint64_t hash_fields,
 				const uint16_t *queues, uint32_t queues_n,
-				int tunnel __rte_unused);
+				int tunnel __rte_unused, int lro);
 struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
 				const uint8_t *rss_key, uint32_t rss_key_len,
 				uint64_t hash_fields,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index acd2902..8bc2174 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -99,6 +99,9 @@
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i;
 	int ret = 0;
+	unsigned int lro_on = mlx5_lro_on(dev);
+	enum mlx5_rxq_obj_type obj_type = lro_on ? MLX5_RXQ_OBJ_TYPE_DEVX_RQ :
+						   MLX5_RXQ_OBJ_TYPE_IBV;
 
 	/* Allocate/reuse/resize mempool for Multi-Packet RQ. */
 	if (mlx5_mprq_alloc_mp(dev)) {
@@ -123,11 +126,13 @@
 		ret = rxq_alloc_elts(rxq_ctrl);
 		if (ret)
 			goto error;
-		rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i,
-						 MLX5_RXQ_OBJ_TYPE_DEVX_RQ);
+		rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i, obj_type);
 		if (!rxq_ctrl->obj)
 			goto error;
-		rxq_ctrl->wqn = rxq_ctrl->obj->wq->wq_num;
+		if (obj_type == MLX5_RXQ_OBJ_TYPE_IBV)
+			rxq_ctrl->wqn = rxq_ctrl->obj->wq->wq_num;
+		else if (obj_type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ)
+			rxq_ctrl->wqn = rxq_ctrl->obj->rq->id;
 	}
 	return 0;
 error:
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 23/28] net/mlx5: replace the external mbuf shared memory
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (21 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 22/28] net/mlx5: support LRO with single RxQ object Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:21   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 24/28] net/mlx5: update LRO fields in completion entry Matan Azrad
                   ` (7 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

In preparation for LRO support, where a packet can consume all the
stride memory, the external mbuf shared information can no longer
reside at the end of the stride, because the HW may write packet data
across the entire stride memory.

Move the shared information memory from the stride to the control
memory of the external mbuf.
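
For orientation, here is a minimal sketch, not part of the patch, of
the buffer layout this change produces; it mirrors the
mlx5_mprq_buf_addr() macro added in the mlx5_rxtx.h hunk below (the
helper name is hypothetical):

    /*
     * Per-WQE buffer layout after this patch:
     *
     *   struct mlx5_mprq_buf       control memory: mp, refcnt, pad[]
     *   shinfos[0..strd_n-1]       one rte_mbuf_ext_shared_info per stride
     *   first-packet head-room     RTE_PKTMBUF_HEADROOM bytes
     *   strides                    HW-written packet data
     */
    static inline void *
    mprq_first_stride(struct mlx5_mprq_buf *buf, unsigned int strd_n)
    {
            return RTE_PTR_ADD(buf, sizeof(*buf) +
                               strd_n *
                               sizeof(struct rte_mbuf_ext_shared_info) +
                               RTE_PKTMBUF_HEADROOM);
    }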

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c  | 22 +++++++++++++++-------
 drivers/net/mlx5/mlx5_rxtx.c | 19 +++++++++++--------
 drivers/net/mlx5/mlx5_rxtx.h | 12 +++++++++++-
 3 files changed, 37 insertions(+), 16 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index c03ecf8..344ff90 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1361,14 +1361,22 @@ struct mlx5_rxq_obj *
  * Callback function to initialize mbufs for Multi-Packet RQ.
  */
 static inline void
-mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg __rte_unused,
+mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
 		    void *_m, unsigned int i __rte_unused)
 {
 	struct mlx5_mprq_buf *buf = _m;
+	struct rte_mbuf_ext_shared_info *shinfo;
+	unsigned int strd_n = (unsigned int)(uintptr_t)opaque_arg;
+	unsigned int j;
 
 	memset(_m, 0, sizeof(*buf));
 	buf->mp = mp;
 	rte_atomic16_set(&buf->refcnt, 1);
+	for (j = 0; j != strd_n; ++j) {
+		shinfo = &buf->shinfos[j];
+		shinfo->free_cb = mlx5_mprq_buf_free_cb;
+		shinfo->fcb_opaque = buf;
+	}
 }
 
 /**
@@ -1463,7 +1471,8 @@ struct mlx5_rxq_obj *
 	}
 	assert(strd_num_n && strd_sz_n);
 	buf_len = (1 << strd_num_n) * (1 << strd_sz_n);
-	obj_size = buf_len + sizeof(struct mlx5_mprq_buf);
+	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len + (1 << strd_num_n) *
+		sizeof(struct rte_mbuf_ext_shared_info) + RTE_PKTMBUF_HEADROOM;
 	/*
 	 * Received packets can be either memcpy'd or externally referenced. In
 	 * case that the packet is attached to an mbuf as an external buffer, as
@@ -1508,7 +1517,8 @@ struct mlx5_rxq_obj *
 	}
 	snprintf(name, sizeof(name), "port-%u-mprq", dev->data->port_id);
 	mp = rte_mempool_create(name, obj_num, obj_size, MLX5_MPRQ_MP_CACHE_SZ,
-				0, NULL, NULL, mlx5_mprq_buf_init, NULL,
+				0, NULL, NULL, mlx5_mprq_buf_init,
+				(void *)(uintptr_t)(1 << strd_num_n),
 				dev->device->numa_node, 0);
 	if (mp == NULL) {
 		DRV_LOG(ERR,
@@ -1594,10 +1604,8 @@ struct mlx5_rxq_ctrl *
 	 *  Otherwise, enable Rx scatter if necessary.
 	 */
 	assert(mb_len >= RTE_PKTMBUF_HEADROOM);
-	mprq_stride_size =
-		dev->data->dev_conf.rxmode.max_rx_pkt_len +
-		sizeof(struct rte_mbuf_ext_shared_info) +
-		RTE_PKTMBUF_HEADROOM;
+	mprq_stride_size = dev->data->dev_conf.rxmode.max_rx_pkt_len +
+				RTE_PKTMBUF_HEADROOM;
 	if (mprq_en &&
 	    desc > (1U << config->mprq.stride_num_n) &&
 	    mprq_stride_size <= (1U << config->mprq.max_stride_size_n)) {
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 584da3e..241e01b 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -100,7 +100,8 @@ enum mlx5_txcmp_code {
 	       volatile struct mlx5_cqe *cqe, uint32_t rss_hash_res);
 
 static __rte_always_inline void
-mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx);
+mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx,
+		 const unsigned int strd_n);
 
 static int
 mlx5_queue_state_modify(struct rte_eth_dev *dev,
@@ -756,7 +757,8 @@ enum mlx5_txcmp_code {
 
 			scat = &((volatile struct mlx5_wqe_mprq *)
 				rxq->wqes)[i].dseg;
-			addr = (uintptr_t)mlx5_mprq_buf_addr(buf);
+			addr = (uintptr_t)mlx5_mprq_buf_addr(buf,
+							 1 << rxq->strd_num_n);
 			byte_count = (1 << rxq->strd_sz_n) *
 					(1 << rxq->strd_num_n);
 		} else {
@@ -1392,7 +1394,8 @@ enum mlx5_txcmp_code {
 }
 
 static inline void
-mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx)
+mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx,
+		 const unsigned int strd_n)
 {
 	struct mlx5_mprq_buf *rep = rxq->mprq_repl;
 	volatile struct mlx5_wqe_data_seg *wqe =
@@ -1403,7 +1406,7 @@ enum mlx5_txcmp_code {
 	/* Replace MPRQ buf. */
 	(*rxq->mprq_bufs)[rq_idx] = rep;
 	/* Replace WQE. */
-	addr = mlx5_mprq_buf_addr(rep);
+	addr = mlx5_mprq_buf_addr(rep, strd_n);
 	wqe->addr = rte_cpu_to_be_64((uintptr_t)addr);
 	/* If there's only one MR, no need to replace LKey in WQE. */
 	if (unlikely(mlx5_mr_btree_len(&rxq->mr_ctrl.cache_bh) > 1))
@@ -1459,7 +1462,7 @@ enum mlx5_txcmp_code {
 		if (consumed_strd == strd_n) {
 			/* Replace WQE only if the buffer is still in use. */
 			if (rte_atomic16_read(&buf->refcnt) > 1) {
-				mprq_buf_replace(rxq, rq_ci & wq_mask);
+				mprq_buf_replace(rxq, rq_ci & wq_mask, strd_n);
 				/* Release the old buffer. */
 				mlx5_mprq_buf_free(buf);
 			} else if (unlikely(rxq->mprq_repl == NULL)) {
@@ -1521,7 +1524,7 @@ enum mlx5_txcmp_code {
 		if (rxq->crc_present)
 			len -= RTE_ETHER_CRC_LEN;
 		offset = strd_idx * strd_sz + strd_shift;
-		addr = RTE_PTR_ADD(mlx5_mprq_buf_addr(buf), offset);
+		addr = RTE_PTR_ADD(mlx5_mprq_buf_addr(buf, strd_n), offset);
 		/* Initialize the offload flag. */
 		pkt->ol_flags = 0;
 		/*
@@ -1557,8 +1560,8 @@ enum mlx5_txcmp_code {
 			 */
 			buf_iova = rte_mempool_virt2iova(buf) +
 				   RTE_PTR_DIFF(addr, buf);
-			shinfo = rte_pktmbuf_ext_shinfo_init_helper(addr,
-					&buf_len, mlx5_mprq_buf_free_cb, buf);
+			shinfo = &buf->shinfos[strd_idx];
+			rte_mbuf_ext_refcnt_set(shinfo, 1);
 			/*
 			 * EXT_ATTACHED_MBUF will be set to pkt->ol_flags when
 			 * attaching the stride to mbuf and more offload flags
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index ed5f637..bbd9b31 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -75,10 +75,20 @@ struct mlx5_mprq_buf {
 	struct rte_mempool *mp;
 	rte_atomic16_t refcnt; /* Atomically accessed refcnt. */
 	uint8_t pad[RTE_PKTMBUF_HEADROOM]; /* Headroom for the first packet. */
+	struct rte_mbuf_ext_shared_info shinfos[];
+	/*
+	 * Shared information per stride.
+	 * More memory will be allocated for the first stride head-room and for
+	 * the strides data.
+	 */
 } __rte_cache_aligned;
 
 /* Get pointer to the first stride. */
-#define mlx5_mprq_buf_addr(ptr) ((ptr) + 1)
+#define mlx5_mprq_buf_addr(ptr, strd_n) (RTE_PTR_ADD((ptr), \
+				sizeof(struct mlx5_mprq_buf) + \
+				(strd_n) * \
+				sizeof(struct rte_mbuf_ext_shared_info) + \
+				RTE_PKTMBUF_HEADROOM))
 
 #define MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES 6
 #define MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES 9
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 24/28] net/mlx5: update LRO fields in completion entry
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (22 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 23/28] net/mlx5: replace the external mbuf shared memory Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:23   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 25/28] net/mlx5: handle LRO packets in Rx queue Matan Azrad
                   ` (6 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

Update the CQE structure to include LRO fields.

Some reserved fields were repurposed, hence the data-path code that
used those reserved fields was updated accordingly.
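
A minimal data-path sketch, illustrative only, of how the new layout
can be consumed; the field name is taken from the struct hunk below,
while the helper itself is hypothetical:

    /* A CQE with lro_num_seg > 1 describes a HW-coalesced packet; the
     * other new fields feed the SW header rewrite added later in this
     * series. */
    static inline int
    mlx5_cqe_is_lro(volatile struct mlx5_cqe *cqe)
    {
            return cqe->lro_num_seg > 1;
    }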

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_prm.h          | 12 +++++++++---
 drivers/net/mlx5/mlx5_rxtx_vec.h     |  6 ++----
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 16 ++++++++--------
 3 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index b0e281f..3f73a28 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -317,13 +317,19 @@ struct mlx5_cqe {
 	uint8_t pkt_info;
 	uint8_t rsvd0;
 	uint16_t wqe_id;
-	uint8_t rsvd3[8];
+	uint8_t lro_tcppsh_abort_dupack;
+	uint8_t lro_min_ttl;
+	uint16_t lro_tcp_win;
+	uint32_t lro_ack_seq_num;
 	uint32_t rx_hash_res;
 	uint8_t rx_hash_type;
-	uint8_t rsvd1[11];
+	uint8_t rsvd1[3];
+	uint16_t csum;
+	uint8_t rsvd2[6];
 	uint16_t hdr_type_etc;
 	uint16_t vlan_info;
-	uint8_t rsvd2[12];
+	uint8_t lro_num_seg;
+	uint8_t rsvd3[11];
 	uint32_t byte_cnt;
 	uint64_t timestamp;
 	uint32_t sop_drop_qpn;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 4220b08..b54ff72 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -60,13 +60,11 @@
 #endif
 S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, rx_hash_res) ==
 		  offsetof(struct mlx5_cqe, pkt_info) + 12);
-S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, rsvd1) +
-		  sizeof(((struct mlx5_cqe *)0)->rsvd1) ==
+S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, rsvd1) + 11 ==
 		  offsetof(struct mlx5_cqe, hdr_type_etc));
 S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, vlan_info) ==
 		  offsetof(struct mlx5_cqe, hdr_type_etc) + 2);
-S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, rsvd2) +
-		  sizeof(((struct mlx5_cqe *)0)->rsvd2) ==
+S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, lro_num_seg) + 12 ==
 		  offsetof(struct mlx5_cqe, byte_cnt));
 S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, sop_drop_qpn) ==
 		  RTE_ALIGN(offsetof(struct mlx5_cqe, sop_drop_qpn), 8));
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index 7bd254f..ca8ed41 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -533,12 +533,12 @@
 		cqe_tmp1 = _mm_load_si128((__m128i *)&cq[pos + p2]);
 		cqes[3] = _mm_blendv_epi8(cqes[3], cqe_tmp2, blend_mask);
 		cqes[2] = _mm_blendv_epi8(cqes[2], cqe_tmp1, blend_mask);
-		cqe_tmp2 = _mm_loadu_si128((__m128i *)&cq[pos + p3].rsvd1[3]);
-		cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos + p2].rsvd1[3]);
+		cqe_tmp2 = _mm_loadu_si128((__m128i *)&cq[pos + p3].csum);
+		cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos + p2].csum);
 		cqes[3] = _mm_blend_epi16(cqes[3], cqe_tmp2, 0x30);
 		cqes[2] = _mm_blend_epi16(cqes[2], cqe_tmp1, 0x30);
-		cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p3].rsvd2[10]);
-		cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos + p2].rsvd2[10]);
+		cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p3].rsvd3[9]);
+		cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos + p2].rsvd3[9]);
 		cqes[3] = _mm_blend_epi16(cqes[3], cqe_tmp2, 0x04);
 		cqes[2] = _mm_blend_epi16(cqes[2], cqe_tmp1, 0x04);
 		/* C.2 generate final structure for mbuf with swapping bytes. */
@@ -560,12 +560,12 @@
 		cqe_tmp1 = _mm_load_si128((__m128i *)&cq[pos]);
 		cqes[1] = _mm_blendv_epi8(cqes[1], cqe_tmp2, blend_mask);
 		cqes[0] = _mm_blendv_epi8(cqes[0], cqe_tmp1, blend_mask);
-		cqe_tmp2 = _mm_loadu_si128((__m128i *)&cq[pos + p1].rsvd1[3]);
-		cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos].rsvd1[3]);
+		cqe_tmp2 = _mm_loadu_si128((__m128i *)&cq[pos + p1].csum);
+		cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos].csum);
 		cqes[1] = _mm_blend_epi16(cqes[1], cqe_tmp2, 0x30);
 		cqes[0] = _mm_blend_epi16(cqes[0], cqe_tmp1, 0x30);
-		cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p1].rsvd2[10]);
-		cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos].rsvd2[10]);
+		cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p1].rsvd3[9]);
+		cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos].rsvd3[9]);
 		cqes[1] = _mm_blend_epi16(cqes[1], cqe_tmp2, 0x04);
 		cqes[0] = _mm_blend_epi16(cqes[0], cqe_tmp1, 0x04);
 		/* C.2 generate final structure for mbuf with swapping bytes. */
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 25/28] net/mlx5: handle LRO packets in Rx queue
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (23 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 24/28] net/mlx5: update LRO fields in completion entry Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:26   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 26/28] net/mlx5: zero the LRO mbuf headroom Matan Azrad
                   ` (5 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

When LRO offload is configured on an Rx queue, the HW may coalesce TCP
packets from the same TCP connection into a single packet.

In this case the SW should fix the relevant packet headers because the
HW doesn't update them according to the newly created packet
characteristics.

Add header update code to the MPRQ Rx burst function to support the
LRO feature.
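
The TCP checksum completion performed by mlx5_lro_update_tcp_hdr() in
the hunk below reduces to folding a 32-bit one's-complement sum. A
stand-alone sketch of that step follows; the helper name is
hypothetical, and a second fold is added here defensively to absorb a
possible carry:

    static inline uint16_t
    fold_csum32(uint32_t csum)
    {
            /* Fold the 32-bit partial sum (pseudo-header + HW payload
             * checksum + TCP header) into 16 bits. */
            csum = (csum >> 16) + (csum & 0xffff);
            csum = (csum >> 16) + (csum & 0xffff);
            csum = ~csum & 0xffff;
            /* Mirror the 0 -> 0xffff mapping done in the patch. */
            return csum == 0 ? 0xffff : csum;
    }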

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_prm.h  |  15 ++++++
 drivers/net/mlx5/mlx5_rxtx.c | 113 +++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 123 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 3f73a28..32bc7a6 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -155,6 +155,21 @@
 /* Tunnel packet bit in the CQE. */
 #define MLX5_CQE_RX_TUNNEL_PACKET (1u << 0)
 
+/* Mask for LRO push flag in the CQE lro_tcppsh_abort_dupack field. */
+#define MLX5_CQE_LRO_PUSH_MASK 0x40
+
+/* Mask for L4 type in the CQE hdr_type_etc field. */
+#define MLX5_CQE_L4_TYPE_MASK 0x70
+
+/* The bit index of L4 type in CQE hdr_type_etc field. */
+#define MLX5_CQE_L4_TYPE_SHIFT 0x4
+
+/* L4 type to indicate TCP packet without acknowledgment. */
+#define MLX5_L4_HDR_TYPE_TCP_EMPTY_ACK 0x3
+
+/* L4 type to indicate TCP packet with acknowledgment. */
+#define MLX5_L4_HDR_TYPE_TCP_WITH_ACL 0x4
+
 /* Inner L3 checksum offload (Tunneled packets only). */
 #define MLX5_ETH_WQE_L3_INNER_CSUM (1u << 4)
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 241e01b..c7487ac 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1374,6 +1374,101 @@ enum mlx5_txcmp_code {
 	return i;
 }
 
+/**
+ * Update LRO packet TCP header.
+ * The HW LRO feature doesn't update the TCP header after coalescing the
+ * TCP segments but supplies information in CQE to fill it by SW.
+ *
+ * @param tcp
+ *   Pointer to the TCP header.
+ * @param cqe
+ *   Pointer to the completion entry.
+ * @param phcsum
+ *   The L3 pseudo-header checksum.
+ */
+static inline void
+mlx5_lro_update_tcp_hdr(struct rte_tcp_hdr *restrict tcp,
+			volatile struct mlx5_cqe *restrict cqe,
+			uint32_t phcsum)
+{
+	uint8_t l4_type = (rte_be_to_cpu_16(cqe->hdr_type_etc) &
+			   MLX5_CQE_L4_TYPE_MASK) >> MLX5_CQE_L4_TYPE_SHIFT;
+	/*
+	 * The HW calculates only the TCP payload checksum, need to complete
+	 * the TCP header checksum and the L3 pseudo-header checksum.
+	 */
+	uint32_t csum = phcsum + cqe->csum;
+
+	if (l4_type == MLX5_L4_HDR_TYPE_TCP_EMPTY_ACK ||
+	    l4_type == MLX5_L4_HDR_TYPE_TCP_WITH_ACL) {
+		tcp->tcp_flags |= RTE_TCP_ACK_FLAG;
+		tcp->recv_ack = cqe->lro_ack_seq_num;
+		tcp->rx_win = cqe->lro_tcp_win;
+	}
+	if (cqe->lro_tcppsh_abort_dupack & MLX5_CQE_LRO_PUSH_MASK)
+		tcp->tcp_flags |= RTE_TCP_PSH_FLAG;
+	tcp->cksum = 0;
+	csum += rte_raw_cksum(tcp, (tcp->data_off & 0xF) * 4);
+	csum = ((csum & 0xffff0000) >> 16) + (csum & 0xffff);
+	csum = (~csum) & 0xffff;
+	if (csum == 0)
+		csum = 0xffff;
+	tcp->cksum = csum;
+}
+
+/**
+ * Update LRO packet headers.
+ * The HW LRO feature doesn't update the L3/TCP headers after coalescing the
+ * TCP segments but supplies information in CQE to fill it by SW.
+ *
+ * @param padd
+ *   The packet address.
+ * @param cqe
+ *   Pointer to the completion entry.
+ * @param len
+ *   The packet length.
+ */
+static inline void
+mlx5_lro_update_hdr(uint8_t *restrict padd,
+		    volatile struct mlx5_cqe *restrict cqe,
+		    uint32_t len)
+{
+	union {
+		struct rte_ether_hdr *eth;
+		struct rte_vlan_hdr *vlan;
+		struct rte_ipv4_hdr *ipv4;
+		struct rte_ipv6_hdr *ipv6;
+		struct rte_tcp_hdr *tcp;
+		uint8_t *hdr;
+	} h = {
+			.hdr = padd,
+	};
+	uint16_t proto = h.eth->ether_type;
+	uint32_t phcsum;
+
+	h.eth++;
+	while (proto == RTE_BE16(RTE_ETHER_TYPE_VLAN) ||
+	       proto == RTE_BE16(RTE_ETHER_TYPE_QINQ)) {
+		proto = h.vlan->eth_proto;
+		h.vlan++;
+	}
+	if (proto == RTE_BE16(RTE_ETHER_TYPE_IPV4)) {
+		h.ipv4->time_to_live = cqe->lro_min_ttl;
+		h.ipv4->total_length = rte_cpu_to_be_16(len - (h.hdr - padd));
+		h.ipv4->hdr_checksum = 0;
+		h.ipv4->hdr_checksum = rte_ipv4_cksum(h.ipv4);
+		phcsum = rte_ipv4_phdr_cksum(h.ipv4, 0);
+		h.ipv4++;
+	} else {
+		h.ipv6->hop_limits = cqe->lro_min_ttl;
+		h.ipv6->payload_len = rte_cpu_to_be_16(len - (h.hdr - padd) -
+						       sizeof(*h.ipv6));
+		phcsum = rte_ipv6_phdr_cksum(h.ipv6, 0);
+		h.ipv6++;
+	}
+	mlx5_lro_update_tcp_hdr(h.tcp, cqe, phcsum);
+}
+
 void
 mlx5_mprq_buf_free_cb(void *addr __rte_unused, void *opaque)
 {
@@ -1458,6 +1553,7 @@ enum mlx5_txcmp_code {
 		uint32_t byte_cnt;
 		volatile struct mlx5_mini_cqe8 *mcqe = NULL;
 		uint32_t rss_hash_res = 0;
+		uint8_t lro_num_seg;
 
 		if (consumed_strd == strd_n) {
 			/* Replace WQE only if the buffer is still in use. */
@@ -1503,6 +1599,7 @@ enum mlx5_txcmp_code {
 		}
 		assert(strd_idx < strd_n);
 		assert(!((rte_be_to_cpu_16(cqe->wqe_id) ^ rq_ci) & wq_mask));
+		lro_num_seg = cqe->lro_num_seg;
 		/*
 		 * Currently configured to receive a packet per a stride. But if
 		 * MTU is adjusted through kernel interface, device could
@@ -1510,7 +1607,7 @@ enum mlx5_txcmp_code {
 		 * case, the packet should be dropped because it is bigger than
 		 * the max_rx_pkt_len.
 		 */
-		if (unlikely(strd_cnt > 1)) {
+		if (unlikely(!lro_num_seg && strd_cnt > 1)) {
 			++rxq->stats.idropped;
 			continue;
 		}
@@ -1547,19 +1644,20 @@ enum mlx5_txcmp_code {
 			rte_iova_t buf_iova;
 			struct rte_mbuf_ext_shared_info *shinfo;
 			uint16_t buf_len = strd_cnt * strd_sz;
+			void *buf_addr;
 
 			/* Increment the refcnt of the whole chunk. */
 			rte_atomic16_add_return(&buf->refcnt, 1);
 			assert((uint16_t)rte_atomic16_read(&buf->refcnt) <=
 			       strd_n + 1);
-			addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
+			buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
 			/*
 			 * MLX5 device doesn't use iova but it is necessary in a
 			 * case where the Rx packet is transmitted via a
 			 * different PMD.
 			 */
 			buf_iova = rte_mempool_virt2iova(buf) +
-				   RTE_PTR_DIFF(addr, buf);
+				   RTE_PTR_DIFF(buf_addr, buf);
 			shinfo = &buf->shinfos[strd_idx];
 			rte_mbuf_ext_refcnt_set(shinfo, 1);
 			/*
@@ -1568,8 +1666,8 @@ enum mlx5_txcmp_code {
 			 * will be added below by calling rxq_cq_to_mbuf().
 			 * Other fields will be overwritten.
 			 */
-			rte_pktmbuf_attach_extbuf(pkt, addr, buf_iova, buf_len,
-						  shinfo);
+			rte_pktmbuf_attach_extbuf(pkt, buf_addr, buf_iova,
+						  buf_len, shinfo);
 			rte_pktmbuf_reset_headroom(pkt);
 			assert(pkt->ol_flags == EXT_ATTACHED_MBUF);
 			/*
@@ -1583,6 +1681,11 @@ enum mlx5_txcmp_code {
 			}
 		}
 		rxq_cq_to_mbuf(rxq, pkt, cqe, rss_hash_res);
+		if (lro_num_seg > 1) {
+			mlx5_lro_update_hdr(addr, cqe, len);
+			pkt->ol_flags |= PKT_RX_LRO;
+			pkt->tso_segsz = strd_sz;
+		}
 		PKT_LEN(pkt) = len;
 		DATA_LEN(pkt) = len;
 		PORT(pkt) = rxq->port_id;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 26/28] net/mlx5: zero the LRO mbuf headroom
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (24 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 25/28] net/mlx5: handle LRO packets in Rx queue Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:23   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 27/28] net/mlx5: adjust the maximum LRO message size Matan Azrad
                   ` (4 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

An LRO packet may consume all the stride memory, hence the PMD cannot
guarantee head-room for the LRO mbuf.

The issue is the lack of HW support for writing the packet at an
offset from the stride start.

A new striding RQ feature may be added in CX6 DX to allow head-room and
tail-room for the LRO strides.
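
An application-facing consequence, as a hedged sketch; the mbuf calls
are standard DPDK API (rte_mbuf.h, rte_ether.h), while the helper and
fallback strategy are hypothetical:

    /* Hypothetical helper: with LRO enabled the ext-buf mbuf is
     * attached with data_off == 0, so head-room must be checked
     * rather than assumed before prepending a header. */
    static int
    prepend_eth(struct rte_mbuf *pkt)
    {
            if (rte_pktmbuf_prepend(pkt,
                                    sizeof(struct rte_ether_hdr)) == NULL)
                    return -1; /* zero head-room: caller must copy */
            return 0;
    }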

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c  | 16 +++++++++++-----
 drivers/net/mlx5/mlx5_rxtx.c |  6 ++++--
 drivers/net/mlx5/mlx5_rxtx.h |  3 ++-
 3 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 344ff90..7c252e3 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1569,6 +1569,12 @@ struct mlx5_rxq_ctrl *
 	unsigned int mprq_stride_size;
 	struct mlx5_dev_config *config = &priv->config;
 	/*
+	 * LRO packet may consume all the stride memory, hence we cannot
+	 * guarantee head-room. A new striding RQ feature may be added in CX6 DX
+	 * to allow head-room and tail-room for the LRO packets.
+	 */
+	unsigned int strd_headroom_en = mlx5_lro_on(dev) ? 0 : 1;
+	/*
 	 * Always allocate extra slots, even if eventually
 	 * the vector Rx will not be used.
 	 */
@@ -1603,9 +1609,9 @@ struct mlx5_rxq_ctrl *
 	 *    stride.
 	 *  Otherwise, enable Rx scatter if necessary.
 	 */
-	assert(mb_len >= RTE_PKTMBUF_HEADROOM);
+	assert(mb_len >= RTE_PKTMBUF_HEADROOM * strd_headroom_en);
 	mprq_stride_size = dev->data->dev_conf.rxmode.max_rx_pkt_len +
-				RTE_PKTMBUF_HEADROOM;
+				RTE_PKTMBUF_HEADROOM * strd_headroom_en;
 	if (mprq_en &&
 	    desc > (1U << config->mprq.stride_num_n) &&
 	    mprq_stride_size <= (1U << config->mprq.max_stride_size_n)) {
@@ -1617,9 +1623,9 @@ struct mlx5_rxq_ctrl *
 		tmpl->rxq.strd_sz_n = RTE_MAX(log2above(mprq_stride_size),
 					      config->mprq.min_stride_size_n);
 		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
-		tmpl->rxq.mprq_max_memcpy_len =
-			RTE_MIN(mb_len - RTE_PKTMBUF_HEADROOM,
-				config->mprq.max_memcpy_len);
+		tmpl->rxq.strd_headroom_en = strd_headroom_en;
+		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(mb_len -
+			    RTE_PKTMBUF_HEADROOM, config->mprq.max_memcpy_len);
 		DRV_LOG(DEBUG,
 			"port %u Rx queue %u: Multi-Packet RQ is enabled"
 			" strd_num_n = %u, strd_sz_n = %u",
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index c7487ac..3872966 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1540,6 +1540,7 @@ enum mlx5_txcmp_code {
 	unsigned int i = 0;
 	uint32_t rq_ci = rxq->rq_ci;
 	uint16_t consumed_strd = rxq->consumed_strd;
+	uint16_t headroom_sz = rxq->strd_headroom_en * RTE_PKTMBUF_HEADROOM;
 	struct mlx5_mprq_buf *buf = (*rxq->mprq_bufs)[rq_ci & wq_mask];
 
 	while (i < pkts_n) {
@@ -1650,7 +1651,7 @@ enum mlx5_txcmp_code {
 			rte_atomic16_add_return(&buf->refcnt, 1);
 			assert((uint16_t)rte_atomic16_read(&buf->refcnt) <=
 			       strd_n + 1);
-			buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
+			buf_addr = RTE_PTR_SUB(addr, headroom_sz);
 			/*
 			 * MLX5 device doesn't use iova but it is necessary in a
 			 * case where the Rx packet is transmitted via a
@@ -1668,7 +1669,8 @@ enum mlx5_txcmp_code {
 			 */
 			rte_pktmbuf_attach_extbuf(pkt, buf_addr, buf_iova,
 						  buf_len, shinfo);
-			rte_pktmbuf_reset_headroom(pkt);
+			/* Set mbuf head-room. */
+			pkt->data_off = headroom_sz;
 			assert(pkt->ol_flags == EXT_ATTACHED_MBUF);
 			/*
 			 * Prevent potential overflow due to MTU change through
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index bbd9b31..4252832 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -114,7 +114,8 @@ struct mlx5_rxq_data {
 	unsigned int strd_sz_n:4; /* Log 2 of stride size. */
 	unsigned int strd_shift_en:1; /* Enable 2bytes shift on a stride. */
 	unsigned int err_state:2; /* enum mlx5_rxq_err_state. */
-	unsigned int :4; /* Remaining bits. */
+	unsigned int strd_headroom_en:1; /* Enable mbuf headroom in MPRQ. */
+	unsigned int :3; /* Remaining bits. */
 	volatile uint32_t *rq_db;
 	volatile uint32_t *cq_db;
 	uint16_t port_id;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 27/28] net/mlx5: adjust the maximum LRO message size
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (25 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 26/28] net/mlx5: zero the LRO mbuf headroom Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:23   ` Slava Ovsiienko
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 28/28] doc: update MLX5 doc and release notes with LRO Matan Azrad
                   ` (3 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

An LRO message is contained in the MPRQ strides.
While the LRO message size cannot exceed 65280 bytes according to the
PRM, the strides which contain it may exceed the maximum buffer size
allowed in a DPDK mbuf (0xFFFF).

Adjust the maximum LRO message size to avoid buffer length overflow.
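
The adjustment below works in 256-byte granules, since the TIR
lro_max_msg_sz field is clamped to UINT8_MAX (255 * 256 = 65280, the
PRM limit). A worked example, under assumed stride parameters:

    /* Assumed: strd_sz = 2048, strd_n = 64.
     *   max_buf_len = 2048 * 64 = 131072 > UINT16_MAX
     *     -> RTE_ALIGN_FLOOR(65535, 2048) = 63488
     *   63488 / 256 = 248 granules, 248 <= UINT8_MAX
     *     -> lro_max_msg_sz = 248, i.e. at most 63488 bytes per
     *        coalesced LRO message on this queue.
     */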

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h     |  1 +
 drivers/net/mlx5/mlx5_rxq.c | 37 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 1dc8b7c..24fe817 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -624,6 +624,7 @@ struct mlx5_priv {
 	struct ibv_flow_action *verbs_action;
 	/**< Verbs modify header action object. */
 	uint8_t ft_type; /**< Flow table type, Rx or Tx. */
+	uint8_t max_lro_msg_size;
 	/* Tags resources cache. */
 	uint32_t link_speed_capa; /* Link speed capabilities. */
 	struct mlx5_xstats_ctrl xstats_ctrl; /* Extended stats control. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 7c252e3..51b2f00 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1544,6 +1544,39 @@ struct mlx5_rxq_obj *
 }
 
 /**
+ * Adjust the maximum LRO message size.
+ * LRO message is contained in the MPRQ strides.
+ * While the LRO message size cannot be bigger than 65280 according to the
+ * PRM, the strides which contain it may be bigger.
+ * Adjust the maximum LRO message size to avoid buffer length overflow.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param strd_n
+ *   Number of strides per WQE.
+ * @param strd_sz
+ *   The stride size.
+ */
+static void
+mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint32_t strd_n,
+			     uint32_t strd_sz)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	uint32_t max_buf_len = strd_sz * strd_n;
+
+	if (max_buf_len > (uint64_t)UINT16_MAX)
+		max_buf_len = RTE_ALIGN_FLOOR((uint32_t)UINT16_MAX, strd_sz);
+	max_buf_len /= 256;
+	max_buf_len = RTE_MIN(max_buf_len, (uint32_t)UINT8_MAX);
+	assert(max_buf_len);
+	if (priv->max_lro_msg_size)
+		priv->max_lro_msg_size =
+			RTE_MIN((uint32_t)priv->max_lro_msg_size, max_buf_len);
+	else
+		priv->max_lro_msg_size = max_buf_len;
+}
+
+/**
  * Create a DPDK Rx queue.
  *
  * @param dev
@@ -1626,6 +1659,8 @@ struct mlx5_rxq_ctrl *
 		tmpl->rxq.strd_headroom_en = strd_headroom_en;
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(mb_len -
 			    RTE_PKTMBUF_HEADROOM, config->mprq.max_memcpy_len);
+		mlx5_max_lro_msg_size_adjust(dev, (1 << tmpl->rxq.strd_num_n),
+					     (1 << tmpl->rxq.strd_sz_n));
 		DRV_LOG(DEBUG,
 			"port %u Rx queue %u: Multi-Packet RQ is enabled"
 			" strd_num_n = %u, strd_sz_n = %u",
@@ -2168,7 +2203,7 @@ struct mlx5_hrxq *
 		if (lro) {
 			tir_attr.lro_timeout_period_usecs =
 					priv->config.lro.timeout;
-			tir_attr.lro_max_msg_sz = 0xff;
+			tir_attr.lro_max_msg_sz = priv->max_lro_msg_size;
 			tir_attr.lro_enable_mask = lro;
 		}
 		tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH 28/28] doc: update MLX5 doc and release notes with LRO
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (26 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 27/28] net/mlx5: adjust the maximum LRO message size Matan Azrad
@ 2019-07-22  9:13 ` Matan Azrad
  2019-07-22  9:23   ` Slava Ovsiienko
  2019-07-22 10:42 ` [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Raslan Darawsheh
                   ` (2 subsequent siblings)
  30 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Add documentation of LRO feature.
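
A hedged usage sketch, combining the lro_timeout_usec device argument
documented below with testpmd's LRO enable option; the PCI address and
the timeout value are hypothetical:

    testpmd -w 0000:03:00.0,lro_timeout_usec=32 -- --enable-lro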

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/nics/features/mlx5.ini      |  1 +
 doc/guides/nics/mlx5.rst               | 14 ++++++++++++++
 doc/guides/rel_notes/release_19_08.rst |  2 +-
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index f7e7358..c0ebdbd 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -13,6 +13,7 @@ Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
+LRO                  = Y
 TSO                  = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 7e87344..85d96be 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -82,6 +82,7 @@ Features
   increment/decrement, count, drop, mark. For details please see :ref:`Supported hardware offloads using rte_flow API`.
 - Flow insertion rate of more than a million flows per second, when using Direct Rules.
 - Support for multiple rte_flow groups.
+- Hardware LRO.
 
 Limitations
 -----------
@@ -162,6 +163,11 @@ Limitations
 
 - ICMP/ICMP6 code/type matching cannot be supported together with IP-in-IP tunnel.
 
+- LRO:
+
+  - No mbuf head-room space is created for Rx packets when LRO is configured.
+  - scatter_fcs is disabled when LRO is configured.
+
 Statistics
 ----------
 
@@ -556,6 +562,14 @@ Run-time configuration
 
   set to 128 by default.
 
+- ``lro_timeout_usec`` parameter [int]
+
+  The maximum allowed duration of an LRO session, in microseconds.
+  The PMD will select the nearest value supported by the HW that does not
+  exceed the requested lro_timeout_usec value.
+  If this parameter is not specified, the PMD will use the smallest value
+  supported by the HW by default.
+
 Firmware configuration
 ~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst
index 6c382cb..d8676b6 100644
--- a/doc/guides/rel_notes/release_19_08.rst
+++ b/doc/guides/rel_notes/release_19_08.rst
@@ -117,7 +117,7 @@ New Features
   * Accelerate flows with count action creation and destroy.
   * Accelerate flows counter query.
  * Improved Tx datapath performance with HW offloads enabled.
-
+  * Added support for LRO.
 
 * **Updated Solarflare network PMD.**
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 03/28] net/mlx5: support LRO caps query using devx API
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 03/28] net/mlx5: support LRO caps query using devx API Matan Azrad
@ 2019-07-22  9:17   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:17 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 03/28] net/mlx5: support LRO caps query using devx API
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Update function mlx5_devx_cmd_query_hca_attr() to query HCA capabilities
> related to LRO.
> 
> Add relevant structs in drivers/net/mlx5/mlx5_prm.h.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 01/28] net/mlx5: remove redundant item from union
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 01/28] net/mlx5: remove redundant item from union Matan Azrad
@ 2019-07-22  9:17   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:17 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 01/28] net/mlx5: remove redundant item from union
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> A variable of type struct ibv_cq_ex is declared in 2 unions, but isn't used.
> This patch removes the 2 redundant declarations.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 09/28] net/mlx5: create advanced RxQ object using new API
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 09/28] net/mlx5: create advanced RxQ object using new API Matan Azrad
@ 2019-07-22  9:17   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:17 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 09/28] net/mlx5: create advanced RxQ object using new API
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Implement function mlx5_devx_cmd_create_rq() to create RQ object using
> DevX API.
> Add related structs in mlx5.h and mlx5_prm.h.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 06/28] net/mlx5: check conditions to enable LRO
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 06/28] net/mlx5: check conditions to enable LRO Matan Azrad
@ 2019-07-22  9:18   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:18 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 06/28] net/mlx5: check conditions to enable LRO
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Use DevX API to read device LRO capabilities.
> Check if LRO is supported and can be enabled.
> Check if MPRQ is supported and can be used.
> Enable MPRQ for LRO use if not enabled by user.
> Added note for mlx5_mprq_enabled(), to emphasize that LRO enables MPRQ.
> Disable CQE compression and CRC stripping if LRO is enabled.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 08/28] net/mlx5: update Tx queue create for LRO
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 08/28] net/mlx5: update Tx queue create for LRO Matan Azrad
@ 2019-07-22  9:18   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:18 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 08/28] net/mlx5: update Tx queue create for LRO
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Update function mlx5_txq_ibv_new(), query and store the TIS transport
> domain value.
> It is required later on Rx side when creating matching TIR.
> Add field in mlx5 data structure to store Transport Domain ID.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 04/28] net/mlx5: glue func for queue query using new API
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 04/28] net/mlx5: glue func for queue query using new API Matan Azrad
@ 2019-07-22  9:18   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:18 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 04/28] net/mlx5: glue func for queue query using new API
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Add function mlx5_glue_devx_qp_query().
> Add glue function pointer devx_qp_query to run it.
> Glue version updated to 19.08.0.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 05/28] net/mlx5: glue function for action using new API
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 05/28] net/mlx5: glue function for action " Matan Azrad
@ 2019-07-22  9:18   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:18 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 05/28] net/mlx5: glue function for action using new API
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Add compile option HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR, and
> matching dest_tir flag in device configuration structure.
> Add glue function pointer dv_create_flow_action_dest_devx_tir, and
> function mlx5_glue_dv_create_flow_action_dest_devx_tir(),
> to invoke API mlx5dv_dr_action_create_dest_devx_tir();
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 07/28] net/mlx5: support Tx interface query using new API
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 07/28] net/mlx5: support Tx interface query using new API Matan Azrad
@ 2019-07-22  9:19   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:19 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 07/28] net/mlx5: support Tx interface query using new API
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Implement function mlx5_devx_cmd_qp_tis_td_query(), to query QP TIS
> Transport Domain value.
> 
> Add related structs in mlx5_prm.h.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 13/28] net/mlx5: allocate door-bells using new API
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 13/28] net/mlx5: allocate door-bells " Matan Azrad
@ 2019-07-22  9:20   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:20 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 13/28] net/mlx5: allocate door-bells using new API
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> When using DevX API, memory for door-bell records should be allocated by
> PMD and registered using DevX API.
> 
> This patch implements the utility functions to support it:
> - Add struct mlx5_devx_dbr_page, containing door-bells page data.
> - Add list of struct mlx5_devx_dbr_page door-bell pages to device
>   private data.
> - Implement function mlx5_alloc_dbr_page() to allocate page for
>   door-bell records, and register it using DevX API.
> - Implement function mlx5_get_dbr(). to acquire a door-bell record
>   from the door-bells page, allocating a new page if needed.
> - Implement function mlx5_release_dbr() to release a door-bell
>   record that is no longer needed, freeing the containing page if
>   it becomes empty.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 10/28] net/mlx5: modify advanced RxQ object using new API
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 10/28] net/mlx5: modify " Matan Azrad
@ 2019-07-22  9:20   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:20 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 10/28] net/mlx5: modify advanced RxQ object using new API
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Implement function mlx5_devx_cmd_modify_rq() to modify RQ.
> Add related structs in mlx5.h and mlx5_prm.h.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 11/28] net/mlx5: create advanced Rx object using new API
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 11/28] net/mlx5: create advanced Rx " Matan Azrad
@ 2019-07-22  9:20   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:20 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 11/28] net/mlx5: create advanced Rx object using new API
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Implement function mlx5_devx_cmd_create_tir() to create TIR object using
> DevX API.
> Add related structs in mlx5.h and mlx5_prm.h.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 18/28] net/mlx5: store protection domain number on create
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 18/28] net/mlx5: store protection domain number on create Matan Azrad
@ 2019-07-22  9:21   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:21 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 18/28] net/mlx5: store protection domain number on create
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Function mlx5_alloc_shared_ibctx() allocates Protection Domain using verbs
> API, as part of shared IB device context.
> This patch adds reading and storing of pdn value from the created PD object,
> using DV API.
> The pdn value is required when creating WQ using DevX API.
> 
> This patch also updates function flow_dv_create_counter_stat_mem_mng()
> which uses the pdn value as well.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 20/28] net/mlx5: function to create Rx verbs work queue
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 20/28] net/mlx5: function to create Rx verbs work queue Matan Azrad
@ 2019-07-22  9:21   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:21 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 20/28] net/mlx5: function to create Rx verbs work queue
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Verbs WQ for RxQ is created inside function mlx5_rxq_obj_new().
> This patch moves the creation of verbs WQ to dedicated function
> mlx5_ibv_wq_new().
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 21/28] net/mlx5: create advanced RxQ using new API
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 21/28] net/mlx5: create advanced RxQ using new API Matan Azrad
@ 2019-07-22  9:21   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:21 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 21/28] net/mlx5: create advanced RxQ using new API
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Function mlx5_rxq_obj_new(), previously called mlx5_rxq_ibv_new(),
> supports creating Rx queue objects using verbs.
> This patch expands the relevant functions, to support creating verbs or DevX
> Rx queue objects:
> Function mlx5_rxq_obj_new() updated to create RQ object using DevX.
> Function mlx5_ind_table_obj_new() updated to create RQT object using
> DevX.
> Function mlx5_hrxq_new() updated to create TIR object using DevX.
> New utility functions added to perform specific operations:
> mlx5_devx_rq_new(), mlx5_devx_wq_attr_fill(),
> mlx5_devx_create_rq_attr_fill().
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 12/28] net/mlx5: create advanced RxQ table using new API
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 12/28] net/mlx5: create advanced RxQ table " Matan Azrad
@ 2019-07-22  9:21   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:21 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 12/28] net/mlx5: create advanced RxQ table using new API
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Implement function mlx5_devx_cmd_create_rqt() to create RQT object using
> DevX API.
> Add related structs in mlx5.h and mlx5_prm.h.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 23/28] net/mlx5: replace the external mbuf shared memory
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 23/28] net/mlx5: replace the external mbuf shared memory Matan Azrad
@ 2019-07-22  9:21   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:21 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 23/28] net/mlx5: replace the external mbuf shared memory
> 
> As an arrangement to the LRO support when a packet can consume all the
> stride memory, the external mbuf shared information cannot be anymore in
> the end of the stride, because the HW may write the packet data to all the
> stride memory.
> 
> Move the shared information memory from the stride to the control memory
> of the external mbuf.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>



^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 22/28] net/mlx5: support LRO with single RxQ object
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 22/28] net/mlx5: support LRO with single RxQ object Matan Azrad
@ 2019-07-22  9:22   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:22 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 22/28] net/mlx5: support LRO with single RxQ object
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Implement LRO support using a single RQ object per DPDK RxQ.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 15/28] net/mlx5: rename verbs indirection table to obj
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 15/28] net/mlx5: rename verbs indirection table to obj Matan Azrad
@ 2019-07-22  9:22   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:22 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 15/28] net/mlx5: rename verbs indirection table to obj
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Prepare for introducing of DevX RQT object.
> Rx indirection table object is currently created using verbs only.
> The next patches will add the option to create an RQT object using DevX.
> This patch renames ind_table_ibv to ind_table_obj wherever relevant, and
> adds the DevX items to relevant structs.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 17/28] net/mlx5: update queue state modify function
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 17/28] net/mlx5: update queue state modify function Matan Azrad
@ 2019-07-22  9:22   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:22 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 17/28] net/mlx5: update queue state modify function
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Function mlx5_queue_state_modify_primary() was implemented to handle
> state change for queues created using Verbs API.
> 
> This patch updates function mlx5_queue_state_modify_primary() to support
> state change of an RQ object created using the DevX API.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 14/28] net/mlx5: rename RxQ verbs to general RxQ object
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 14/28] net/mlx5: rename RxQ verbs to general RxQ object Matan Azrad
@ 2019-07-22  9:22   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:22 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 14/28] net/mlx5: rename RxQ verbs to general RxQ object
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Prepare for the introduction of the DevX RxQ object.
> RxQ object is currently created using verbs only.
> The next patches will add the option to create RxQ object using DevX.
> This patch renames rxq_ibv to rxq_obj wherever relevant, and adds the DevX
> items to relevant structs.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 16/28] net/mlx5: rename hash RxQ verbs to general
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 16/28] net/mlx5: rename hash RxQ verbs to general Matan Azrad
@ 2019-07-22  9:22   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:22 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 16/28] net/mlx5: rename hash RxQ verbs to general
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Prepare for introducing the use of the DevX TIR object.
> Hash Rx queue is currently created using verbs QP only.
> The next patches will add the option to create it with a TIR object using
> DevX.
> This patch renames hrxq_ibv to hrxq wherever relevant, and adds the DevX
> items to relevant structs.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 24/28] net/mlx5: update LRO fields in completion entry
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 24/28] net/mlx5: update LRO fields in completion entry Matan Azrad
@ 2019-07-22  9:23   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:23 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 24/28] net/mlx5: update LRO fields in completion entry
> 
> Update the CQE structure to include LRO fields.
> 
> Some reserved values were changed; hence the data-path code that used
> these reserved values was updated accordingly.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 19/28] net/mlx5: func to create Rx verbs completion queue
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 19/28] net/mlx5: func to create Rx verbs completion queue Matan Azrad
@ 2019-07-22  9:23   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:23 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 19/28] net/mlx5: func to create Rx verbs completion queue
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> The Verbs CQ for an RxQ is created inside function mlx5_rxq_obj_new().
> This patch moves the creation of the CQ to a dedicated function,
> mlx5_ibv_cq_new().
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 26/28] net/mlx5: zero the LRO mbuf headroom
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 26/28] net/mlx5: zero the LRO mbuf headroom Matan Azrad
@ 2019-07-22  9:23   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:23 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 26/28] net/mlx5: zero the LRO mbuf headroom
> 
> An LRO packet may consume all the stride memory, hence the PMD cannot
> guarantee head-room for the LRO mbuf.
> 
> The issue is the lack of HW support to write the packet at an offset from
> the stride start.
> 
> A new striding RQ feature may be added in CX6 DX to allow head-room and
> tail-room for the LRO strides.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
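A practical consequence for applications, sketched below with a
hypothetical helper built on the standard rte_mbuf API: with LRO enabled on
the port, received mbufs carry zero headroom, so code that prepends headers
must check before calling rte_pktmbuf_prepend().

#include <rte_mbuf.h>

/* Hypothetical guard: with LRO on, rte_pktmbuf_headroom() is 0 for Rx
 * mbufs, so prepending in place is impossible and the caller must fall
 * back to copying or chaining a new segment. */
static char *
try_prepend(struct rte_mbuf *m, uint16_t hdr_len)
{
	if (rte_pktmbuf_headroom(m) < hdr_len)
		return NULL; /* caller copies or chains instead */
	return rte_pktmbuf_prepend(m, hdr_len);
}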


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 27/28] net/mlx5: adjust the maximum LRO message size
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 27/28] net/mlx5: adjust the maximum LRO message size Matan Azrad
@ 2019-07-22  9:23   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:23 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 27/28] net/mlx5: adjust the maximum LRO message size
> 
> An LRO message is contained in the MPRQ strides.
> While the LRO message size cannot exceed 65280 bytes according to the
> PRM, the strides which contain it may be bigger than the maximum buffer
> size allowed in a DPDK mbuf - 0xFFFF.
> 
> Adjust the maximum LRO message size to avoid buffer length overflow.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
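To make the arithmetic concrete, here is a hedged sketch of the clamping
described above; the constant and function names are illustrative, not the
PMD's.

#include <stdint.h>
#include <rte_common.h>

#define LRO_MAX_MSG_SIZE 65280u	/* PRM limit on one LRO message (0xFF00). */

/* Clamp the LRO message size so the strides holding it never exceed what
 * an mbuf length field (uint16_t) can describe, rounding down to a whole
 * number of strides (stride_size is assumed non-zero). */
static inline uint32_t
lro_msg_size_adjust(uint32_t requested, uint32_t stride_size)
{
	uint32_t max_sz = RTE_MIN(LRO_MAX_MSG_SIZE, (uint32_t)UINT16_MAX);

	return RTE_MIN(requested, max_sz) / stride_size * stride_size;
}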


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 28/28] doc: update MLX5 doc and release notes with LRO
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 28/28] doc: update MLX5 doc and release notes with LRO Matan Azrad
@ 2019-07-22  9:23   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:23 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 28/28] doc: update MLX5 doc and release notes with LRO
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Add documentation of the LRO feature.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 02/28] net/mlx5: add LRO APIs and initial settings
  2019-07-22  9:12 ` [dpdk-dev] [PATCH 02/28] net/mlx5: add LRO APIs and initial settings Matan Azrad
@ 2019-07-22  9:25   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:25 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 02/28] net/mlx5: add LRO APIs and initial settings
> 
> From: Dekel Peled <dekelp@mellanox.com>
> 
> Add command-line argument to set LRO session timeout.
> Add LRO settings struct in PMD configuration struct.
> Add support of LRO offload in port configuration.
> Add macros and function to check if LRO is supported and enabled.
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 25/28] net/mlx5: handle LRO packets in Rx queue
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 25/28] net/mlx5: handle LRO packets in Rx queue Matan Azrad
@ 2019-07-22  9:26   ` Slava Ovsiienko
  0 siblings, 0 replies; 92+ messages in thread
From: Slava Ovsiienko @ 2019-07-22  9:26 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled

> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 22, 2019 12:13
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 25/28] net/mlx5: handle LRO packets in Rx queue
> 
> When LRO offload is configured on an Rx queue, the HW may coalesce TCP
> packets from the same TCP connection into a single packet.
> 
> In this case the SW should fix the relevant packet headers because the HW
> doesn't update them according to the newly created packet characteristics.
> 
> Add header update code to the MPRQ Rx burst function to support the LRO
> feature.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
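The kind of SW fix-up meant here can be pictured with a minimal
hypothetical sketch for the IPv4 case; the real PMD derives the values from
the CQE and also updates TCP fields, so treat this only as an illustration.

#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* After coalescing, the IPv4 total length no longer matches the merged
 * payload; rewrite it from the mbuf length and refresh the header
 * checksum. TCP ack/window updates are omitted for brevity. */
static void
lro_fix_ipv4_hdr(struct rte_mbuf *m, uint32_t l2_len)
{
	struct rte_ipv4_hdr *ipv4 = rte_pktmbuf_mtod_offset(m,
					struct rte_ipv4_hdr *, l2_len);

	ipv4->total_length = rte_cpu_to_be_16(m->pkt_len - l2_len);
	ipv4->hdr_checksum = 0;
	ipv4->hdr_checksum = rte_ipv4_cksum(ipv4);
}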


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (27 preceding siblings ...)
  2019-07-22  9:13 ` [dpdk-dev] [PATCH 28/28] doc: update MLX5 doc and release notes with LRO Matan Azrad
@ 2019-07-22 10:42 ` Raslan Darawsheh
  2019-07-22 12:48 ` Ferruh Yigit
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
  30 siblings, 0 replies; 92+ messages in thread
From: Raslan Darawsheh @ 2019-07-22 10:42 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh, Slava Ovsiienko
  Cc: dev, Dekel Peled

Hi,

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Matan Azrad
> Sent: Monday, July 22, 2019 12:13 PM
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO
> 
> Introduction:
> LRO (Large Receive Offload) is intended to reduce host CPU overhead when
> processing Rx TCP packets.
> LRO works by aggregating multiple incoming packets from a single stream
> into a larger buffer, before they are passed higher up the networking stack.
> Thus reducing the number of packets that have to be processed.
> 
> Use:
> MLX5 PMD will query the HCA capabilities on initialization to check if LRO is
> supported and can be used.
> LRO in MLX5 PMD is intended for use by applications using a relatively small
> number of flows.
> LRO support can be enabled only per port.
> In each LRO session, packets of the same flow will be coalesced until one of
> the following occur:
>   *   Buffer size limit is exceeded.
>   *   Session timeout is exceeded.
>   *   Packet from a different flow is received on the same queue.
> 
> When LRO session ends the coalesced packet is passed to the PMD, which
> will update the header fields before passing the packet to the application.
> For efficient memory utilization, the MPRQ mechanism is used.
> Support of Non-LRO flows will not be impacted.
> 
> Existing API:
> Offload capability DEV_RX_OFFLOAD_TCP_LRO will be used to indicate
> device supports LRO.
> testpmd command-line option "-enable-lro" will be used to request LRO
> feature enable on application start.
> testpmd rx_offload "tcp_lro" on or off will be used to request LRO feature
> enable or disable during application runtime.
> Offload flag PKT_RX_LRO will be used. This flag can be set in Rx mbuf to
> indicate this is a LRO coalesced packet.
> 
> New API:
> PMD configuration parameter lro_timeout_usec will be added.
> This parameter can be used by application to select LRO session timeout (in
> microseconds).
> If this value is not specified, the minimal value supported by device will be
> used.
> 
> Known limitations:
> mbuf head-room is zero for any packet if LRO is configured in the port.
> Keep CRC offload cannot be supported with LRO.
> CQE compression is not supported with LRO.
> 
> Dekel Peled (23):
>   net/mlx5: remove redundant item from union
>   net/mlx5: add LRO APIs and initial settings
>   net/mlx5: support LRO caps query using devx API
>   net/mlx5: glue func for queue query using new API
>   net/mlx5: glue function for action using new API
>   net/mlx5: check conditions to enable LRO
>   net/mlx5: support Tx interface query using new API
>   net/mlx5: update Tx queue create for LRO
>   net/mlx5: create advanced RxQ object using new API
>   net/mlx5: modify advanced RxQ object using new API
>   net/mlx5: create advanced Rx object using new API
>   net/mlx5: create advanced RxQ table using new API
>   net/mlx5: allocate door-bells using new API
>   net/mlx5: rename RxQ verbs to general RxQ object
>   net/mlx5: rename verbs indirection table to obj
>   net/mlx5: rename hash RxQ verbs to general
>   net/mlx5: update queue state modify function
>   net/mlx5: store protection domain number on create
>   net/mlx5: func to create Rx verbs completion queue
>   net/mlx5: function to create Rx verbs work queue
>   net/mlx5: create advanced RxQ using new API
>   net/mlx5: support LRO with single RxQ object
>   doc: update MLX5 doc and release notes with LRO
> 
> Matan Azrad (5):
>   net/mlx5: replace the external mbuf shared memory
>   net/mlx5: update LRO fields in completion entry
>   net/mlx5: handle LRO packets in Rx queue
>   net/mlx5: zero the LRO mbuf headroom
>   net/mlx5: adjust the maximum LRO message size
> 
>  doc/guides/nics/features/mlx5.ini      |    1 +
>  doc/guides/nics/mlx5.rst               |   14 +
>  doc/guides/rel_notes/release_19_08.rst |    2 +-
>  drivers/net/mlx5/Makefile              |    5 +
>  drivers/net/mlx5/meson.build           |    2 +
>  drivers/net/mlx5/mlx5.c                |  223 ++++++-
>  drivers/net/mlx5/mlx5.h                |  160 ++++-
>  drivers/net/mlx5/mlx5_devx_cmds.c      |  326 +++++++++
>  drivers/net/mlx5/mlx5_ethdev.c         |   14 +-
>  drivers/net/mlx5/mlx5_flow.h           |    6 +
>  drivers/net/mlx5/mlx5_flow_dv.c        |   28 +-
>  drivers/net/mlx5/mlx5_flow_verbs.c     |    3 +-
>  drivers/net/mlx5/mlx5_glue.c           |   33 +
>  drivers/net/mlx5/mlx5_glue.h           |    6 +-
>  drivers/net/mlx5/mlx5_prm.h            |  379 ++++++++++-
>  drivers/net/mlx5/mlx5_rxq.c            | 1135 ++++++++++++++++++++++-------
> ---
>  drivers/net/mlx5/mlx5_rxtx.c           |  167 ++++-
>  drivers/net/mlx5/mlx5_rxtx.h           |   80 ++-
>  drivers/net/mlx5/mlx5_rxtx_vec.h       |    6 +-
>  drivers/net/mlx5/mlx5_rxtx_vec_sse.h   |   16 +-
>  drivers/net/mlx5/mlx5_trigger.c        |   12 +-
>  drivers/net/mlx5/mlx5_txq.c            |   27 +-
>  drivers/net/mlx5/mlx5_vlan.c           |   32 +-
>  23 files changed, 2197 insertions(+), 480 deletions(-)
> 
> --
> 1.8.3.1


Series applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (28 preceding siblings ...)
  2019-07-22 10:42 ` [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Raslan Darawsheh
@ 2019-07-22 12:48 ` Ferruh Yigit
  2019-07-22 13:32   ` Matan Azrad
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
  30 siblings, 1 reply; 92+ messages in thread
From: Ferruh Yigit @ 2019-07-22 12:48 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

On 7/22/2019 10:12 AM, Matan Azrad wrote:
> Introduction:
> LRO (Large Receive Offload) is intended to reduce host CPU overhead when processing Rx TCP packets.
> LRO works by aggregating multiple incoming packets from a single stream into a larger buffer, before they are passed higher up the networking stack. Thus reducing the number of packets that have to be processed.
> 
> Use:
> MLX5 PMD will query the HCA capabilities on initialization to check if LRO is supported and can be used.
> LRO in MLX5 PMD is intended for use by applications using a relatively small number of flows.
> LRO support can be enabled only per port.
> In each LRO session, packets of the same flow will be coalesced until one of the following occur:
>   *   Buffer size limit is exceeded.
>   *   Session timeout is exceeded.
>   *   Packet from a different flow is received on the same queue.
> 
> When LRO session ends the coalesced packet is passed to the PMD, which will update the header fields before passing the packet to the application.
> For efficient memory utilization, the MPRQ mechanism is used.
> Support of Non-LRO flows will not be impacted.
> 
> Existing API:
> Offload capability DEV_RX_OFFLOAD_TCP_LRO will be used to indicate device supports LRO.
> testpmd command-line option "-enable-lro" will be used to request LRO feature enable on application start.
> testpmd rx_offload "tcp_lro" on or off will be used to request LRO feature enable or disable during application runtime.
> Offload flag PKT_RX_LRO will be used. This flag can be set in Rx mbuf to indicate this is a LRO coalesced packet.
> 
> New API:
> PMD configuration parameter lro_timeout_usec will be added.
> This parameter can be used by application to select LRO session timeout (in microseconds).
> If this value is not specified, the minimal value supported by device will be used.
> 
> Known limitations:
> mbuf head-room is zero for any packet if LRO is configured in the port.
> Keep CRC offload cannot be supported with LRO.
> CQE compression is not supported with LRO.
> 
> Dekel Peled (23):
>   net/mlx5: remove redundant item from union
>   net/mlx5: add LRO APIs and initial settings
>   net/mlx5: support LRO caps query using devx API
>   net/mlx5: glue func for queue query using new API
>   net/mlx5: glue function for action using new API
>   net/mlx5: check conditions to enable LRO
>   net/mlx5: support Tx interface query using new API
>   net/mlx5: update Tx queue create for LRO
>   net/mlx5: create advanced RxQ object using new API
>   net/mlx5: modify advanced RxQ object using new API
>   net/mlx5: create advanced Rx object using new API
>   net/mlx5: create advanced RxQ table using new API
>   net/mlx5: allocate door-bells using new API
>   net/mlx5: rename RxQ verbs to general RxQ object
>   net/mlx5: rename verbs indirection table to obj
>   net/mlx5: rename hash RxQ verbs to general
>   net/mlx5: update queue state modify function
>   net/mlx5: store protection domain number on create
>   net/mlx5: func to create Rx verbs completion queue
>   net/mlx5: function to create Rx verbs work queue
>   net/mlx5: create advanced RxQ using new API
>   net/mlx5: support LRO with single RxQ object
>   doc: update MLX5 doc and release notes with LRO
> 
> Matan Azrad (5):
>   net/mlx5: replace the external mbuf shared memory
>   net/mlx5: update LRO fields in completion entry
>   net/mlx5: handle LRO packets in Rx queue
>   net/mlx5: zero the LRO mbuf headroom
>   net/mlx5: adjust the maximum LRO message size

I am getting build errors on a patch-by-patch build; I didn't dig in to figure
out which exact patch fails, please investigate.

And this patchset, adding a new feature, was sent on the last day of rc2 and
merged the same day; are you really sure it has been reviewed well?

There are two groups of build errors, [1] and [2]; both are observed with
both gcc and clang.


[1]
.../drivers/net/mlx5/mlx5_rxq.c:2150:7: error: variable 'qp' is used
uninitialized whenever 'if' condition is true [-Werror,-Wsometimes-uninitialized]
                if (!tir) {
                    ^~~~
.../drivers/net/mlx5/mlx5_rxq.c:2191:6: note: uninitialized use occurs here
        if (qp)
            ^~
.../drivers/net/mlx5/mlx5_rxq.c:2150:3: note: remove the 'if' if its condition
is always false
                if (!tir) {
                ^~~~~~~~~~~
.../drivers/net/mlx5/mlx5_rxq.c:2046:19: note: initialize the variable 'qp' to
silence this warning
        struct ibv_qp *qp;
                         ^
                          = NULL
1 error generated.


[2]
.../drivers/net/mlx5/mlx5.c:1450:17: error: incomplete definition of type
'struct mlx5dv_devx_umem'
                if (page->umem->umem_id == umem_id)
                    ~~~~~~~~~~^
.../drivers/net/mlx5/mlx5_glue.h:64:8: note: forward declaration of 'struct
mlx5dv_devx_umem'
struct mlx5dv_devx_umem;
       ^
1 error generated.

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO
  2019-07-22 12:48 ` Ferruh Yigit
@ 2019-07-22 13:32   ` Matan Azrad
  0 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 13:32 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Slava Ovsiienko
  Cc: dev, Dekel Peled

Hi Ferruh,

From: Ferruh Yigit
> On 7/22/2019 10:12 AM, Matan Azrad wrote:
> > Introduction:
> > LRO (Large Receive Offload) is intended to reduce host CPU overhead
> when processing Rx TCP packets.
> > LRO works by aggregating multiple incoming packets from a single stream
> into a larger buffer, before they are passed higher up the networking stack.
> Thus reducing the number of packets that have to be processed.
> >
> > Use:
> > MLX5 PMD will query the HCA capabilities on initialization to check if LRO is
> supported and can be used.
> > LRO in MLX5 PMD is intended for use by applications using a relatively small
> number of flows.
> > LRO support can be enabled only per port.
> > In each LRO session, packets of the same flow will be coalesced until one of
> the following occur:
> >   *   Buffer size limit is exceeded.
> >   *   Session timeout is exceeded.
> >   *   Packet from a different flow is received on the same queue.
> >
> > When LRO session ends the coalesced packet is passed to the PMD, which
> will update the header fields before passing the packet to the application.
> > For efficient memory utilization, the MPRQ mechanism is used.
> > Support of Non-LRO flows will not be impacted.
> >
> > Existing API:
> > Offload capability DEV_RX_OFFLOAD_TCP_LRO will be used to indicate
> device supports LRO.
> > testpmd command-line option "-enable-lro" will be used to request LRO
> feature enable on application start.
> > testpmd rx_offload "tcp_lro" on or off will be used to request LRO feature
> enable or disable during application runtime.
> > Offload flag PKT_RX_LRO will be used. This flag can be set in Rx mbuf to
> indicate this is a LRO coalesced packet.
> >
> > New API:
> > PMD configuration parameter lro_timeout_usec will be added.
> > This parameter can be used by application to select LRO session timeout (in
> microseconds).
> > If this value is not specified, the minimal value supported by device will be
> used.
> >
> > Known limitations:
> > mbuf head-room is zero for any packet if LRO is configured in the port.
> > Keep CRC offload cannot be supported with LRO.
> > CQE compression is not supported with LRO.
> >
> > Dekel Peled (23):
> >   net/mlx5: remove redundant item from union
> >   net/mlx5: add LRO APIs and initial settings
> >   net/mlx5: support LRO caps query using devx API
> >   net/mlx5: glue func for queue query using new API
> >   net/mlx5: glue function for action using new API
> >   net/mlx5: check conditions to enable LRO
> >   net/mlx5: support Tx interface query using new API
> >   net/mlx5: update Tx queue create for LRO
> >   net/mlx5: create advanced RxQ object using new API
> >   net/mlx5: modify advanced RxQ object using new API
> >   net/mlx5: create advanced Rx object using new API
> >   net/mlx5: create advanced RxQ table using new API
> >   net/mlx5: allocate door-bells using new API
> >   net/mlx5: rename RxQ verbs to general RxQ object
> >   net/mlx5: rename verbs indirection table to obj
> >   net/mlx5: rename hash RxQ verbs to general
> >   net/mlx5: update queue state modify function
> >   net/mlx5: store protection domain number on create
> >   net/mlx5: func to create Rx verbs completion queue
> >   net/mlx5: function to create Rx verbs work queue
> >   net/mlx5: create advanced RxQ using new API
> >   net/mlx5: support LRO with single RxQ object
> >   doc: update MLX5 doc and release notes with LRO
> >
> > Matan Azrad (5):
> >   net/mlx5: replace the external mbuf shared memory
> >   net/mlx5: update LRO fields in completion entry
> >   net/mlx5: handle LRO packets in Rx queue
> >   net/mlx5: zero the LRO mbuf headroom
> >   net/mlx5: adjust the maximum LRO message size
> 
> I am getting build errors on a patch-by-patch build; I didn't dig in to
> figure out which exact patch fails, please investigate.
> 
> And this patchset, adding a new feature, was sent on the last day of rc2 and
> merged the same day; are you really sure it has been reviewed well?

Yes, most of the patches were reviewed by 2 reviewers, all the others by 1.
It has been in testing all week, so it is good enough for RC2.

There may be some fixes/enhancements in RC3.
 
> There are two groups of build errors, [1] and [2]; both are observed with
> both gcc and clang.
> 
> 
> [1]
> .../drivers/net/mlx5/mlx5_rxq.c:2150:7: error: variable 'qp' is used
> uninitialized whenever 'if' condition is true [-Werror,-Wsometimes-uninitialized]
>                 if (!tir) {
>                     ^~~~
> .../drivers/net/mlx5/mlx5_rxq.c:2191:6: note: uninitialized use occurs here
>         if (qp)
>             ^~
> .../drivers/net/mlx5/mlx5_rxq.c:2150:3: note: remove the 'if' if its condition
> is always false
>                 if (!tir) {
>                 ^~~~~~~~~~~
> .../drivers/net/mlx5/mlx5_rxq.c:2046:19: note: initialize the variable 'qp' to
> silence this warning
>         struct ibv_qp *qp;
>                          ^
>                           = NULL
> 1 error generated.
> 
> 
> [2]
> .../drivers/net/mlx5/mlx5.c:1450:17: error: incomplete definition of type
> 'struct mlx5dv_devx_umem'
>                 if (page->umem->umem_id == umem_id)
>                     ~~~~~~~~~~^
> .../drivers/net/mlx5/mlx5_glue.h:64:8: note: forward declaration of 'struct
> mlx5dv_devx_umem'
> struct mlx5dv_devx_umem;
>        ^
> 1 error generated.


Will address it now...

Thanks


^ permalink raw reply	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 00/28] net/mlx5: support LRO
  2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
                   ` (29 preceding siblings ...)
  2019-07-22 12:48 ` Ferruh Yigit
@ 2019-07-22 14:51 ` Matan Azrad
  2019-07-22 14:51   ` [dpdk-dev] [PATCH v2 01/28] net/mlx5: remove redundant item from union Matan Azrad
                     ` (28 more replies)
  30 siblings, 29 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:51 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

Introduction:
LRO (Large Receive Offload) is intended to reduce host CPU overhead when processing Rx TCP packets.
LRO works by aggregating multiple incoming packets from a single stream into a larger buffer, before they are passed higher up the networking stack. Thus reducing the number of packets that have to be processed.

Use:
MLX5 PMD will query the HCA capabilities on initialization to check if LRO is supported and can be used.
LRO in MLX5 PMD is intended for use by applications using a relatively small number of flows.
LRO support can be enabled only per port.
In each LRO session, packets of the same flow will be coalesced until one of the following occur:
  *   Buffer size limit is exceeded.
  *   Session timeout is exceeded.
  *   Packet from a different flow is received on the same queue.

When LRO session ends the coalesced packet is passed to the PMD, which will update the header fields before passing the packet to the application.
For efficient memory utilization, the MPRQ mechanism is used.
Support of Non-LRO flows will not be impacted.

Existing API:
Offload capability DEV_RX_OFFLOAD_TCP_LRO will be used to indicate device supports LRO.
testpmd command-line option "-enable-lro" will be used to request LRO feature enable on application start.
testpmd rx_offload "tcp_lro" on or off will be used to request LRO feature enable or disable during application runtime.
Offload flag PKT_RX_LRO will be used. This flag can be set in Rx mbuf to indicate this is a LRO coalesced packet.

New API:
PMD configuration parameter lro_timeout_usec will be added.
This parameter can be used by application to select LRO session timeout (in microseconds).
If this value is not specified, the minimal value supported by device will be used.

Known limitations:
mbuf head-room is zero for any packet if LRO is configured in the port.
Keep CRC offload cannot be supported with LRO.
CQE compression is not supported with LRO.
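For context, a minimal application-side sketch of enabling this offload on a
port; the helper is hypothetical and the queue counts are placeholders.

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

static int
port_enable_lro(uint16_t port_id)
{
	struct rte_eth_conf conf;
	struct rte_eth_dev_info info;

	memset(&conf, 0, sizeof(conf));
	rte_eth_dev_info_get(port_id, &info);
	if (!(info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO))
		return -ENOTSUP;
	conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
	/* 1 Rx / 1 Tx queue just for illustration. */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

The session timeout could then be tuned through the new devarg, e.g.
"-w 0000:03:00.0,lro_timeout_usec=32" (PCI address and value illustrative).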

v2:
Fix small compilation issues detected in per-commit builds (found by Ferruh).

Dekel Peled (23):
  net/mlx5: remove redundant item from union
  net/mlx5: add LRO APIs and initial settings
  net/mlx5: support LRO caps query using devx API
  net/mlx5: glue func for queue query using new API
  net/mlx5: glue function for action using new API
  net/mlx5: check conditions to enable LRO
  net/mlx5: support Tx interface query using new API
  net/mlx5: update Tx queue create for LRO
  net/mlx5: create advanced RxQ object using new API
  net/mlx5: modify advanced RxQ object using new API
  net/mlx5: create advanced Rx object using new API
  net/mlx5: create advanced RxQ table using new API
  net/mlx5: allocate door-bells using new API
  net/mlx5: rename RxQ verbs to general RxQ object
  net/mlx5: rename verbs indirection table to obj
  net/mlx5: rename hash RxQ verbs to general
  net/mlx5: update queue state modify function
  net/mlx5: store protection domain number on create
  net/mlx5: func to create Rx verbs completion queue
  net/mlx5: function to create Rx verbs work queue
  net/mlx5: create advanced RxQ using new API
  net/mlx5: support LRO with single RxQ object
  doc: update MLX5 doc and release notes with LRO

Matan Azrad (5):
  net/mlx5: replace the external mbuf shared memory
  net/mlx5: update LRO fields in completion entry
  net/mlx5: handle LRO packets in Rx queue
  net/mlx5: zero the LRO mbuf headroom
  net/mlx5: adjust the maximum LRO message size

 doc/guides/nics/features/mlx5.ini      |    1 +
 doc/guides/nics/mlx5.rst               |   14 +
 doc/guides/rel_notes/release_19_08.rst |    2 +-
 drivers/net/mlx5/Makefile              |    5 +
 drivers/net/mlx5/meson.build           |    2 +
 drivers/net/mlx5/mlx5.c                |  223 ++++++-
 drivers/net/mlx5/mlx5.h                |  160 ++++-
 drivers/net/mlx5/mlx5_devx_cmds.c      |  326 +++++++++
 drivers/net/mlx5/mlx5_ethdev.c         |   14 +-
 drivers/net/mlx5/mlx5_flow.h           |    6 +
 drivers/net/mlx5/mlx5_flow_dv.c        |   28 +-
 drivers/net/mlx5/mlx5_flow_verbs.c     |    3 +-
 drivers/net/mlx5/mlx5_glue.c           |   33 +
 drivers/net/mlx5/mlx5_glue.h           |    6 +-
 drivers/net/mlx5/mlx5_prm.h            |  379 ++++++++++-
 drivers/net/mlx5/mlx5_rxq.c            | 1132 ++++++++++++++++++++++----------
 drivers/net/mlx5/mlx5_rxtx.c           |  167 ++++-
 drivers/net/mlx5/mlx5_rxtx.h           |   80 ++-
 drivers/net/mlx5/mlx5_rxtx_vec.h       |    6 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h   |   16 +-
 drivers/net/mlx5/mlx5_trigger.c        |   12 +-
 drivers/net/mlx5/mlx5_txq.c            |   27 +-
 drivers/net/mlx5/mlx5_vlan.c           |   32 +-
 23 files changed, 2194 insertions(+), 480 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 01/28] net/mlx5: remove redundant item from union
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
@ 2019-07-22 14:51   ` Matan Azrad
  2019-07-23 10:53     ` Ferruh Yigit
  2019-07-22 14:51   ` [dpdk-dev] [PATCH v2 02/28] net/mlx5: add LRO APIs and initial settings Matan Azrad
                     ` (27 subsequent siblings)
  28 siblings, 1 reply; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:51 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

A variable of type struct ibv_cq_ex is declared in 2 unions, but
isn't used.
This patch removes the 2 redundant declarations.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 1 -
 drivers/net/mlx5/mlx5_txq.c | 1 -
 2 files changed, 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 39b8b7a..0535ce3 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -839,7 +839,6 @@ struct mlx5_rxq_ibv *
 			struct mlx5dv_wq_init_attr mlx5;
 #endif
 		} wq;
-		struct ibv_cq_ex cq_attr;
 	} attr;
 	unsigned int cqe_n;
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 2f3aa5b..dbad361 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -388,7 +388,6 @@ struct mlx5_txq_ibv *
 		struct ibv_qp_init_attr_ex init;
 		struct ibv_cq_init_attr_ex cq;
 		struct ibv_qp_attr mod;
-		struct ibv_cq_ex cq_attr;
 	} attr;
 	unsigned int cqe_n;
 	struct mlx5dv_qp qp = { .comp_mask = MLX5DV_QP_MASK_UAR_MMAP_OFFSET };
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 02/28] net/mlx5: add LRO APIs and initial settings
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
  2019-07-22 14:51   ` [dpdk-dev] [PATCH v2 01/28] net/mlx5: remove redundant item from union Matan Azrad
@ 2019-07-22 14:51   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 03/28] net/mlx5: support LRO caps query using devx API Matan Azrad
                     ` (26 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:51 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Add command-line argument to set LRO session timeout.
Add LRO settings struct in PMD configuration struct.
Add support of LRO offload in port configuration.
Add macros and function to check if LRO is supported and enabled.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.c        |  6 ++++++
 drivers/net/mlx5/mlx5.h        | 16 ++++++++++++++++
 drivers/net/mlx5/mlx5_ethdev.c |  2 +-
 drivers/net/mlx5/mlx5_rxq.c    | 22 +++++++++++++++++++++-
 drivers/net/mlx5/mlx5_rxtx.h   |  3 ++-
 5 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index aff1345..365246b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -138,6 +138,9 @@
 /* Device parameter to configure the maximum number of dump files per queue. */
 #define MLX5_MAX_DUMP_FILES_NUM "max_dump_files_num"
 
+/* Configure timeout of LRO session (in microseconds). */
+#define MLX5_LRO_TIMEOUT_USEC "lro_timeout_usec"
+
 #ifndef HAVE_IBV_MLX5_MOD_MPW
 #define MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED (1 << 2)
 #define MLX5DV_CONTEXT_FLAGS_ENHANCED_MPW (1 << 3)
@@ -1052,6 +1055,8 @@ struct mlx5_dev_spawn_data {
 		config->mr_ext_memseg_en = !!tmp;
 	} else if (strcmp(MLX5_MAX_DUMP_FILES_NUM, key) == 0) {
 		config->max_dump_files_num = tmp;
+	} else if (strcmp(MLX5_LRO_TIMEOUT_USEC, key) == 0) {
+		config->lro.timeout = tmp;
 	} else {
 		DRV_LOG(WARNING, "%s: unknown parameter", key);
 		rte_errno = EINVAL;
@@ -1100,6 +1105,7 @@ struct mlx5_dev_spawn_data {
 		MLX5_MR_EXT_MEMSEG_EN,
 		MLX5_REPRESENTOR,
 		MLX5_MAX_DUMP_FILES_NUM,
+		MLX5_LRO_TIMEOUT_USEC,
 		NULL,
 	};
 	struct rte_kvargs *kvlist;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2f2ed57..e494572 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -183,6 +183,21 @@ struct mlx5_hca_attr {
 /* Default PMD specific parameter value. */
 #define MLX5_ARG_UNSET (-1)
 
+#define MLX5_LRO_SUPPORTED(dev) \
+	(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
+
+#define MLX5_LRO_ENABLED(dev) \
+	((dev)->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+
+#define MLX5_FLOW_IPV4_LRO	(1 << 0)
+#define MLX5_FLOW_IPV6_LRO	(1 << 1)
+
+/* LRO configurations structure. */
+struct mlx5_lro_config {
+	uint32_t supported:1; /* Whether LRO is supported. */
+	uint32_t timeout; /* User configuration. */
+};
+
 /*
  * Device configuration structure.
  *
@@ -233,6 +248,7 @@ struct mlx5_dev_config {
 	int txq_inline_max; /* Max packet size for inlining with SEND. */
 	int txq_inline_mpw; /* Max packet size for inlining with eMPW. */
 	struct mlx5_hca_attr hca_attr; /* HCA attributes. */
+	struct mlx5_lro_config lro; /* LRO configuration. */
 };
 
 /**
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 9629cfb..cf50e6e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -565,7 +565,7 @@ struct ethtool_link_settings {
 	info->max_tx_queues = max;
 	info->max_mac_addrs = MLX5_MAX_UC_MAC_ADDRESSES;
 	info->rx_queue_offload_capa = mlx5_get_rx_queue_offloads(dev);
-	info->rx_offload_capa = (mlx5_get_rx_port_offloads() |
+	info->rx_offload_capa = (mlx5_get_rx_port_offloads(dev) |
 				 info->rx_queue_offload_capa);
 	info->tx_offload_capa = mlx5_get_tx_port_offloads(dev);
 	info->if_index = mlx5_ifindex(dev);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 0535ce3..e68de50 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -124,6 +124,21 @@
 }
 
 /**
+ * Check whether LRO is supported and enabled for the device.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   0 if disabled, 1 if enabled.
+ */
+inline int
+mlx5_lro_on(struct rte_eth_dev *dev)
+{
+	return (MLX5_LRO_SUPPORTED(dev) && MLX5_LRO_ENABLED(dev));
+}
+
+/**
  * Allocate RX queue elements for Multi-Packet RQ.
  *
  * @param rxq_ctrl
@@ -386,14 +401,19 @@
 /**
  * Returns the per-port supported offloads.
  *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
  * @return
  *   Supported Rx offloads.
  */
 uint64_t
-mlx5_get_rx_port_offloads(void)
+mlx5_get_rx_port_offloads(struct rte_eth_dev *dev)
 {
 	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
 
+	if (MLX5_LRO_SUPPORTED(dev))
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
 	return offloads;
 }
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index bc8a61a..2f688ac 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -323,8 +323,9 @@ struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
 int mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev);
 struct mlx5_hrxq *mlx5_hrxq_drop_new(struct rte_eth_dev *dev);
 void mlx5_hrxq_drop_release(struct rte_eth_dev *dev);
-uint64_t mlx5_get_rx_port_offloads(void);
+uint64_t mlx5_get_rx_port_offloads(struct rte_eth_dev *dev);
 uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
+int mlx5_lro_on(struct rte_eth_dev *dev);
 
 /* mlx5_txq.c */
 
-- 
1.8.3.1
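The devarg plumbing in the diff above follows the usual rte_kvargs pattern;
here is a self-contained sketch with illustrative names (not the PMD's own
helpers).

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <rte_kvargs.h>

/* Handler invoked once per matching "key=value" pair. */
static int
timeout_arg_cb(const char *key, const char *val, void *opaque)
{
	uint32_t *timeout = opaque;

	if (strcmp(key, "lro_timeout_usec") == 0)
		*timeout = (uint32_t)strtoul(val, NULL, 0);
	return 0;
}

/* Parse a devargs string such as "lro_timeout_usec=32". */
static int
parse_lro_devargs(const char *args, uint32_t *timeout)
{
	static const char *const valid[] = { "lro_timeout_usec", NULL };
	struct rte_kvargs *kvlist = rte_kvargs_parse(args, valid);

	if (kvlist == NULL)
		return -EINVAL;
	rte_kvargs_process(kvlist, "lro_timeout_usec",
			   timeout_arg_cb, timeout);
	rte_kvargs_free(kvlist);
	return 0;
}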


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 03/28] net/mlx5: support LRO caps query using devx API
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
  2019-07-22 14:51   ` [dpdk-dev] [PATCH v2 01/28] net/mlx5: remove redundant item from union Matan Azrad
  2019-07-22 14:51   ` [dpdk-dev] [PATCH v2 02/28] net/mlx5: add LRO APIs and initial settings Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 04/28] net/mlx5: glue func for queue query using new API Matan Azrad
                     ` (25 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Update function mlx5_devx_cmd_query_hca_attr() to query HCA
capabilities related to LRO.

Add relevant structs in drivers/net/mlx5/mlx5_prm.h.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           |  8 ++++++
 drivers/net/mlx5/mlx5_devx_cmds.c | 14 ++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 58 ++++++++++++++++++++-------------------
 3 files changed, 52 insertions(+), 28 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e494572..738c55b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -165,6 +165,9 @@ struct mlx5_devx_mkey_attr {
 	uint32_t pd;
 };
 
+/* HCA supports this number of time periods for LRO. */
+#define MLX5_LRO_NUM_SUPP_PERIODS 4
+
 /* HCA attributes. */
 struct mlx5_hca_attr {
 	uint32_t eswitch_manager:1;
@@ -175,6 +178,11 @@ struct mlx5_hca_attr {
 	uint32_t wqe_vlan_insert:1;
 	uint32_t wqe_inline_mode:2;
 	uint32_t vport_inline_mode:3;
+	uint32_t lro_cap:1;
+	uint32_t tunnel_lro_gre:1;
+	uint32_t tunnel_lro_vxlan:1;
+	uint32_t lro_max_msg_sz_mode:2;
+	uint32_t lro_timer_supported_periods[MLX5_LRO_NUM_SUPP_PERIODS];
 };
 
 /* Flow list . */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index 18f8ab6..1cba00f 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -360,6 +360,20 @@ struct mlx5_devx_obj *
 	hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
 	attr->wqe_vlan_insert = MLX5_GET(per_protocol_networking_offload_caps,
 					 hcattr, wqe_vlan_insert);
+	attr->lro_cap = MLX5_GET(per_protocol_networking_offload_caps, hcattr,
+				 lro_cap);
+	attr->tunnel_lro_gre = MLX5_GET(per_protocol_networking_offload_caps,
+					hcattr, tunnel_lro_gre);
+	attr->tunnel_lro_vxlan = MLX5_GET(per_protocol_networking_offload_caps,
+					  hcattr, tunnel_lro_vxlan);
+	attr->lro_max_msg_sz_mode = MLX5_GET
+					(per_protocol_networking_offload_caps,
+					 hcattr, lro_max_msg_sz_mode);
+	for (int i = 0 ; i < MLX5_LRO_NUM_SUPP_PERIODS ; i++) {
+		attr->lro_timer_supported_periods[i] =
+			MLX5_GET(per_protocol_networking_offload_caps, hcattr,
+				 lro_timer_supported_periods[i]);
+	}
 	attr->wqe_inline_mode = MLX5_GET(per_protocol_networking_offload_caps,
 					 hcattr, wqe_inline_mode);
 	if (attr->wqe_inline_mode != MLX5_CAP_INLINE_MODE_VPORT_CONTEXT)
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 3c2b3d8..4f20dea 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -1084,16 +1084,42 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8 reserved_at_61f[0x1e1];
 };
 
+struct mlx5_ifc_qos_cap_bits {
+	u8 packet_pacing[0x1];
+	u8 esw_scheduling[0x1];
+	u8 esw_bw_share[0x1];
+	u8 esw_rate_limit[0x1];
+	u8 reserved_at_4[0x1];
+	u8 packet_pacing_burst_bound[0x1];
+	u8 packet_pacing_typical_size[0x1];
+	u8 flow_meter_srtcm[0x1];
+	u8 reserved_at_8[0x8];
+	u8 log_max_flow_meter[0x8];
+	u8 flow_meter_reg_id[0x8];
+	u8 reserved_at_25[0x20];
+	u8 packet_pacing_max_rate[0x20];
+	u8 packet_pacing_min_rate[0x20];
+	u8 reserved_at_80[0x10];
+	u8 packet_pacing_rate_table_size[0x10];
+	u8 esw_element_type[0x10];
+	u8 esw_tsar_type[0x10];
+	u8 reserved_at_c0[0x10];
+	u8 max_qos_para_vport[0x10];
+	u8 max_tsar_bw_share[0x20];
+	u8 reserved_at_100[0x6e8];
+};
+
 struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
 	u8 csum_cap[0x1];
 	u8 vlan_cap[0x1];
 	u8 lro_cap[0x1];
 	u8 lro_psh_flag[0x1];
 	u8 lro_time_stamp[0x1];
-	u8 reserved_at_5[0x2];
+	u8 lro_max_msg_sz_mode[0x2];
 	u8 wqe_vlan_insert[0x1];
 	u8 self_lb_en_modifiable[0x1];
-	u8 reserved_at_9[0x2];
+	u8 self_lb_mc[0x1];
+	u8 self_lb_uc[0x1];
 	u8 max_lso_cap[0x5];
 	u8 multi_pkt_send_wqe[0x2];
 	u8 wqe_inline_mode[0x2];
@@ -1102,7 +1128,8 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
 	u8 scatter_fcs[0x1];
 	u8 enhanced_multi_pkt_send_wqe[0x1];
 	u8 tunnel_lso_const_out_ip_id[0x1];
-	u8 reserved_at_1c[0x2];
+	u8 tunnel_lro_gre[0x1];
+	u8 tunnel_lro_vxlan[0x1];
 	u8 tunnel_stateless_gre[0x1];
 	u8 tunnel_stateless_vxlan[0x1];
 	u8 swp[0x1];
@@ -1120,31 +1147,6 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
 	u8 reserved_at_200[0x600];
 };
 
-struct mlx5_ifc_qos_cap_bits {
-	u8 packet_pacing[0x1];
-	u8 esw_scheduling[0x1];
-	u8 esw_bw_share[0x1];
-	u8 esw_rate_limit[0x1];
-	u8 reserved_at_4[0x1];
-	u8 packet_pacing_burst_bound[0x1];
-	u8 packet_pacing_typical_size[0x1];
-	u8 flow_meter_srtcm[0x1];
-	u8 reserved_at_8[0x8];
-	u8 log_max_flow_meter[0x8];
-	u8 flow_meter_reg_id[0x8];
-	u8 reserved_at_25[0x20];
-	u8 packet_pacing_max_rate[0x20];
-	u8 packet_pacing_min_rate[0x20];
-	u8 reserved_at_80[0x10];
-	u8 packet_pacing_rate_table_size[0x10];
-	u8 esw_element_type[0x10];
-	u8 esw_tsar_type[0x10];
-	u8 reserved_at_c0[0x10];
-	u8 max_qos_para_vport[0x10];
-	u8 max_tsar_bw_share[0x20];
-	u8 reserved_at_100[0x6e8];
-};
-
 union mlx5_ifc_hca_cap_union_bits {
 	struct mlx5_ifc_cmd_hca_cap_bits cmd_hca_cap;
 	struct mlx5_ifc_per_protocol_networking_offload_caps_bits
-- 
1.8.3.1
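A reading aid for the mlx5_ifc_* layouts above: each "u8 name[0xN]" member
declares an N-bit field of the PRM structure (the array sizes are bit
counts, not bytes; u8 is typedef'd in mlx5_prm.h), and MLX5_GET() extracts
such a field from the big-endian capability buffer. A hypothetical layout
for illustration:

/* Hypothetical PRM-style layout: a 1-bit flag, 7 reserved bits, then a
 * 24-bit value; the numbers in reserved field names are bit offsets. */
struct mlx5_ifc_example_bits {
	u8 some_flag[0x1];
	u8 reserved_at_1[0x7];
	u8 some_value[0x18];
};

/* Given a capability buffer 'hcattr',
 *   flag = MLX5_GET(example, hcattr, some_flag);
 * would extract the single bit, including the endian conversion. */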


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 04/28] net/mlx5: glue func for queue query using new API
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (2 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 03/28] net/mlx5: support LRO caps query using devx API Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 05/28] net/mlx5: glue function for action " Matan Azrad
                     ` (24 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Add function mlx5_glue_devx_qp_query().
Add glue function pointer devx_qp_query to run it.
Glue version updated to 19.08.0.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_glue.c | 19 +++++++++++++++++++
 drivers/net/mlx5/mlx5_glue.h |  3 +++
 2 files changed, 22 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_glue.c b/drivers/net/mlx5/mlx5_glue.c
index 942f89d..05474a0 100644
--- a/drivers/net/mlx5/mlx5_glue.c
+++ b/drivers/net/mlx5/mlx5_glue.c
@@ -934,6 +934,24 @@
 #endif
 }
 
+static int
+mlx5_glue_devx_qp_query(struct ibv_qp *qp,
+			const void *in, size_t inlen,
+			void *out, size_t outlen)
+{
+#ifdef HAVE_IBV_DEVX_ASYNC
+	return mlx5dv_devx_qp_query(qp, in, inlen, out, outlen);
+#else
+	(void)qp;
+	(void)in;
+	(void)inlen;
+	(void)out;
+	(void)outlen;
+	errno = ENOTSUP;
+	return errno;
+#endif
+}
+
 alignas(RTE_CACHE_LINE_SIZE)
 const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue){
 	.version = MLX5_GLUE_VERSION,
@@ -1021,4 +1039,5 @@
 	.devx_get_async_cmd_comp = mlx5_glue_devx_get_async_cmd_comp,
 	.devx_umem_reg = mlx5_glue_devx_umem_reg,
 	.devx_umem_dereg = mlx5_glue_devx_umem_dereg,
+	.devx_qp_query = mlx5_glue_devx_qp_query,
 };
diff --git a/drivers/net/mlx5/mlx5_glue.h b/drivers/net/mlx5/mlx5_glue.h
index 9facdb9..d5c7523 100644
--- a/drivers/net/mlx5/mlx5_glue.h
+++ b/drivers/net/mlx5/mlx5_glue.h
@@ -229,6 +229,9 @@ struct mlx5_glue {
 						  void *addr, size_t size,
 						  uint32_t access);
 	int (*devx_umem_dereg)(struct mlx5dv_devx_umem *dv_devx_umem);
+	int (*devx_qp_query)(struct ibv_qp *qp,
+			     const void *in, size_t inlen,
+			     void *out, size_t outlen);
 };
 
 const struct mlx5_glue *mlx5_glue;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 05/28] net/mlx5: glue function for action using new API
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (3 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 04/28] net/mlx5: glue func for queue query using new API Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 06/28] net/mlx5: check conditions to enable LRO Matan Azrad
                     ` (23 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Add compile option HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR, and matching
dest_tir flag in device configuration structure.
Add glue function pointer dv_create_flow_action_dest_devx_tir, and
function mlx5_glue_dv_create_flow_action_dest_devx_tir(),
to invoke the API mlx5dv_dr_action_create_dest_devx_tir().

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/Makefile    |  5 +++++
 drivers/net/mlx5/meson.build |  2 ++
 drivers/net/mlx5/mlx5.c      |  3 +++
 drivers/net/mlx5/mlx5.h      |  1 +
 drivers/net/mlx5/mlx5_glue.c | 14 ++++++++++++++
 drivers/net/mlx5/mlx5_glue.h |  1 +
 6 files changed, 26 insertions(+)

diff --git a/drivers/net/mlx5/Makefile b/drivers/net/mlx5/Makefile
index 76d40b1..dbb2a4e 100644
--- a/drivers/net/mlx5/Makefile
+++ b/drivers/net/mlx5/Makefile
@@ -178,6 +178,11 @@ mlx5_autoconf.h.new: $(RTE_SDK)/buildtools/auto-config-h.sh
 		func mlx5dv_devx_obj_query_async \
 		$(AUTOCONF_OUTPUT)
 	$Q sh -- '$<' '$@' \
+		HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR \
+		infiniband/mlx5dv.h \
+		func mlx5dv_dr_action_create_dest_devx_tir \
+		$(AUTOCONF_OUTPUT)
+	$Q sh -- '$<' '$@' \
 		HAVE_ETHTOOL_LINK_MODE_25G \
 		/usr/include/linux/ethtool.h \
 		enum ETHTOOL_LINK_MODE_25000baseCR_Full_BIT \
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index ed42641..62b41ca 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -124,6 +124,8 @@ if build
 		'MLX5DV_FLOW_ACTION_COUNTERS_DEVX' ],
 		[ 'HAVE_IBV_DEVX_ASYNC', 'infiniband/mlx5dv.h',
 		'mlx5dv_devx_obj_query_async' ],
+		[ 'HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR', 'infiniband/mlx5dv.h',
+		'mlx5dv_dr_action_create_dest_devx_tir' ],
 		[ 'HAVE_MLX5DV_DR', 'infiniband/mlx5dv.h',
 		'MLX5DV_DR_DOMAIN_TYPE_NIC_RX' ],
 		[ 'HAVE_MLX5DV_DR_ESWITCH', 'infiniband/mlx5dv.h',
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 365246b..3209c3c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1423,6 +1423,9 @@ struct mlx5_dev_spawn_data {
 	if (!sh)
 		return NULL;
 	config.devx = sh->devx;
+#ifdef HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR
+	config.dest_tir = 1;
+#endif
 #ifdef HAVE_IBV_MLX5_MOD_SWP
 	dv_attr.comp_mask |= MLX5DV_CONTEXT_MASK_SWP;
 #endif
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 738c55b..7041bbc 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -236,6 +236,7 @@ struct mlx5_dev_config {
 	unsigned int dv_flow_en:1; /* Enable DV flow. */
 	unsigned int swp:1; /* Tx generic tunnel checksum and TSO offload. */
 	unsigned int devx:1; /* Whether devx interface is available or not. */
+	unsigned int dest_tir:1; /* Whether advanced DR API is available. */
 	struct {
 		unsigned int enabled:1; /* Whether MPRQ is enabled. */
 		unsigned int stride_num_n; /* Number of strides. */
diff --git a/drivers/net/mlx5/mlx5_glue.c b/drivers/net/mlx5/mlx5_glue.c
index 05474a0..50c369a 100644
--- a/drivers/net/mlx5/mlx5_glue.c
+++ b/drivers/net/mlx5/mlx5_glue.c
@@ -628,6 +628,18 @@
 }
 
 static void *
+mlx5_glue_dv_create_flow_action_dest_devx_tir(void *tir)
+{
+#ifdef HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR
+	return mlx5dv_dr_action_create_dest_devx_tir(tir);
+#else
+	(void)tir;
+	errno = ENOTSUP;
+	return NULL;
+#endif
+}
+
+static void *
 mlx5_glue_dv_create_flow_action_modify_header
 					(struct ibv_context *ctx,
 					 enum mlx5dv_flow_table_type ft_type,
@@ -1020,6 +1032,8 @@
 		mlx5_glue_dv_create_flow_action_counter,
 	.dv_create_flow_action_dest_ibv_qp =
 		mlx5_glue_dv_create_flow_action_dest_ibv_qp,
+	.dv_create_flow_action_dest_devx_tir =
+		mlx5_glue_dv_create_flow_action_dest_devx_tir,
 	.dv_create_flow_action_modify_header =
 		mlx5_glue_dv_create_flow_action_modify_header,
 	.dv_create_flow_action_packet_reformat =
diff --git a/drivers/net/mlx5/mlx5_glue.h b/drivers/net/mlx5/mlx5_glue.h
index d5c7523..f8e2b9a 100644
--- a/drivers/net/mlx5/mlx5_glue.h
+++ b/drivers/net/mlx5/mlx5_glue.h
@@ -187,6 +187,7 @@ struct mlx5_glue {
 			  size_t num_actions, void *actions[]);
 	void *(*dv_create_flow_action_counter)(void *obj, uint32_t  offset);
 	void *(*dv_create_flow_action_dest_ibv_qp)(void *qp);
+	void *(*dv_create_flow_action_dest_devx_tir)(void *tir);
 	void *(*dv_create_flow_action_modify_header)
 		(struct ibv_context *ctx, enum mlx5dv_flow_table_type ft_type,
 		 void *domain, uint64_t flags, size_t actions_sz,
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 06/28] net/mlx5: check conditions to enable LRO
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (4 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 05/28] net/mlx5: glue function for action " Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 07/28] net/mlx5: support Tx interface query using new API Matan Azrad
                     ` (22 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Use DevX API to read device LRO capabilities.
Check if LRO is supported and can be enabled.
Check if MPRQ is supported and can be used.
Enable MPRQ for LRO use if not enabled by user.
Add a note to mlx5_mprq_enabled() to emphasize that LRO
enables MPRQ.
Disable CQE compression and CRC stripping if LRO is enabled.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.c        | 49 +++++++++++++++++++++++++++---------------
 drivers/net/mlx5/mlx5_ethdev.c | 12 +++++++++++
 drivers/net/mlx5/mlx5_rxq.c    | 14 +++++++++++-
 3 files changed, 57 insertions(+), 18 deletions(-)
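
For context, the checks in this patch gate the DEV_RX_OFFLOAD_TCP_LRO
flag that an application requests through the standard ethdev API. A
minimal application-side sketch; port_enable_lro() is a hypothetical
helper, not part of this patch, and error handling is simplified:

  #include <errno.h>
  #include <rte_ethdev.h>

  static int
  port_enable_lro(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
  {
      struct rte_eth_dev_info dev_info;
      struct rte_eth_conf conf = { 0 };

      rte_eth_dev_info_get(port_id, &dev_info);
      if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO))
          return -ENOTSUP; /* PMD did not advertise LRO. */
      conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
      return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
  }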

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 3209c3c..6dc2792 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1683,6 +1683,38 @@ struct mlx5_dev_spawn_data {
 	} else if (config.cqe_pad) {
 		DRV_LOG(INFO, "Rx CQE padding is enabled");
 	}
+	if (config.devx) {
+		priv->counter_fallback = 0;
+		err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config.hca_attr);
+		if (err) {
+			err = -err;
+			goto error;
+		}
+		if (!config.hca_attr.flow_counters_dump)
+			priv->counter_fallback = 1;
+#ifndef HAVE_IBV_DEVX_ASYNC
+		priv->counter_fallback = 1;
+#endif
+		if (priv->counter_fallback)
+			DRV_LOG(INFO, "Use fall-back DV counter management\n");
+		/* Check for LRO support. */
+		if (config.dest_tir && mprq && config.hca_attr.lro_cap) {
+			/* TBD check tunnel lro caps. */
+			config.lro.supported = config.hca_attr.lro_cap;
+			DRV_LOG(DEBUG, "Device supports LRO");
+			/*
+			 * If LRO timeout is not configured by application,
+			 * use the minimal supported value.
+			 */
+			if (!config.lro.timeout)
+				config.lro.timeout =
+				config.hca_attr.lro_timer_supported_periods[0];
+			DRV_LOG(DEBUG, "LRO session timeout set to %d usec",
+				config.lro.timeout);
+			config.mprq.enabled = 1;
+			DRV_LOG(DEBUG, "Enable MPRQ for LRO use");
+		}
+	}
 	if (config.mprq.enabled && mprq) {
 		if (config.mprq.stride_num_n > mprq_max_stride_num_n ||
 		    config.mprq.stride_num_n < mprq_min_stride_num_n) {
@@ -1790,23 +1822,6 @@ struct mlx5_dev_spawn_data {
 	 * Verbs context returned by ibv_open_device().
 	 */
 	mlx5_link_update(eth_dev, 0);
-#ifdef HAVE_IBV_DEVX_OBJ
-	if (config.devx) {
-		priv->counter_fallback = 0;
-		err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config.hca_attr);
-		if (err) {
-			err = -err;
-			goto error;
-		}
-		if (!config.hca_attr.flow_counters_dump)
-			priv->counter_fallback = 1;
-#ifndef HAVE_IBV_DEVX_ASYNC
-		priv->counter_fallback = 1;
-#endif
-		if (priv->counter_fallback)
-			DRV_LOG(INFO, "Use fall-back DV counter management\n");
-	}
-#endif
 #ifdef HAVE_MLX5DV_DR_ESWITCH
 	if (!(config.hca_attr.eswitch_manager && config.dv_flow_en &&
 	      (switch_info->representor || switch_info->master)))
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index cf50e6e..e627909 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -389,6 +389,7 @@ struct ethtool_link_settings {
 	const uint8_t use_app_rss_key =
 		!!dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key;
 	int ret = 0;
+	unsigned int lro_on = mlx5_lro_on(dev);
 
 	if (use_app_rss_key &&
 	    (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len !=
@@ -432,6 +433,12 @@ struct ethtool_link_settings {
 			dev->data->port_id, priv->rxqs_n, rxqs_n);
 		priv->rxqs_n = rxqs_n;
 		/*
+		 * When using LRO, MPRQ is implicitly enabled.
+		 * Adjust threshold value to ensure MPRQ can be enabled.
+		 */
+		if (lro_on && priv->config.mprq.min_rxqs_num > priv->rxqs_n)
+			priv->config.mprq.min_rxqs_num = priv->rxqs_n;
+		/*
 		 * If the requested number of RX queues is not a power of two,
 		 * use the maximum indirection table size for better balancing.
 		 * The result is always rounded to the next power of two.
@@ -453,6 +460,11 @@ struct ethtool_link_settings {
 				j = 0;
 		}
 	}
+	if (lro_on && priv->config.cqe_comp) {
+		/* CQE compressing is not supported for LRO CQEs. */
+		DRV_LOG(WARNING, "Rx CQE compression isn't supported with LRO");
+		priv->config.cqe_comp = 0;
+	}
 	ret = mlx5_proc_priv_init(dev);
 	if (ret)
 		return ret;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e68de50..8567ee5 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -93,6 +93,7 @@
 
 /**
  * Check whether Multi-Packet RQ is enabled for the device.
+ * MPRQ can be enabled explicitly, or implicitly by enabling LRO.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -1431,7 +1432,18 @@ struct mlx5_rxq_ctrl *
 	tmpl->rxq.crc_present = 0;
 	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
 		if (config->hw_fcs_strip) {
-			tmpl->rxq.crc_present = 1;
+			/*
+			 * RQs used for LRO-enabled TIRs should not be
+			 * configured to scatter the FCS.
+			 */
+			if (mlx5_lro_on(dev))
+				DRV_LOG(WARNING,
+					"port %u CRC stripping has been "
+					"disabled but will still be performed "
+					"by hardware, because LRO is enabled",
+					dev->data->port_id);
+			else
+				tmpl->rxq.crc_present = 1;
 		} else {
 			DRV_LOG(WARNING,
 				"port %u CRC stripping has been disabled but will"
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 07/28] net/mlx5: support Tx interface query using new API
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (5 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 06/28] net/mlx5: check conditions to enable LRO Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 08/28] net/mlx5: update Tx queue create for LRO Matan Azrad
                     ` (21 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement function mlx5_devx_cmd_qp_query_tis_td(), to query the
QP TIS Transport Domain value.

Add related structs in mlx5_prm.h.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           |  2 ++
 drivers/net/mlx5/mlx5_devx_cmds.c | 34 ++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 34 ++++++++++++++++++++++++++++++++++
 3 files changed, 70 insertions(+)
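
A minimal caller sketch for illustration; the QP and TIS number are
assumed inputs here, and the real caller is added in the next patch of
the series:

  static void
  tis_td_log(struct ibv_qp *qp, uint32_t tisn)
  {
      uint32_t tis_td = 0;

      if (mlx5_devx_cmd_qp_query_tis_td(qp, tisn, &tis_td))
          DRV_LOG(ERR, "TIS %u transport domain query failed", tisn);
      else
          DRV_LOG(DEBUG, "TIS %u transport domain %u", tisn, tis_td);
  }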

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7041bbc..5ae99dc 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -737,4 +737,6 @@ int mlx5_devx_cmd_query_hca_attr(struct ibv_context *ctx,
 struct mlx5_devx_obj *mlx5_devx_cmd_mkey_create(struct ibv_context *ctx,
 					     struct mlx5_devx_mkey_attr *attr);
 int mlx5_devx_get_out_command_status(void *out);
+int mlx5_devx_cmd_qp_query_tis_td(struct ibv_qp *qp, uint32_t tis_num,
+				  uint32_t *tis_td);
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index 1cba00f..3d07fcf 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -390,3 +390,37 @@ struct mlx5_devx_obj *
 	rc = (rc > 0) ? -rc : rc;
 	return rc;
 }
+
+/**
+ * Query TIS transport domain from QP verbs object using DevX API.
+ *
+ * @param[in] qp
+ *   Pointer to verbs QP returned by ibv_create_qp().
+ * @param[in] tis_num
+ *   TIS number of TIS to query.
+ * @param[out] tis_td
+ *   Pointer to TIS transport domain variable, to be set by the routine.
+ *
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+int
+mlx5_devx_cmd_qp_query_tis_td(struct ibv_qp *qp, uint32_t tis_num,
+			      uint32_t *tis_td)
+{
+	uint32_t in[MLX5_ST_SZ_DW(query_tis_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(query_tis_out)] = {0};
+	int rc;
+	void *tis_ctx;
+
+	MLX5_SET(query_tis_in, in, opcode, MLX5_CMD_OP_QUERY_TIS);
+	MLX5_SET(query_tis_in, in, tisn, tis_num);
+	rc = mlx5_glue->devx_qp_query(qp, in, sizeof(in), out, sizeof(out));
+	if (rc) {
+		DRV_LOG(ERR, "Failed to query QP using DevX");
+		return -rc;
+	}
+	tis_ctx = MLX5_ADDR_OF(query_tis_out, out, tis_context);
+	*tis_td = MLX5_GET(tisc, tis_ctx, transport_domain);
+	return 0;
+}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 4f20dea..b5de0c3 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -627,6 +627,7 @@ enum {
 	MLX5_CMD_OP_QUERY_HCA_CAP = 0x100,
 	MLX5_CMD_OP_CREATE_MKEY = 0x200,
 	MLX5_CMD_OP_QUERY_NIC_VPORT_CONTEXT = 0x754,
+	MLX5_CMD_OP_QUERY_TIS = 0x915,
 	MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939,
 	MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b,
 };
@@ -1234,6 +1235,39 @@ struct mlx5_ifc_query_nic_vport_context_in_bits {
 	u8 reserved_at_68[0x18];
 };
 
+struct mlx5_ifc_tisc_bits {
+	u8 strict_lag_tx_port_affinity[0x1];
+	u8 reserved_at_1[0x3];
+	u8 lag_tx_port_affinity[0x04];
+	u8 reserved_at_8[0x4];
+	u8 prio[0x4];
+	u8 reserved_at_10[0x10];
+	u8 reserved_at_20[0x100];
+	u8 reserved_at_120[0x8];
+	u8 transport_domain[0x18];
+	u8 reserved_at_140[0x8];
+	u8 underlay_qpn[0x18];
+	u8 reserved_at_160[0x3a0];
+};
+
+struct mlx5_ifc_query_tis_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+	struct mlx5_ifc_tisc_bits tis_context;
+};
+
+struct mlx5_ifc_query_tis_in_bits {
+	u8 opcode[0x10];
+	u8 reserved_at_10[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0x8];
+	u8 tisn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 08/28] net/mlx5: update Tx queue create for LRO
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (6 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 07/28] net/mlx5: support Tx interface query using new API Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 09/28] net/mlx5: create advanced RxQ object using new API Matan Azrad
                     ` (20 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Update function mlx5_txq_ibv_new() to query and store the TIS
transport domain value.
It is required later on the Rx side when creating a matching TIR.
Add a field to the mlx5 data structure to store the Transport
Domain ID.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.h     |  1 +
 drivers/net/mlx5/mlx5_txq.c | 26 ++++++++++++++++++++++++++
 2 files changed, 27 insertions(+)
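
For context, the stored number is expected to be consumed on the Rx
side roughly as sketched below; mlx5_devx_tir_attr only arrives in a
later patch of this series, so this fragment is an assumption:

  struct mlx5_devx_tir_attr tir_attr = { 0 };

  /* Reuse the transport domain queried once per port on the Tx side. */
  tir_attr.transport_domain = priv->sh->tdn;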

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5ae99dc..84975ed 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -418,6 +418,7 @@ struct mlx5_ibv_shared {
 	uint32_t max_port; /* Maximal IB device port index. */
 	struct ibv_context *ctx; /* Verbs/DV context. */
 	struct ibv_pd *pd; /* Protection Domain. */
+	uint32_t tdn; /* Transport Domain number. */
 	char ibdev_name[IBV_SYSFS_NAME_MAX]; /* IB device name. */
 	char ibdev_path[IBV_SYSFS_PATH_MAX]; /* IB device path for secondary */
 	struct ibv_device_attr_ex device_attr; /* Device properties. */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index dbad361..fe3b4ec 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -396,6 +396,11 @@ struct mlx5_txq_ibv *
 	const int desc = 1 << txq_data->elts_n;
 	int ret = 0;
 
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/* If using DevX, need additional mask to read tisn value. */
+	if (priv->config.devx && !priv->sh->tdn)
+		qp.comp_mask |= MLX5DV_QP_MASK_RAW_QP_HANDLES;
+#endif
 	assert(txq_data);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_TX_QUEUE;
 	priv->verbs_alloc_ctx.obj = txq_ctrl;
@@ -542,6 +547,27 @@ struct mlx5_txq_ibv *
 	txq_data->wqe_pi = 0;
 	txq_data->wqe_comp = 0;
 	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/*
+	 * If using DevX need to query and store TIS transport domain value.
+	 * This is done once per port.
+	 * Will use this value on Rx, when creating matching TIR.
+	 */
+	if (priv->config.devx && !priv->sh->tdn) {
+		ret = mlx5_devx_cmd_qp_query_tis_td(tmpl.qp, qp.tisn,
+						    &priv->sh->tdn);
+		if (ret) {
+			DRV_LOG(ERR, "Fail to query port %u Tx queue %u QP TIS "
+				"transport domain", dev->data->port_id, idx);
+			rte_errno = EINVAL;
+			goto error;
+		} else {
+			DRV_LOG(DEBUG, "port %u Tx queue %u TIS number %d "
+				"transport domain %d", dev->data->port_id,
+				idx, qp.tisn, priv->sh->tdn);
+		}
+	}
+#endif
 	txq_ibv->qp = tmpl.qp;
 	txq_ibv->cq = tmpl.cq;
 	rte_atomic32_inc(&txq_ibv->refcnt);
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 09/28] net/mlx5: create advanced RxQ object using new API
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (7 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 08/28] net/mlx5: update Tx queue create for LRO Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 10/28] net/mlx5: modify " Matan Azrad
                     ` (19 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement function mlx5_devx_cmd_create_rq() to create RQ object using
DevX API.
Add related structs in mlx5.h and mlx5_prm.h.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           |  50 +++++++++++++++++
 drivers/net/mlx5/mlx5_devx_cmds.c | 102 +++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 110 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 262 insertions(+)
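
A hypothetical caller sketch; 'cqn', 'pdn' and 'socket' are assumed
values here, and the real Rx-queue creation path using this command is
added later in the series:

  struct mlx5_devx_create_rq_attr rq_attr = { 0 };
  struct mlx5_devx_obj *rq;

  rq_attr.vsd = 1; /* VLAN stripping disabled. */
  rq_attr.cqn = cqn; /* CQ number serving this RQ. */
  rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
  rq_attr.wq_attr.pd = pdn; /* Protection domain number. */
  rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &rq_attr, socket);
  if (!rq)
      DRV_LOG(ERR, "RQ creation failed, rte_errno %d", rte_errno);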

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 84975ed..4f4c4a7 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -260,6 +260,52 @@ struct mlx5_dev_config {
 	struct mlx5_lro_config lro; /* LRO configuration. */
 };
 
+struct mlx5_devx_wq_attr {
+	uint32_t wq_type:4;
+	uint32_t wq_signature:1;
+	uint32_t end_padding_mode:2;
+	uint32_t cd_slave:1;
+	uint32_t hds_skip_first_sge:1;
+	uint32_t log2_hds_buf_size:3;
+	uint32_t page_offset:5;
+	uint32_t lwm:16;
+	uint32_t pd:24;
+	uint32_t uar_page:24;
+	uint64_t dbr_addr;
+	uint32_t hw_counter;
+	uint32_t sw_counter;
+	uint32_t log_wq_stride:4;
+	uint32_t log_wq_pg_sz:5;
+	uint32_t log_wq_sz:5;
+	uint32_t dbr_umem_valid:1;
+	uint32_t wq_umem_valid:1;
+	uint32_t log_hairpin_num_packets:5;
+	uint32_t log_hairpin_data_sz:5;
+	uint32_t single_wqe_log_num_of_strides:4;
+	uint32_t two_byte_shift_en:1;
+	uint32_t single_stride_log_num_of_bytes:3;
+	uint32_t dbr_umem_id;
+	uint32_t wq_umem_id;
+	uint64_t wq_umem_offset;
+};
+
+/* Create RQ attributes structure, used by create RQ operation. */
+struct mlx5_devx_create_rq_attr {
+	uint32_t rlky:1;
+	uint32_t delay_drop_en:1;
+	uint32_t scatter_fcs:1;
+	uint32_t vsd:1;
+	uint32_t mem_rq_type:4;
+	uint32_t state:4;
+	uint32_t flush_in_error_en:1;
+	uint32_t hairpin:1;
+	uint32_t user_index:24;
+	uint32_t cqn:24;
+	uint32_t counter_set_id:8;
+	uint32_t rmpn:24;
+	struct mlx5_devx_wq_attr wq_attr;
+};
+
 /**
  * Type of object being allocated.
  */
@@ -740,4 +786,8 @@ struct mlx5_devx_obj *mlx5_devx_cmd_mkey_create(struct ibv_context *ctx,
 int mlx5_devx_get_out_command_status(void *out);
 int mlx5_devx_cmd_qp_query_tis_td(struct ibv_qp *qp, uint32_t tis_num,
 				  uint32_t *tis_td);
+struct mlx5_devx_obj *mlx5_devx_cmd_create_rq(struct ibv_context *ctx,
+				struct mlx5_devx_create_rq_attr *rq_attr,
+				int socket);
+
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index 3d07fcf..f68c94b 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -424,3 +424,105 @@ struct mlx5_devx_obj *
 	*tis_td = MLX5_GET(tisc, tis_ctx, transport_domain);
 	return 0;
 }
+
+/**
+ * Fill WQ data for DevX API command.
+ * Utility function for use when creating DevX objects containing a WQ.
+ *
+ * @param[in] wq_ctx
+ *   Pointer to WQ context to fill with data.
+ * @param [in] wq_attr
+ *   Pointer to WQ attributes structure to fill in WQ context.
+ */
+static void
+devx_cmd_fill_wq_data(void *wq_ctx, struct mlx5_devx_wq_attr *wq_attr)
+{
+	MLX5_SET(wq, wq_ctx, wq_type, wq_attr->wq_type);
+	MLX5_SET(wq, wq_ctx, wq_signature, wq_attr->wq_signature);
+	MLX5_SET(wq, wq_ctx, end_padding_mode, wq_attr->end_padding_mode);
+	MLX5_SET(wq, wq_ctx, cd_slave, wq_attr->cd_slave);
+	MLX5_SET(wq, wq_ctx, hds_skip_first_sge, wq_attr->hds_skip_first_sge);
+	MLX5_SET(wq, wq_ctx, log2_hds_buf_size, wq_attr->log2_hds_buf_size);
+	MLX5_SET(wq, wq_ctx, page_offset, wq_attr->page_offset);
+	MLX5_SET(wq, wq_ctx, lwm, wq_attr->lwm);
+	MLX5_SET(wq, wq_ctx, pd, wq_attr->pd);
+	MLX5_SET(wq, wq_ctx, uar_page, wq_attr->uar_page);
+	MLX5_SET64(wq, wq_ctx, dbr_addr, wq_attr->dbr_addr);
+	MLX5_SET(wq, wq_ctx, hw_counter, wq_attr->hw_counter);
+	MLX5_SET(wq, wq_ctx, sw_counter, wq_attr->sw_counter);
+	MLX5_SET(wq, wq_ctx, log_wq_stride, wq_attr->log_wq_stride);
+	MLX5_SET(wq, wq_ctx, log_wq_pg_sz, wq_attr->log_wq_pg_sz);
+	MLX5_SET(wq, wq_ctx, log_wq_sz, wq_attr->log_wq_sz);
+	MLX5_SET(wq, wq_ctx, dbr_umem_valid, wq_attr->dbr_umem_valid);
+	MLX5_SET(wq, wq_ctx, wq_umem_valid, wq_attr->wq_umem_valid);
+	MLX5_SET(wq, wq_ctx, log_hairpin_num_packets,
+		 wq_attr->log_hairpin_num_packets);
+	MLX5_SET(wq, wq_ctx, log_hairpin_data_sz, wq_attr->log_hairpin_data_sz);
+	MLX5_SET(wq, wq_ctx, single_wqe_log_num_of_strides,
+		 wq_attr->single_wqe_log_num_of_strides);
+	MLX5_SET(wq, wq_ctx, two_byte_shift_en, wq_attr->two_byte_shift_en);
+	MLX5_SET(wq, wq_ctx, single_stride_log_num_of_bytes,
+		 wq_attr->single_stride_log_num_of_bytes);
+	MLX5_SET(wq, wq_ctx, dbr_umem_id, wq_attr->dbr_umem_id);
+	MLX5_SET(wq, wq_ctx, wq_umem_id, wq_attr->wq_umem_id);
+	MLX5_SET64(wq, wq_ctx, wq_umem_offset, wq_attr->wq_umem_offset);
+}
+
+/**
+ * Create RQ using DevX API.
+ *
+ * @param[in] ctx
+ *   ibv_context returned from mlx5dv_open_device.
+ * @param [in] rq_attr
+ *   Pointer to create RQ attributes structure.
+ * @param [in] socket
+ *   CPU socket ID for allocations.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_rq(struct ibv_context *ctx,
+			struct mlx5_devx_create_rq_attr *rq_attr,
+			int socket)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_rq_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(create_rq_out)] = {0};
+	void *rq_ctx, *wq_ctx;
+	struct mlx5_devx_wq_attr *wq_attr;
+	struct mlx5_devx_obj *rq = NULL;
+
+	rq = rte_calloc_socket(__func__, 1, sizeof(*rq), 0, socket);
+	if (!rq) {
+		DRV_LOG(ERR, "Failed to allocate RQ data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(create_rq_in, in, opcode, MLX5_CMD_OP_CREATE_RQ);
+	rq_ctx = MLX5_ADDR_OF(create_rq_in, in, ctx);
+	MLX5_SET(rqc, rq_ctx, rlky, rq_attr->rlky);
+	MLX5_SET(rqc, rq_ctx, delay_drop_en, rq_attr->delay_drop_en);
+	MLX5_SET(rqc, rq_ctx, scatter_fcs, rq_attr->scatter_fcs);
+	MLX5_SET(rqc, rq_ctx, vsd, rq_attr->vsd);
+	MLX5_SET(rqc, rq_ctx, mem_rq_type, rq_attr->mem_rq_type);
+	MLX5_SET(rqc, rq_ctx, state, rq_attr->state);
+	MLX5_SET(rqc, rq_ctx, flush_in_error_en, rq_attr->flush_in_error_en);
+	MLX5_SET(rqc, rq_ctx, hairpin, rq_attr->hairpin);
+	MLX5_SET(rqc, rq_ctx, user_index, rq_attr->user_index);
+	MLX5_SET(rqc, rq_ctx, cqn, rq_attr->cqn);
+	MLX5_SET(rqc, rq_ctx, counter_set_id, rq_attr->counter_set_id);
+	MLX5_SET(rqc, rq_ctx, rmpn, rq_attr->rmpn);
+	wq_ctx = MLX5_ADDR_OF(rqc, rq_ctx, wq);
+	wq_attr = &rq_attr->wq_attr;
+	devx_cmd_fill_wq_data(wq_ctx, wq_attr);
+	rq->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in),
+						  out, sizeof(out));
+	if (!rq->obj) {
+		DRV_LOG(ERR, "Failed to create RQ using DevX");
+		rte_errno = errno;
+		rte_free(rq);
+		return NULL;
+	}
+	rq->id = MLX5_GET(create_rq_out, out, rqn);
+	return rq;
+}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index b5de0c3..fbf00a0 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -627,6 +627,7 @@ enum {
 	MLX5_CMD_OP_QUERY_HCA_CAP = 0x100,
 	MLX5_CMD_OP_CREATE_MKEY = 0x200,
 	MLX5_CMD_OP_QUERY_NIC_VPORT_CONTEXT = 0x754,
+	MLX5_CMD_OP_CREATE_RQ = 0x908,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
 	MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939,
 	MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b,
@@ -1268,6 +1269,115 @@ struct mlx5_ifc_query_tis_in_bits {
 	u8 reserved_at_60[0x20];
 };
 
+enum {
+	MLX5_WQ_TYPE_LINKED_LIST                = 0x0,
+	MLX5_WQ_TYPE_CYCLIC                     = 0x1,
+	MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ    = 0x2,
+	MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ         = 0x3,
+};
+
+enum {
+	MLX5_WQ_END_PAD_MODE_NONE  = 0x0,
+	MLX5_WQ_END_PAD_MODE_ALIGN = 0x1,
+};
+
+struct mlx5_ifc_wq_bits {
+	u8 wq_type[0x4];
+	u8 wq_signature[0x1];
+	u8 end_padding_mode[0x2];
+	u8 cd_slave[0x1];
+	u8 reserved_at_8[0x18];
+	u8 hds_skip_first_sge[0x1];
+	u8 log2_hds_buf_size[0x3];
+	u8 reserved_at_24[0x7];
+	u8 page_offset[0x5];
+	u8 lwm[0x10];
+	u8 reserved_at_40[0x8];
+	u8 pd[0x18];
+	u8 reserved_at_60[0x8];
+	u8 uar_page[0x18];
+	u8 dbr_addr[0x40];
+	u8 hw_counter[0x20];
+	u8 sw_counter[0x20];
+	u8 reserved_at_100[0xc];
+	u8 log_wq_stride[0x4];
+	u8 reserved_at_110[0x3];
+	u8 log_wq_pg_sz[0x5];
+	u8 reserved_at_118[0x3];
+	u8 log_wq_sz[0x5];
+	u8 dbr_umem_valid[0x1];
+	u8 wq_umem_valid[0x1];
+	u8 reserved_at_122[0x1];
+	u8 log_hairpin_num_packets[0x5];
+	u8 reserved_at_128[0x3];
+	u8 log_hairpin_data_sz[0x5];
+	u8 reserved_at_130[0x4];
+	u8 single_wqe_log_num_of_strides[0x4];
+	u8 two_byte_shift_en[0x1];
+	u8 reserved_at_139[0x4];
+	u8 single_stride_log_num_of_bytes[0x3];
+	u8 dbr_umem_id[0x20];
+	u8 wq_umem_id[0x20];
+	u8 wq_umem_offset[0x40];
+	u8 reserved_at_1c0[0x440];
+};
+
+enum {
+	MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE  = 0x0,
+	MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_RMP     = 0x1,
+};
+
+enum {
+	MLX5_RQC_STATE_RST  = 0x0,
+	MLX5_RQC_STATE_RDY  = 0x1,
+	MLX5_RQC_STATE_ERR  = 0x3,
+};
+
+struct mlx5_ifc_rqc_bits {
+	u8 rlky[0x1];
+	u8 delay_drop_en[0x1];
+	u8 scatter_fcs[0x1];
+	u8 vsd[0x1];
+	u8 mem_rq_type[0x4];
+	u8 state[0x4];
+	u8 reserved_at_c[0x1];
+	u8 flush_in_error_en[0x1];
+	u8 hairpin[0x1];
+	u8 reserved_at_f[0x11];
+	u8 reserved_at_20[0x8];
+	u8 user_index[0x18];
+	u8 reserved_at_40[0x8];
+	u8 cqn[0x18];
+	u8 counter_set_id[0x8];
+	u8 reserved_at_68[0x18];
+	u8 reserved_at_80[0x8];
+	u8 rmpn[0x18];
+	u8 reserved_at_a0[0x8];
+	u8 hairpin_peer_sq[0x18];
+	u8 reserved_at_c0[0x10];
+	u8 hairpin_peer_vhca[0x10];
+	u8 reserved_at_e0[0xa0];
+	struct mlx5_ifc_wq_bits wq; /* Not used in LRO RQ. */
+};
+
+struct mlx5_ifc_create_rq_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x8];
+	u8 rqn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_create_rq_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rqc_bits ctx;
+};
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 10/28] net/mlx5: modify advanced RxQ object using new API
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (8 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 09/28] net/mlx5: create advanced RxQ object using new API Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 11/28] net/mlx5: create advanced Rx " Matan Azrad
                     ` (18 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement function mlx5_devx_cmd_modify_rq() to modify RQ.
Add related structs in mlx5.h and mlx5_prm.h.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           | 16 +++++++++++++
 drivers/net/mlx5/mlx5_devx_cmds.c | 50 +++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 29 +++++++++++++++++++++++
 3 files changed, 95 insertions(+)
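
For illustration, the typical first modification after create drives
the RQ from RESET to READY with the state values defined in the
previous patch; 'rq' is an assumed mlx5_devx_obj pointer:

  struct mlx5_devx_modify_rq_attr rq_attr = { 0 };

  rq_attr.rq_state = MLX5_RQC_STATE_RST; /* Current state. */
  rq_attr.state = MLX5_RQC_STATE_RDY; /* Requested state. */
  if (mlx5_devx_cmd_modify_rq(rq, &rq_attr))
      DRV_LOG(ERR, "Failed to switch RQ to ready state");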

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 4f4c4a7..7a837b6 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -306,6 +306,20 @@ struct mlx5_devx_create_rq_attr {
 	struct mlx5_devx_wq_attr wq_attr;
 };
 
+/* Modify RQ attributes structure, used by modify RQ operation. */
+struct mlx5_devx_modify_rq_attr {
+	uint32_t rqn:24;
+	uint32_t rq_state:4; /* Current RQ state. */
+	uint32_t state:4; /* Required RQ state. */
+	uint32_t scatter_fcs:1;
+	uint32_t vsd:1;
+	uint32_t counter_set_id:8;
+	uint32_t hairpin_peer_sq:24;
+	uint32_t hairpin_peer_vhca:16;
+	uint64_t modify_bitmask;
+	uint32_t lwm:16; /* Contained WQ lwm. */
+};
+
 /**
  * Type of object being allocated.
  */
@@ -789,5 +803,7 @@ int mlx5_devx_cmd_qp_query_tis_td(struct ibv_qp *qp, uint32_t tis_num,
 struct mlx5_devx_obj *mlx5_devx_cmd_create_rq(struct ibv_context *ctx,
 				struct mlx5_devx_create_rq_attr *rq_attr,
 				int socket);
+int mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
+			    struct mlx5_devx_modify_rq_attr *rq_attr);
 
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index f68c94b..e8953bb 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -526,3 +526,53 @@ struct mlx5_devx_obj *
 	rq->id = MLX5_GET(create_rq_out, out, rqn);
 	return rq;
 }
+
+/**
+ * Modify RQ using DevX API.
+ *
+ * @param[in] rq
+ *   Pointer to RQ object structure.
+ * @param [in] rq_attr
+ *   Pointer to modify RQ attributes structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
+			struct mlx5_devx_modify_rq_attr *rq_attr)
+{
+	uint32_t in[MLX5_ST_SZ_DW(modify_rq_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(modify_rq_out)] = {0};
+	void *rq_ctx, *wq_ctx;
+	int ret;
+
+	MLX5_SET(modify_rq_in, in, opcode, MLX5_CMD_OP_MODIFY_RQ);
+	MLX5_SET(modify_rq_in, in, rq_state, rq_attr->rq_state);
+	MLX5_SET(modify_rq_in, in, rqn, rq->id);
+	MLX5_SET64(modify_rq_in, in, modify_bitmask, rq_attr->modify_bitmask);
+	rq_ctx = MLX5_ADDR_OF(modify_rq_in, in, ctx);
+	MLX5_SET(rqc, rq_ctx, state, rq_attr->state);
+	if (rq_attr->modify_bitmask &
+			MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_SCATTER_FCS)
+		MLX5_SET(rqc, rq_ctx, scatter_fcs, rq_attr->scatter_fcs);
+	if (rq_attr->modify_bitmask & MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD)
+		MLX5_SET(rqc, rq_ctx, vsd, rq_attr->vsd);
+	if (rq_attr->modify_bitmask &
+			MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_RQ_COUNTER_SET_ID)
+		MLX5_SET(rqc, rq_ctx, counter_set_id, rq_attr->counter_set_id);
+	MLX5_SET(rqc, rq_ctx, hairpin_peer_sq, rq_attr->hairpin_peer_sq);
+	MLX5_SET(rqc, rq_ctx, hairpin_peer_vhca, rq_attr->hairpin_peer_vhca);
+	if (rq_attr->modify_bitmask & MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_WQ_LWM) {
+		wq_ctx = MLX5_ADDR_OF(rqc, rq_ctx, wq);
+		MLX5_SET(wq, wq_ctx, lwm, rq_attr->lwm);
+	}
+	ret = mlx5_glue->devx_obj_modify(rq->obj, in, sizeof(in),
+					 out, sizeof(out));
+	if (ret) {
+		DRV_LOG(ERR, "Failed to modify RQ using DevX");
+		rte_errno = errno;
+		return -errno;
+	}
+	return ret;
+}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index fbf00a0..7ec709b 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -628,6 +628,7 @@ enum {
 	MLX5_CMD_OP_CREATE_MKEY = 0x200,
 	MLX5_CMD_OP_QUERY_NIC_VPORT_CONTEXT = 0x754,
 	MLX5_CMD_OP_CREATE_RQ = 0x908,
+	MLX5_CMD_OP_MODIFY_RQ = 0x909,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
 	MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939,
 	MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b,
@@ -1378,6 +1379,34 @@ struct mlx5_ifc_create_rq_in_bits {
 	struct mlx5_ifc_rqc_bits ctx;
 };
 
+struct mlx5_ifc_modify_rq_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+};
+
+enum {
+	MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_WQ_LWM = 1ULL << 0,
+	MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD = 1ULL << 1,
+	MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_SCATTER_FCS = 1ULL << 2,
+	MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_RQ_COUNTER_SET_ID = 1ULL << 3,
+};
+
+struct mlx5_ifc_modify_rq_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 rq_state[0x4];
+	u8 reserved_at_44[0x4];
+	u8 rqn[0x18];
+	u8 reserved_at_60[0x20];
+	u8 modify_bitmask[0x40];
+	u8 reserved_at_c0[0x40];
+	struct mlx5_ifc_rqc_bits ctx;
+};
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 11/28] net/mlx5: create advanced Rx object using new API
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (9 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 10/28] net/mlx5: modify " Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 12/28] net/mlx5: create advanced RxQ table " Matan Azrad
                     ` (17 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement function mlx5_devx_cmd_create_tir() to create TIR
object using DevX API.
Add related structs in mlx5.h and mlx5_prm.h.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           | 26 +++++++++++++
 drivers/net/mlx5/mlx5_devx_cmds.c | 72 ++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 81 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 179 insertions(+)
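
A hypothetical sketch of an LRO-enabled TIR; 'rqt' and 'lro_enabled'
are assumed here (the RQT command arrives in the next patch), and the
LRO fields come from the attributes added in this patch:

  struct mlx5_devx_tir_attr tir_attr = { 0 };
  struct mlx5_devx_obj *tir;

  tir_attr.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
  tir_attr.indirect_table = rqt->id; /* RQT object number. */
  tir_attr.transport_domain = priv->sh->tdn; /* Queried on Tx side. */
  if (lro_enabled) {
      tir_attr.lro_enable_mask = MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
                                 MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
      tir_attr.lro_timeout_period_usecs = priv->config.lro.timeout;
  }
  tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
  if (!tir)
      DRV_LOG(ERR, "TIR creation failed, rte_errno %d", rte_errno);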

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7a837b6..422a70f 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -320,6 +320,30 @@ struct mlx5_devx_modify_rq_attr {
 	uint32_t lwm:16; /* Contained WQ lwm. */
 };
 
+struct mlx5_rx_hash_field_select {
+	uint32_t l3_prot_type:1;
+	uint32_t l4_prot_type:1;
+	uint32_t selected_fields:30;
+};
+
+/* TIR attributes structure, used by TIR operations. */
+struct mlx5_devx_tir_attr {
+	uint32_t disp_type:4;
+	uint32_t lro_timeout_period_usecs:16;
+	uint32_t lro_enable_mask:4;
+	uint32_t lro_max_msg_sz:8;
+	uint32_t inline_rqn:24;
+	uint32_t rx_hash_symmetric:1;
+	uint32_t tunneled_offload_en:1;
+	uint32_t indirect_table:24;
+	uint32_t rx_hash_fn:4;
+	uint32_t self_lb_block:2;
+	uint32_t transport_domain:24;
+	uint32_t rx_hash_toeplitz_key[10];
+	struct mlx5_rx_hash_field_select rx_hash_field_selector_outer;
+	struct mlx5_rx_hash_field_select rx_hash_field_selector_inner;
+};
+
 /**
  * Type of object being allocated.
  */
@@ -805,5 +829,7 @@ struct mlx5_devx_obj *mlx5_devx_cmd_create_rq(struct ibv_context *ctx,
 				int socket);
 int mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 			    struct mlx5_devx_modify_rq_attr *rq_attr);
+struct mlx5_devx_obj *mlx5_devx_cmd_create_tir(struct ibv_context *ctx,
+					struct mlx5_devx_tir_attr *tir_attr);
 
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index e8953bb..5faa2a0 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -576,3 +576,75 @@ struct mlx5_devx_obj *
 	}
 	return ret;
 }
+
+/**
+ * Create TIR using DevX API.
+ *
+ * @param[in] ctx
+ *   ibv_context returned from mlx5dv_open_device.
+ * @param [in] tir_attr
+ *   Pointer to TIR attributes structure.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_tir(struct ibv_context *ctx,
+			 struct mlx5_devx_tir_attr *tir_attr)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_tir_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(create_tir_out)] = {0};
+	void *tir_ctx, *outer, *inner;
+	struct mlx5_devx_obj *tir = NULL;
+	int i;
+
+	tir = rte_calloc(__func__, 1, sizeof(*tir), 0);
+	if (!tir) {
+		DRV_LOG(ERR, "Failed to allocate TIR data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(create_tir_in, in, opcode, MLX5_CMD_OP_CREATE_TIR);
+	tir_ctx = MLX5_ADDR_OF(create_tir_in, in, ctx);
+	MLX5_SET(tirc, tir_ctx, disp_type, tir_attr->disp_type);
+	MLX5_SET(tirc, tir_ctx, lro_timeout_period_usecs,
+		 tir_attr->lro_timeout_period_usecs);
+	MLX5_SET(tirc, tir_ctx, lro_enable_mask, tir_attr->lro_enable_mask);
+	MLX5_SET(tirc, tir_ctx, lro_max_msg_sz, tir_attr->lro_max_msg_sz);
+	MLX5_SET(tirc, tir_ctx, inline_rqn, tir_attr->inline_rqn);
+	MLX5_SET(tirc, tir_ctx, rx_hash_symmetric, tir_attr->rx_hash_symmetric);
+	MLX5_SET(tirc, tir_ctx, tunneled_offload_en,
+		 tir_attr->tunneled_offload_en);
+	MLX5_SET(tirc, tir_ctx, indirect_table, tir_attr->indirect_table);
+	MLX5_SET(tirc, tir_ctx, rx_hash_fn, tir_attr->rx_hash_fn);
+	MLX5_SET(tirc, tir_ctx, self_lb_block, tir_attr->self_lb_block);
+	MLX5_SET(tirc, tir_ctx, transport_domain, tir_attr->transport_domain);
+	for (i = 0; i < 10; i++) {
+		MLX5_SET(tirc, tir_ctx, rx_hash_toeplitz_key[i],
+			 tir_attr->rx_hash_toeplitz_key[i]);
+	}
+	outer = MLX5_ADDR_OF(tirc, tir_ctx, rx_hash_field_selector_outer);
+	MLX5_SET(rx_hash_field_select, outer, l3_prot_type,
+		 tir_attr->rx_hash_field_selector_outer.l3_prot_type);
+	MLX5_SET(rx_hash_field_select, outer, l4_prot_type,
+		 tir_attr->rx_hash_field_selector_outer.l4_prot_type);
+	MLX5_SET(rx_hash_field_select, outer, selected_fields,
+		 tir_attr->rx_hash_field_selector_outer.selected_fields);
+	inner = MLX5_ADDR_OF(tirc, tir_ctx, rx_hash_field_selector_inner);
+	MLX5_SET(rx_hash_field_select, inner, l3_prot_type,
+		 tir_attr->rx_hash_field_selector_inner.l3_prot_type);
+	MLX5_SET(rx_hash_field_select, inner, l4_prot_type,
+		 tir_attr->rx_hash_field_selector_inner.l4_prot_type);
+	MLX5_SET(rx_hash_field_select, inner, selected_fields,
+		 tir_attr->rx_hash_field_selector_inner.selected_fields);
+	tir->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in),
+						   out, sizeof(out));
+	if (!tir->obj) {
+		DRV_LOG(ERR, "Failed to create TIR using DevX");
+		rte_errno = errno;
+		rte_free(tir);
+		return NULL;
+	}
+	tir->id = MLX5_GET(create_tir_out, out, tirn);
+	return tir;
+}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 7ec709b..970dee0 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -627,6 +627,7 @@ enum {
 	MLX5_CMD_OP_QUERY_HCA_CAP = 0x100,
 	MLX5_CMD_OP_CREATE_MKEY = 0x200,
 	MLX5_CMD_OP_QUERY_NIC_VPORT_CONTEXT = 0x754,
+	MLX5_CMD_OP_CREATE_TIR = 0x900,
 	MLX5_CMD_OP_CREATE_RQ = 0x908,
 	MLX5_CMD_OP_MODIFY_RQ = 0x909,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
@@ -1407,6 +1408,86 @@ struct mlx5_ifc_modify_rq_in_bits {
 	struct mlx5_ifc_rqc_bits ctx;
 };
 
+enum {
+	MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_SRC_IP     = 0x0,
+	MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_DST_IP     = 0x1,
+	MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_SPORT   = 0x2,
+	MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT   = 0x3,
+	MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_IPSEC_SPI  = 0x4,
+};
+
+struct mlx5_ifc_rx_hash_field_select_bits {
+	u8 l3_prot_type[0x1];
+	u8 l4_prot_type[0x1];
+	u8 selected_fields[0x1e];
+};
+
+enum {
+	MLX5_TIRC_DISP_TYPE_DIRECT    = 0x0,
+	MLX5_TIRC_DISP_TYPE_INDIRECT  = 0x1,
+};
+
+enum {
+	MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO  = 0x1,
+	MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO  = 0x2,
+};
+
+enum {
+	MLX5_RX_HASH_FN_NONE           = 0x0,
+	MLX5_RX_HASH_FN_INVERTED_XOR8  = 0x1,
+	MLX5_RX_HASH_FN_TOEPLITZ       = 0x2,
+};
+
+enum {
+	MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST    = 0x1,
+	MLX5_TIRC_SELF_LB_BLOCK_BLOCK_MULTICAST  = 0x2,
+};
+
+struct mlx5_ifc_tirc_bits {
+	u8 reserved_at_0[0x20];
+	u8 disp_type[0x4];
+	u8 reserved_at_24[0x1c];
+	u8 reserved_at_40[0x40];
+	u8 reserved_at_80[0x4];
+	u8 lro_timeout_period_usecs[0x10];
+	u8 lro_enable_mask[0x4];
+	u8 lro_max_msg_sz[0x8];
+	u8 reserved_at_a0[0x40];
+	u8 reserved_at_e0[0x8];
+	u8 inline_rqn[0x18];
+	u8 rx_hash_symmetric[0x1];
+	u8 reserved_at_101[0x1];
+	u8 tunneled_offload_en[0x1];
+	u8 reserved_at_103[0x5];
+	u8 indirect_table[0x18];
+	u8 rx_hash_fn[0x4];
+	u8 reserved_at_124[0x2];
+	u8 self_lb_block[0x2];
+	u8 transport_domain[0x18];
+	u8 rx_hash_toeplitz_key[10][0x20];
+	struct mlx5_ifc_rx_hash_field_select_bits rx_hash_field_selector_outer;
+	struct mlx5_ifc_rx_hash_field_select_bits rx_hash_field_selector_inner;
+	u8 reserved_at_2c0[0x4c0];
+};
+
+struct mlx5_ifc_create_tir_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x8];
+	u8 tirn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+struct mlx5_ifc_create_tir_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_tirc_bits ctx;
+};
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 12/28] net/mlx5: create advanced RxQ table using new API
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (10 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 11/28] net/mlx5: create advanced Rx " Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 13/28] net/mlx5: allocate door-bells " Matan Azrad
                     ` (16 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement function mlx5_devx_cmd_create_rqt() to create RQT
object using DevX API.
Add related structs in mlx5.h and mlx5_prm.h.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.h           |  9 +++++++
 drivers/net/mlx5/mlx5_devx_cmds.c | 54 +++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_prm.h       | 40 +++++++++++++++++++++++++++++
 3 files changed, 103 insertions(+)
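
A hypothetical caller sketch; 'rqs' and 'rqs_n' are assumed inputs
holding the DevX RQ objects to spread traffic over, and rqt_create() is
illustrative only:

  static struct mlx5_devx_obj *
  rqt_create(struct mlx5_priv *priv, struct mlx5_devx_obj **rqs,
             uint32_t rqs_n)
  {
      struct mlx5_devx_rqt_attr *rqt_attr;
      struct mlx5_devx_obj *rqt;
      uint32_t i;

      /* rq_list[] is a flexible array, sized by the RQ count. */
      rqt_attr = rte_calloc(__func__, 1, sizeof(*rqt_attr) +
                            rqs_n * sizeof(uint32_t), 0);
      if (!rqt_attr)
          return NULL;
      rqt_attr->rqt_max_size = rqs_n;
      rqt_attr->rqt_actual_size = rqs_n;
      for (i = 0; i != rqs_n; ++i)
          rqt_attr->rq_list[i] = rqs[i]->id; /* DevX RQ numbers. */
      rqt = mlx5_devx_cmd_create_rqt(priv->sh->ctx, rqt_attr);
      rte_free(rqt_attr);
      return rqt;
  }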

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 422a70f..8aa5240 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -344,6 +344,13 @@ struct mlx5_devx_tir_attr {
 	struct mlx5_rx_hash_field_select rx_hash_field_selector_inner;
 };
 
+/* RQT attributes structure, used by RQT operations. */
+struct mlx5_devx_rqt_attr {
+	uint32_t rqt_max_size:16;
+	uint32_t rqt_actual_size:16;
+	uint32_t rq_list[];
+};
+
 /**
  * Type of object being allocated.
  */
@@ -831,5 +838,7 @@ int mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 			    struct mlx5_devx_modify_rq_attr *rq_attr);
 struct mlx5_devx_obj *mlx5_devx_cmd_create_tir(struct ibv_context *ctx,
 					struct mlx5_devx_tir_attr *tir_attr);
+struct mlx5_devx_obj *mlx5_devx_cmd_create_rqt(struct ibv_context *ctx,
+					struct mlx5_devx_rqt_attr *rqt_attr);
 
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_devx_cmds.c b/drivers/net/mlx5/mlx5_devx_cmds.c
index 5faa2a0..acfe1de 100644
--- a/drivers/net/mlx5/mlx5_devx_cmds.c
+++ b/drivers/net/mlx5/mlx5_devx_cmds.c
@@ -648,3 +648,57 @@ struct mlx5_devx_obj *
 	tir->id = MLX5_GET(create_tir_out, out, tirn);
 	return tir;
 }
+
+/**
+ * Create RQT using DevX API.
+ *
+ * @param[in] ctx
+ *   ibv_context returned from mlx5dv_open_device.
+ * @param [in] rqt_attr
+ *   Pointer to RQT attributes structure.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_rqt(struct ibv_context *ctx,
+			 struct mlx5_devx_rqt_attr *rqt_attr)
+{
+	uint32_t *in = NULL;
+	uint32_t inlen = MLX5_ST_SZ_BYTES(create_rqt_in) +
+			 rqt_attr->rqt_actual_size * sizeof(uint32_t);
+	uint32_t out[MLX5_ST_SZ_DW(create_rqt_out)] = {0};
+	void *rqt_ctx;
+	struct mlx5_devx_obj *rqt = NULL;
+	int i;
+
+	in = rte_calloc(__func__, 1, inlen, 0);
+	if (!in) {
+		DRV_LOG(ERR, "Failed to allocate RQT IN data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	rqt = rte_calloc(__func__, 1, sizeof(*rqt), 0);
+	if (!rqt) {
+		DRV_LOG(ERR, "Failed to allocate RQT data");
+		rte_errno = ENOMEM;
+		rte_free(in);
+		return NULL;
+	}
+	MLX5_SET(create_rqt_in, in, opcode, MLX5_CMD_OP_CREATE_RQT);
+	rqt_ctx = MLX5_ADDR_OF(create_rqt_in, in, rqt_context);
+	MLX5_SET(rqtc, rqt_ctx, rqt_max_size, rqt_attr->rqt_max_size);
+	MLX5_SET(rqtc, rqt_ctx, rqt_actual_size, rqt_attr->rqt_actual_size);
+	for (i = 0; i < rqt_attr->rqt_actual_size; i++)
+		MLX5_SET(rqtc, rqt_ctx, rq_num[i], rqt_attr->rq_list[i]);
+	rqt->obj = mlx5_glue->devx_obj_create(ctx, in, inlen, out, sizeof(out));
+	rte_free(in);
+	if (!rqt->obj) {
+		DRV_LOG(ERR, "Failed to create RQT using DevX");
+		rte_errno = errno;
+		rte_free(rqt);
+		return NULL;
+	}
+	rqt->id = MLX5_GET(create_rqt_out, out, rqtn);
+	return rqt;
+}
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 970dee0..b0e281f 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -631,6 +631,7 @@ enum {
 	MLX5_CMD_OP_CREATE_RQ = 0x908,
 	MLX5_CMD_OP_MODIFY_RQ = 0x909,
 	MLX5_CMD_OP_QUERY_TIS = 0x915,
+	MLX5_CMD_OP_CREATE_RQT = 0x916,
 	MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939,
 	MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b,
 };
@@ -1488,6 +1489,45 @@ struct mlx5_ifc_create_tir_in_bits {
 	struct mlx5_ifc_tirc_bits ctx;
 };
 
+struct mlx5_ifc_rq_num_bits {
+	u8 reserved_at_0[0x8];
+	u8 rq_num[0x18];
+};
+
+struct mlx5_ifc_rqtc_bits {
+	u8 reserved_at_0[0xa0];
+	u8 reserved_at_a0[0x10];
+	u8 rqt_max_size[0x10];
+	u8 reserved_at_c0[0x10];
+	u8 rqt_actual_size[0x10];
+	u8 reserved_at_e0[0x6a0];
+	struct mlx5_ifc_rq_num_bits rq_num[];
+};
+
+struct mlx5_ifc_create_rqt_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x8];
+	u8 rqtn[0x18];
+	u8 reserved_at_60[0x20];
+};
+
+#ifdef PEDANTIC
+#pragma GCC diagnostic ignored "-Wpedantic"
+#endif
+struct mlx5_ifc_create_rqt_in_bits {
+	u8 opcode[0x10];
+	u8 uid[0x10];
+	u8 reserved_at_20[0x10];
+	u8 op_mod[0x10];
+	u8 reserved_at_40[0xc0];
+	struct mlx5_ifc_rqtc_bits rqt_context;
+};
+#ifdef PEDANTIC
+#pragma GCC diagnostic error "-Wpedantic"
+#endif
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 13/28] net/mlx5: allocate door-bells using new API
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (11 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 12/28] net/mlx5: create advanced RxQ table " Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 14/28] net/mlx5: rename RxQ verbs to general RxQ object Matan Azrad
                     ` (15 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

When using DevX API, memory for door-bell records should be allocated
by PMD and registered using DevX API.

This patch implements the utility functions to support it:
- Add struct mlx5_devx_dbr_page, containing door-bells page data.
- Add list of struct mlx5_devx_dbr_page door-bell pages to device
  private data.
- Implement function mlx5_alloc_dbr_page() to allocate page for
  door-bell records, and register it using DevX API.
- Implement function mlx5_get_dbr() to acquire a door-bell record
  from the door-bells page, allocating a new page if needed.
- Implement function mlx5_release_dbr() to release a door-bell
  record that is no longer needed, freeing the containing page if
  it becomes empty.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.c      | 119 +++++++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h      |  21 ++++++++
 drivers/net/mlx5/mlx5_glue.h |   2 +-
 drivers/net/mlx5/mlx5_rxtx.h |   1 +
 4 files changed, 142 insertions(+), 1 deletion(-)
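
A sketch of the intended acquire/use/release cycle, under the
assumption that the caller owns queue teardown; dbr_demo() is
illustrative only:

  static int
  dbr_demo(struct rte_eth_dev *dev)
  {
      struct mlx5_devx_dbr_page *dbr_page = NULL;
      uint32_t umem_id;
      uint64_t *dbr;
      int64_t off;

      off = mlx5_get_dbr(dev, &dbr_page);
      if (off < 0)
          return -ENOMEM;
      /* The record itself lives inside the registered page. */
      dbr = (uint64_t *)(uintptr_t)(dbr_page->dbrs + off);
      umem_id = dbr_page->umem->umem_id;
      (void)dbr; /* Queue creation would hand 'dbr' to the HW object. */
      /* On teardown, return the record by UMEM id and byte offset. */
      return mlx5_release_dbr(dev, umem_id, off);
  }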

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 6dc2792..c47672d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1312,6 +1312,125 @@ struct mlx5_dev_spawn_data {
 }
 
 /**
+ * Allocate page of door-bells and register it using DevX API.
+ *
+ * @param [in] dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   Pointer to new page on success, NULL otherwise.
+ */
+static struct mlx5_devx_dbr_page *
+mlx5_alloc_dbr_page(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_devx_dbr_page *page;
+
+	/* Allocate space for door-bell page and management data. */
+	page = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_devx_dbr_page),
+				 RTE_CACHE_LINE_SIZE, dev->device->numa_node);
+	if (!page) {
+		DRV_LOG(ERR, "port %u cannot allocate dbr page",
+			dev->data->port_id);
+		return NULL;
+	}
+	/* Register allocated memory. */
+	page->umem = mlx5_glue->devx_umem_reg(priv->sh->ctx, page->dbrs,
+					      MLX5_DBR_PAGE_SIZE, 0);
+	if (!page->umem) {
+		DRV_LOG(ERR, "port %u cannot umem reg dbr page",
+			dev->data->port_id);
+		rte_free(page);
+		return NULL;
+	}
+	return page;
+}
+
+/**
+ * Find the next available door-bell, allocate new page if needed.
+ *
+ * @param [in] dev
+ *   Pointer to Ethernet device.
+ * @param [out] dbr_page
+ *   Door-bell page containing the page data.
+ *
+ * @return
+ *   Door-bell address offset on success, a negative error value otherwise.
+ */
+int64_t
+mlx5_get_dbr(struct rte_eth_dev *dev, struct mlx5_devx_dbr_page **dbr_page)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_devx_dbr_page *page = NULL;
+	uint32_t i, j;
+
+	LIST_FOREACH(page, &priv->dbrpgs, next)
+		if (page->dbr_count < MLX5_DBR_PER_PAGE)
+			break;
+	if (!page) { /* No page with free door-bell exists. */
+		page = mlx5_alloc_dbr_page(dev);
+		if (!page) /* Failed to allocate new page. */
+			return (-1);
+		LIST_INSERT_HEAD(&priv->dbrpgs, page, next);
+	}
+	/* Loop to find bitmap part with clear bit. */
+	for (i = 0;
+	     i < MLX5_DBR_BITMAP_SIZE && page->dbr_bitmap[i] == UINT64_MAX;
+	     i++)
+		; /* Empty. */
+	/* Find the first clear bit. */
+	assert(i < (MLX5_DBR_PER_PAGE / 64));
+	j = rte_bsf64(~page->dbr_bitmap[i]);
+	page->dbr_bitmap[i] |= (1ULL << j);
+	page->dbr_count++;
+	*dbr_page = page;
+	return (((i * 64) + j) * sizeof(uint64_t));
+}
+
+/**
+ * Release a door-bell record.
+ *
+ * @param [in] dev
+ *   Pointer to Ethernet device.
+ * @param [in] umem_id
+ *   UMEM ID of page containing the door-bell record to release.
+ * @param [in] offset
+ *   Offset of door-bell record in page.
+ *
+ * @return
+ *   0 on success, a negative error value otherwise.
+ */
+int32_t
+mlx5_release_dbr(struct rte_eth_dev *dev, uint32_t umem_id, uint64_t offset)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_devx_dbr_page *page = NULL;
+	int ret = 0;
+
+	LIST_FOREACH(page, &priv->dbrpgs, next)
+		/* Find the page this address belongs to. */
+		if (page->umem->umem_id == umem_id)
+			break;
+	if (!page)
+		return -EINVAL;
+	page->dbr_count--;
+	if (!page->dbr_count) {
+		/* Page not used, free it and remove from list. */
+		LIST_REMOVE(page, next);
+		if (page->umem)
+			ret = -mlx5_glue->devx_umem_dereg(page->umem);
+		rte_free(page);
+	} else {
+		/* Mark in bitmap that this door-bell is not in use. */
+		int i = (offset / MLX5_DBR_SIZE) / 64;
+		int j = (offset / MLX5_DBR_SIZE) % 64;
+
+		page->dbr_bitmap[i] &= ~(1ULL << j);
+	}
+	return ret;
+}
+
+/**
  * Spawn an Ethernet device from Verbs information.
  *
  * @param dpdk_dev
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 8aa5240..92daf86 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -36,6 +36,7 @@
 #include "mlx5_mr.h"
 #include "mlx5_autoconf.h"
 #include "mlx5_defs.h"
+#include "mlx5_glue.h"
 
 enum {
 	PCI_VENDOR_ID_MELLANOX = 0x15b3,
@@ -498,6 +499,21 @@ struct mlx5_flow_tbl_resource {
 #define MLX5_MAX_TABLES_FDB 32
 #define MLX5_GROUP_FACTOR 1
 
+#define MLX5_DBR_PAGE_SIZE 4096 /* Must be >= 512. */
+#define MLX5_DBR_SIZE 8
+#define MLX5_DBR_PER_PAGE (MLX5_DBR_PAGE_SIZE / MLX5_DBR_SIZE)
+#define MLX5_DBR_BITMAP_SIZE (MLX5_DBR_PER_PAGE / 64)
+
+struct mlx5_devx_dbr_page {
+	/* Door-bell records, must be first member in structure. */
+	uint8_t dbrs[MLX5_DBR_PAGE_SIZE];
+	LIST_ENTRY(mlx5_devx_dbr_page) next; /* Pointer to the next element. */
+	struct mlx5dv_devx_umem *umem;
+	uint32_t dbr_count; /* Number of door-bell records in use. */
+	/* 1 bit marks matching door-bell is in use. */
+	uint64_t dbr_bitmap[MLX5_DBR_BITMAP_SIZE];
+};
+
 /*
  * Shared Infiniband device context for Master/Representors
  * which belong to same IB device with multiple IB ports.
@@ -618,6 +634,7 @@ struct mlx5_priv {
 	int nl_socket_rdma; /* Netlink socket (NETLINK_RDMA). */
 	int nl_socket_route; /* Netlink socket (NETLINK_ROUTE). */
 	uint32_t nl_sn; /* Netlink message sequence number. */
+	LIST_HEAD(dbrpage, mlx5_devx_dbr_page) dbrpgs; /* Door-bell pages. */
 #ifndef RTE_ARCH_64
 	rte_spinlock_t uar_lock_cq; /* CQs share a common distinct UAR */
 	rte_spinlock_t uar_lock[MLX5_UAR_PAGE_NUM_MAX];
@@ -632,6 +649,10 @@ struct mlx5_priv {
 
 int mlx5_getenv_int(const char *);
 int mlx5_proc_priv_init(struct rte_eth_dev *dev);
+int64_t mlx5_get_dbr(struct rte_eth_dev *dev,
+		     struct mlx5_devx_dbr_page **dbr_page);
+int32_t mlx5_release_dbr(struct rte_eth_dev *dev, uint32_t umem_id,
+			 uint64_t offset);
 
 /* mlx5_ethdev.c */
 
diff --git a/drivers/net/mlx5/mlx5_glue.h b/drivers/net/mlx5/mlx5_glue.h
index f8e2b9a..6b5dadf 100644
--- a/drivers/net/mlx5/mlx5_glue.h
+++ b/drivers/net/mlx5/mlx5_glue.h
@@ -61,7 +61,7 @@
 
 #ifndef HAVE_IBV_DEVX_OBJ
 struct mlx5dv_devx_obj;
-struct mlx5dv_devx_umem;
+struct mlx5dv_devx_umem { uint32_t umem_id; };
 #endif
 
 #ifndef HAVE_IBV_DEVX_ASYNC
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 2f688ac..436b453 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -29,6 +29,7 @@
 #include <rte_spinlock.h>
 #include <rte_io.h>
 #include <rte_bus_pci.h>
+#include <rte_malloc.h>
 
 #include "mlx5_utils.h"
 #include "mlx5.h"
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 14/28] net/mlx5: rename RxQ verbs to general RxQ object
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (12 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 13/28] net/mlx5: allocate door-bells " Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 15/28] net/mlx5: rename verbs indirection table to obj Matan Azrad
                     ` (14 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Prepare for the introduction of the DevX RxQ object.
The RxQ object is currently created using Verbs only.
The next patches will add the option to create an RxQ object using
DevX.
This patch renames rxq_ibv to rxq_obj wherever relevant, and adds the
DevX items to relevant structs.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.c         |   4 +-
 drivers/net/mlx5/mlx5.h         |   4 +-
 drivers/net/mlx5/mlx5_rxq.c     | 142 ++++++++++++++++++++--------------------
 drivers/net/mlx5/mlx5_rxtx.c    |   2 +-
 drivers/net/mlx5/mlx5_rxtx.h    |  24 +++++--
 drivers/net/mlx5/mlx5_trigger.c |   6 +-
 drivers/net/mlx5/mlx5_vlan.c    |   4 +-
 7 files changed, 98 insertions(+), 88 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index c47672d..946a22b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -823,9 +823,9 @@ struct mlx5_dev_spawn_data {
 	if (ret)
 		DRV_LOG(WARNING, "port %u some indirection table still remain",
 			dev->data->port_id);
-	ret = mlx5_rxq_ibv_verify(dev);
+	ret = mlx5_rxq_obj_verify(dev);
 	if (ret)
-		DRV_LOG(WARNING, "port %u some Verbs Rx queue still remain",
+		DRV_LOG(WARNING, "port %u some Rx queue objects still remain",
 			dev->data->port_id);
 	ret = mlx5_rxq_verify(dev);
 	if (ret)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 92daf86..697e6c4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -375,7 +375,7 @@ struct mlx5_verbs_alloc_ctx {
 /* Flow drop context necessary due to Verbs API. */
 struct mlx5_drop {
 	struct mlx5_hrxq *hrxq; /* Hash Rx queue queue. */
-	struct mlx5_rxq_ibv *rxq; /* Verbs Rx queue. */
+	struct mlx5_rxq_obj *rxq; /* Rx queue object. */
 };
 
 #define MLX5_COUNTERS_PER_POOL 512
@@ -613,7 +613,7 @@ struct mlx5_priv {
 	struct mlx5_flows flows; /* RTE Flow rules. */
 	struct mlx5_flows ctrl_flows; /* Control flow rules. */
 	LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */
-	LIST_HEAD(rxqibv, mlx5_rxq_ibv) rxqsibv; /* Verbs Rx queues. */
+	LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues. */
 	LIST_HEAD(hrxq, mlx5_hrxq) hrxqs; /* Verbs Hash Rx queues. */
 	LIST_HEAD(txq, mlx5_txq_ctrl) txqsctrl; /* DPDK Tx queues. */
 	LIST_HEAD(txqibv, mlx5_txq_ibv) txqsibv; /* Verbs Tx queues. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 8567ee5..20a4695 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -533,7 +533,7 @@
 }
 
 /**
- * Get an Rx queue Verbs object.
+ * Get an Rx queue Verbs/DevX object.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -541,10 +541,10 @@
  *   Queue index in DPDK Rx queue array
  *
  * @return
- *   The Verbs object if it exists.
+ *   The Verbs/DevX object if it exists.
  */
-static struct mlx5_rxq_ibv *
-mlx5_rxq_ibv_get(struct rte_eth_dev *dev, uint16_t idx)
+static struct mlx5_rxq_obj *
+mlx5_rxq_obj_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
@@ -555,35 +555,35 @@
 	if (!rxq_data)
 		return NULL;
 	rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-	if (rxq_ctrl->ibv)
-		rte_atomic32_inc(&rxq_ctrl->ibv->refcnt);
-	return rxq_ctrl->ibv;
+	if (rxq_ctrl->obj)
+		rte_atomic32_inc(&rxq_ctrl->obj->refcnt);
+	return rxq_ctrl->obj;
 }
 
 /**
- * Release an Rx verbs queue object.
+ * Release an Rx verbs/DevX queue object.
  *
- * @param rxq_ibv
- *   Verbs Rx queue object.
+ * @param rxq_obj
+ *   Verbs/DevX Rx queue object.
  *
  * @return
  *   1 while a reference on it exists, 0 when freed.
  */
 static int
-mlx5_rxq_ibv_release(struct mlx5_rxq_ibv *rxq_ibv)
+mlx5_rxq_obj_release(struct mlx5_rxq_obj *rxq_obj)
 {
-	assert(rxq_ibv);
-	assert(rxq_ibv->wq);
-	assert(rxq_ibv->cq);
-	if (rte_atomic32_dec_and_test(&rxq_ibv->refcnt)) {
-		rxq_free_elts(rxq_ibv->rxq_ctrl);
-		claim_zero(mlx5_glue->destroy_wq(rxq_ibv->wq));
-		claim_zero(mlx5_glue->destroy_cq(rxq_ibv->cq));
-		if (rxq_ibv->channel)
+	assert(rxq_obj);
+	assert(rxq_obj->wq);
+	assert(rxq_obj->cq);
+	if (rte_atomic32_dec_and_test(&rxq_obj->refcnt)) {
+		rxq_free_elts(rxq_obj->rxq_ctrl);
+		claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+		claim_zero(mlx5_glue->destroy_cq(rxq_obj->cq));
+		if (rxq_obj->channel)
 			claim_zero(mlx5_glue->destroy_comp_channel
-				   (rxq_ibv->channel));
-		LIST_REMOVE(rxq_ibv, next);
-		rte_free(rxq_ibv);
+				   (rxq_obj->channel));
+		LIST_REMOVE(rxq_obj, next);
+		rte_free(rxq_obj);
 		return 0;
 	}
 	return 1;
@@ -622,14 +622,14 @@
 	}
 	intr_handle->type = RTE_INTR_HANDLE_EXT;
 	for (i = 0; i != n; ++i) {
-		/* This rxq ibv must not be released in this function. */
-		struct mlx5_rxq_ibv *rxq_ibv = mlx5_rxq_ibv_get(dev, i);
+		/* This rxq obj must not be released in this function. */
+		struct mlx5_rxq_obj *rxq_obj = mlx5_rxq_obj_get(dev, i);
 		int fd;
 		int flags;
 		int rc;
 
 		/* Skip queues that cannot request interrupts. */
-		if (!rxq_ibv || !rxq_ibv->channel) {
+		if (!rxq_obj || !rxq_obj->channel) {
 			/* Use invalid intr_vec[] index to disable entry. */
 			intr_handle->intr_vec[i] =
 				RTE_INTR_VEC_RXTX_OFFSET +
@@ -646,7 +646,7 @@
 			rte_errno = ENOMEM;
 			return -rte_errno;
 		}
-		fd = rxq_ibv->channel->fd;
+		fd = rxq_obj->channel->fd;
 		flags = fcntl(fd, F_GETFL);
 		rc = fcntl(fd, F_SETFL, flags | O_NONBLOCK);
 		if (rc < 0) {
@@ -702,8 +702,8 @@
 		 */
 		rxq_data = (*priv->rxqs)[i];
 		rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-		if (rxq_ctrl->ibv)
-			mlx5_rxq_ibv_release(rxq_ctrl->ibv);
+		if (rxq_ctrl->obj)
+			mlx5_rxq_obj_release(rxq_ctrl->obj);
 	}
 free:
 	rte_intr_free_epoll_fd(intr_handle);
@@ -763,15 +763,15 @@
 	}
 	rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	if (rxq_ctrl->irq) {
-		struct mlx5_rxq_ibv *rxq_ibv;
+		struct mlx5_rxq_obj *rxq_obj;
 
-		rxq_ibv = mlx5_rxq_ibv_get(dev, rx_queue_id);
-		if (!rxq_ibv) {
+		rxq_obj = mlx5_rxq_obj_get(dev, rx_queue_id);
+		if (!rxq_obj) {
 			rte_errno = EINVAL;
 			return -rte_errno;
 		}
 		mlx5_arm_cq(rxq_data, rxq_data->cq_arm_sn);
-		mlx5_rxq_ibv_release(rxq_ibv);
+		mlx5_rxq_obj_release(rxq_obj);
 	}
 	return 0;
 }
@@ -793,7 +793,7 @@
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_data *rxq_data;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
-	struct mlx5_rxq_ibv *rxq_ibv = NULL;
+	struct mlx5_rxq_obj *rxq_obj = NULL;
 	struct ibv_cq *ev_cq;
 	void *ev_ctx;
 	int ret;
@@ -806,24 +806,24 @@
 	rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	if (!rxq_ctrl->irq)
 		return 0;
-	rxq_ibv = mlx5_rxq_ibv_get(dev, rx_queue_id);
-	if (!rxq_ibv) {
+	rxq_obj = mlx5_rxq_obj_get(dev, rx_queue_id);
+	if (!rxq_obj) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	ret = mlx5_glue->get_cq_event(rxq_ibv->channel, &ev_cq, &ev_ctx);
-	if (ret || ev_cq != rxq_ibv->cq) {
+	ret = mlx5_glue->get_cq_event(rxq_obj->channel, &ev_cq, &ev_ctx);
+	if (ret || ev_cq != rxq_obj->cq) {
 		rte_errno = EINVAL;
 		goto exit;
 	}
 	rxq_data->cq_arm_sn++;
-	mlx5_glue->ack_cq_events(rxq_ibv->cq, 1);
-	mlx5_rxq_ibv_release(rxq_ibv);
+	mlx5_glue->ack_cq_events(rxq_obj->cq, 1);
+	mlx5_rxq_obj_release(rxq_obj);
 	return 0;
 exit:
 	ret = rte_errno; /* Save rte_errno before cleanup. */
-	if (rxq_ibv)
-		mlx5_rxq_ibv_release(rxq_ibv);
+	if (rxq_obj)
+		mlx5_rxq_obj_release(rxq_obj);
 	DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d",
 		dev->data->port_id, rx_queue_id);
 	rte_errno = ret; /* Restore rte_errno. */
@@ -831,7 +831,7 @@
 }
 
 /**
- * Create the Rx queue Verbs object.
+ * Create the Rx queue Verbs/DevX object.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -839,10 +839,10 @@
  *   Queue index in DPDK Rx queue array
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
-struct mlx5_rxq_ibv *
-mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
+struct mlx5_rxq_obj *
+mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
@@ -863,7 +863,7 @@ struct mlx5_rxq_ibv *
 	} attr;
 	unsigned int cqe_n;
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
-	struct mlx5_rxq_ibv *tmpl = NULL;
+	struct mlx5_rxq_obj *tmpl = NULL;
 	struct mlx5dv_cq cq_info;
 	struct mlx5dv_rwq rwq;
 	int ret = 0;
@@ -1062,7 +1062,7 @@ struct mlx5_rxq_ibv *
 	DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
 		idx, (void *)&tmpl);
 	rte_atomic32_inc(&tmpl->refcnt);
-	LIST_INSERT_HEAD(&priv->rxqsibv, tmpl, next);
+	LIST_INSERT_HEAD(&priv->rxqsobj, tmpl, next);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
 	return tmpl;
 error:
@@ -1083,24 +1083,24 @@ struct mlx5_rxq_ibv *
 }
 
 /**
- * Verify the Verbs Rx queue list is empty
+ * Verify the Rx queue objects list is empty
  *
  * @param dev
  *   Pointer to Ethernet device.
  *
  * @return
- *   The number of object not released.
+ *   The number of objects not released.
  */
 int
-mlx5_rxq_ibv_verify(struct rte_eth_dev *dev)
+mlx5_rxq_obj_verify(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int ret = 0;
-	struct mlx5_rxq_ibv *rxq_ibv;
+	struct mlx5_rxq_obj *rxq_obj;
 
-	LIST_FOREACH(rxq_ibv, &priv->rxqsibv, next) {
-		DRV_LOG(DEBUG, "port %u Verbs Rx queue %u still referenced",
-			dev->data->port_id, rxq_ibv->rxq_ctrl->rxq.idx);
+	LIST_FOREACH(rxq_obj, &priv->rxqsobj, next) {
+		DRV_LOG(DEBUG, "port %u Rx queue %u still referenced",
+			dev->data->port_id, rxq_obj->rxq_ctrl->rxq.idx);
 		++ret;
 	}
 	return ret;
@@ -1502,7 +1502,7 @@ struct mlx5_rxq_ctrl *
 		rxq_ctrl = container_of((*priv->rxqs)[idx],
 					struct mlx5_rxq_ctrl,
 					rxq);
-		mlx5_rxq_ibv_get(dev, idx);
+		mlx5_rxq_obj_get(dev, idx);
 		rte_atomic32_inc(&rxq_ctrl->refcnt);
 	}
 	return rxq_ctrl;
@@ -1529,8 +1529,8 @@ struct mlx5_rxq_ctrl *
 		return 0;
 	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
 	assert(rxq_ctrl->priv);
-	if (rxq_ctrl->ibv && !mlx5_rxq_ibv_release(rxq_ctrl->ibv))
-		rxq_ctrl->ibv = NULL;
+	if (rxq_ctrl->obj && !mlx5_rxq_obj_release(rxq_ctrl->obj))
+		rxq_ctrl->obj = NULL;
 	if (rte_atomic32_dec_and_test(&rxq_ctrl->refcnt)) {
 		mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 		LIST_REMOVE(rxq_ctrl, next);
@@ -1602,7 +1602,7 @@ struct mlx5_rxq_ctrl *
 
 		if (!rxq)
 			goto error;
-		wq[i] = rxq->ibv->wq;
+		wq[i] = rxq->obj->wq;
 		ind_tbl->queues[i] = queues[i];
 	}
 	ind_tbl->queues_n = queues_n;
@@ -1953,22 +1953,22 @@ struct mlx5_hrxq *
 }
 
 /**
- * Create a drop Rx queue Verbs object.
+ * Create a drop Rx queue Verbs/DevX object.
  *
  * @param dev
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
-static struct mlx5_rxq_ibv *
-mlx5_rxq_ibv_drop_new(struct rte_eth_dev *dev)
+static struct mlx5_rxq_obj *
+mlx5_rxq_obj_drop_new(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct ibv_context *ctx = priv->sh->ctx;
 	struct ibv_cq *cq;
 	struct ibv_wq *wq = NULL;
-	struct mlx5_rxq_ibv *rxq;
+	struct mlx5_rxq_obj *rxq;
 
 	if (priv->drop_queue.rxq)
 		return priv->drop_queue.rxq;
@@ -2013,19 +2013,19 @@ struct mlx5_hrxq *
 }
 
 /**
- * Release a drop Rx queue Verbs object.
+ * Release a drop Rx queue Verbs/DevX object.
  *
  * @param dev
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
 static void
-mlx5_rxq_ibv_drop_release(struct rte_eth_dev *dev)
+mlx5_rxq_obj_drop_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ibv *rxq = priv->drop_queue.rxq;
+	struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq;
 
 	if (rxq->wq)
 		claim_zero(mlx5_glue->destroy_wq(rxq->wq));
@@ -2049,10 +2049,10 @@ struct mlx5_hrxq *
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_ind_table_ibv *ind_tbl;
-	struct mlx5_rxq_ibv *rxq;
+	struct mlx5_rxq_obj *rxq;
 	struct mlx5_ind_table_ibv tmpl;
 
-	rxq = mlx5_rxq_ibv_drop_new(dev);
+	rxq = mlx5_rxq_obj_drop_new(dev);
 	if (!rxq)
 		return NULL;
 	tmpl.ind_table = mlx5_glue->create_rwq_ind_table
@@ -2077,7 +2077,7 @@ struct mlx5_hrxq *
 	ind_tbl->ind_table = tmpl.ind_table;
 	return ind_tbl;
 error:
-	mlx5_rxq_ibv_drop_release(dev);
+	mlx5_rxq_obj_drop_release(dev);
 	return NULL;
 }
 
@@ -2094,7 +2094,7 @@ struct mlx5_hrxq *
 	struct mlx5_ind_table_ibv *ind_tbl = priv->drop_queue.hrxq->ind_table;
 
 	claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
-	mlx5_rxq_ibv_drop_release(dev);
+	mlx5_rxq_obj_drop_release(dev);
 	rte_free(ind_tbl);
 	priv->drop_queue.hrxq->ind_table = NULL;
 }
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 35d60f4..9bfb002 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -815,7 +815,7 @@ enum mlx5_txcmp_code {
 		struct mlx5_rxq_ctrl *rxq_ctrl =
 			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-		ret = mlx5_glue->modify_wq(rxq_ctrl->ibv->wq, &mod);
+		ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change Rx WQ state to %u  - %s\n",
 					sm->state, strerror(errno));
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 436b453..eb20a07 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -141,13 +141,23 @@ struct mlx5_rxq_data {
 	uint32_t tunnel; /* Tunnel information. */
 } __rte_cache_aligned;
 
-/* Verbs Rx queue elements. */
-struct mlx5_rxq_ibv {
-	LIST_ENTRY(mlx5_rxq_ibv) next; /* Pointer to the next element. */
+enum mlx5_rxq_obj_type {
+	MLX5_RXQ_OBJ_TYPE_IBV,		/* mlx5_rxq_obj with ibv_wq. */
+	MLX5_RXQ_OBJ_TYPE_DEVX_RQ,	/* mlx5_rxq_obj with mlx5_devx_rq. */
+};
+
+/* Verbs/DevX Rx queue elements. */
+struct mlx5_rxq_obj {
+	LIST_ENTRY(mlx5_rxq_obj) next; /* Pointer to the next element. */
 	rte_atomic32_t refcnt; /* Reference counter. */
 	struct mlx5_rxq_ctrl *rxq_ctrl; /* Back pointer to parent. */
 	struct ibv_cq *cq; /* Completion Queue. */
-	struct ibv_wq *wq; /* Work Queue. */
+	enum mlx5_rxq_obj_type type;
+	RTE_STD_C11
+	union {
+		struct ibv_wq *wq; /* Work Queue. */
+		struct mlx5_devx_obj *rq; /* DevX object for Rx Queue. */
+	};
 	struct ibv_comp_channel *channel;
 };
 
@@ -156,7 +166,7 @@ struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
 	rte_atomic32_t refcnt; /* Reference counter. */
-	struct mlx5_rxq_ibv *ibv; /* Verbs elements. */
+	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
 	unsigned int socket; /* CPU socket ID for allocations. */
 	unsigned int irq:1; /* Whether IRQ is enabled. */
@@ -300,8 +310,8 @@ int mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
 int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
-struct mlx5_rxq_ibv *mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx);
-int mlx5_rxq_ibv_verify(struct rte_eth_dev *dev);
+struct mlx5_rxq_obj *mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx);
+int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
 				   uint16_t desc, unsigned int socket,
 				   const struct rte_eth_rxconf *conf,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 864c985..54353ee 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -123,10 +123,10 @@
 		ret = rxq_alloc_elts(rxq_ctrl);
 		if (ret)
 			goto error;
-		rxq_ctrl->ibv = mlx5_rxq_ibv_new(dev, i);
-		if (!rxq_ctrl->ibv)
+		rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i);
+		if (!rxq_ctrl->obj)
 			goto error;
-		rxq_ctrl->wqn = rxq_ctrl->ibv->wq->wq_num;
+		rxq_ctrl->wqn = rxq_ctrl->obj->wq->wq_num;
 	}
 	return 0;
 error:
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 4004930..67518c2 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -127,7 +127,7 @@
 	}
 	DRV_LOG(DEBUG, "port %u set VLAN offloads 0x%x for port %uqueue %d",
 		dev->data->port_id, vlan_offloads, rxq->port_id, queue);
-	if (!rxq_ctrl->ibv) {
+	if (!rxq_ctrl->obj) {
 		/* Update related bits in RX queue. */
 		rxq->vlan_strip = !!on;
 		return;
@@ -137,7 +137,7 @@
 		.flags_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING,
 		.flags = vlan_offloads,
 	};
-	ret = mlx5_glue->modify_wq(rxq_ctrl->ibv->wq, &mod);
+	ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
 	if (ret) {
 		DRV_LOG(ERR, "port %u failed to modified stripping mode: %s",
 			dev->data->port_id, strerror(rte_errno));
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 15/28] net/mlx5: rename verbs indirection table to obj
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (13 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 14/28] net/mlx5: rename RxQ verbs to general RxQ object Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 16/28] net/mlx5: rename hash RxQ verbs to general Matan Azrad
                     ` (13 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Prepare for the introduction of the DevX RQT object.
The Rx indirection table object is currently created using Verbs only.
The next patches will add the option to create an RQT object using
DevX.
This patch renames ind_table_ibv to ind_table_obj wherever relevant,
and adds the DevX items to the relevant structs.
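
Illustration (not part of this patch): after the rename the indirection
table becomes a tagged union, so the later DevX patches can dispatch on
the new type field. A minimal sketch of that pattern; the helper name
ind_table_obj_destroy() is hypothetical:

	/* Hypothetical dispatch over the new tagged union. */
	static void
	ind_table_obj_destroy(struct mlx5_ind_table_obj *ind_tbl)
	{
		if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV)
			claim_zero(mlx5_glue->destroy_rwq_ind_table
				   (ind_tbl->ind_table));
		else /* MLX5_IND_TBL_TYPE_DEVX */
			claim_zero(mlx5_devx_cmd_destroy(ind_tbl->rqt));
	}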

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.c      |  2 +-
 drivers/net/mlx5/mlx5.h      |  4 +--
 drivers/net/mlx5/mlx5_rxq.c  | 64 ++++++++++++++++++++++----------------------
 drivers/net/mlx5/mlx5_rxtx.h | 20 ++++++++++----
 4 files changed, 50 insertions(+), 40 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 946a22b..da1a47f 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -819,7 +819,7 @@ struct mlx5_dev_spawn_data {
 	if (ret)
 		DRV_LOG(WARNING, "port %u some hash Rx queue still remain",
 			dev->data->port_id);
-	ret = mlx5_ind_table_ibv_verify(dev);
+	ret = mlx5_ind_table_obj_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some indirection table still remain",
 			dev->data->port_id);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 697e6c4..b080064 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -617,8 +617,8 @@ struct mlx5_priv {
 	LIST_HEAD(hrxq, mlx5_hrxq) hrxqs; /* Verbs Hash Rx queues. */
 	LIST_HEAD(txq, mlx5_txq_ctrl) txqsctrl; /* DPDK Tx queues. */
 	LIST_HEAD(txqibv, mlx5_txq_ibv) txqsibv; /* Verbs Tx queues. */
-	/* Verbs Indirection tables. */
-	LIST_HEAD(ind_tables, mlx5_ind_table_ibv) ind_tbls;
+	/* Indirection tables. */
+	LIST_HEAD(ind_tables, mlx5_ind_table_obj) ind_tbls;
 	/* Pointer to next element. */
 	rte_atomic32_t refcnt; /**< Reference counter. */
 	struct ibv_flow_action *verbs_action;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 20a4695..507a1ab 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1576,14 +1576,14 @@ struct mlx5_rxq_ctrl *
  *   Number of queues in the array.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
-static struct mlx5_ind_table_ibv *
-mlx5_ind_table_ibv_new(struct rte_eth_dev *dev, const uint16_t *queues,
+static struct mlx5_ind_table_obj *
+mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
 		       uint32_t queues_n)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 	const unsigned int wq_n = rte_is_power_of_2(queues_n) ?
 		log2above(queues_n) :
 		log2above(priv->config.ind_table_max_size);
@@ -1642,12 +1642,12 @@ struct mlx5_rxq_ctrl *
  * @return
  *   An indirection table if found.
  */
-static struct mlx5_ind_table_ibv *
-mlx5_ind_table_ibv_get(struct rte_eth_dev *dev, const uint16_t *queues,
+static struct mlx5_ind_table_obj *
+mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues,
 		       uint32_t queues_n)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 
 	LIST_FOREACH(ind_tbl, &priv->ind_tbls, next) {
 		if ((ind_tbl->queues_n == queues_n) &&
@@ -1678,8 +1678,8 @@ struct mlx5_rxq_ctrl *
  *   1 while a reference on it exists, 0 when freed.
  */
 static int
-mlx5_ind_table_ibv_release(struct rte_eth_dev *dev,
-			   struct mlx5_ind_table_ibv *ind_tbl)
+mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
+			   struct mlx5_ind_table_obj *ind_tbl)
 {
 	unsigned int i;
 
@@ -1706,15 +1706,15 @@ struct mlx5_rxq_ctrl *
  *   The number of object not released.
  */
 int
-mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev)
+mlx5_ind_table_obj_verify(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 	int ret = 0;
 
 	LIST_FOREACH(ind_tbl, &priv->ind_tbls, next) {
 		DRV_LOG(DEBUG,
-			"port %u Verbs indirection table %p still referenced",
+			"port %u indirection table obj %p still referenced",
 			dev->data->port_id, (void *)ind_tbl);
 		++ret;
 	}
@@ -1752,7 +1752,7 @@ struct mlx5_hrxq *
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_hrxq *hrxq;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 	struct ibv_qp *qp;
 #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
 	struct mlx5dv_qp_init_attr qp_init_attr;
@@ -1760,9 +1760,9 @@ struct mlx5_hrxq *
 	int err;
 
 	queues_n = hash_fields ? queues_n : 1;
-	ind_tbl = mlx5_ind_table_ibv_get(dev, queues, queues_n);
+	ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
 	if (!ind_tbl)
-		ind_tbl = mlx5_ind_table_ibv_new(dev, queues, queues_n);
+		ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
 	if (!ind_tbl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -1844,7 +1844,7 @@ struct mlx5_hrxq *
 	return hrxq;
 error:
 	err = rte_errno; /* Save rte_errno before cleanup. */
-	mlx5_ind_table_ibv_release(dev, ind_tbl);
+	mlx5_ind_table_obj_release(dev, ind_tbl);
 	if (qp)
 		claim_zero(mlx5_glue->destroy_qp(qp));
 	rte_errno = err; /* Restore rte_errno. */
@@ -1878,7 +1878,7 @@ struct mlx5_hrxq *
 
 	queues_n = hash_fields ? queues_n : 1;
 	LIST_FOREACH(hrxq, &priv->hrxqs, next) {
-		struct mlx5_ind_table_ibv *ind_tbl;
+		struct mlx5_ind_table_obj *ind_tbl;
 
 		if (hrxq->rss_key_len != rss_key_len)
 			continue;
@@ -1886,11 +1886,11 @@ struct mlx5_hrxq *
 			continue;
 		if (hrxq->hash_fields != hash_fields)
 			continue;
-		ind_tbl = mlx5_ind_table_ibv_get(dev, queues, queues_n);
+		ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
 		if (!ind_tbl)
 			continue;
 		if (ind_tbl != hrxq->ind_table) {
-			mlx5_ind_table_ibv_release(dev, ind_tbl);
+			mlx5_ind_table_obj_release(dev, ind_tbl);
 			continue;
 		}
 		rte_atomic32_inc(&hrxq->refcnt);
@@ -1918,12 +1918,12 @@ struct mlx5_hrxq *
 		mlx5_glue->destroy_flow_action(hrxq->action);
 #endif
 		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
-		mlx5_ind_table_ibv_release(dev, hrxq->ind_table);
+		mlx5_ind_table_obj_release(dev, hrxq->ind_table);
 		LIST_REMOVE(hrxq, next);
 		rte_free(hrxq);
 		return 0;
 	}
-	claim_nonzero(mlx5_ind_table_ibv_release(dev, hrxq->ind_table));
+	claim_nonzero(mlx5_ind_table_obj_release(dev, hrxq->ind_table));
 	return 1;
 }
 
@@ -2042,15 +2042,15 @@ struct mlx5_hrxq *
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
-static struct mlx5_ind_table_ibv *
-mlx5_ind_table_ibv_drop_new(struct rte_eth_dev *dev)
+static struct mlx5_ind_table_obj *
+mlx5_ind_table_obj_drop_new(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 	struct mlx5_rxq_obj *rxq;
-	struct mlx5_ind_table_ibv tmpl;
+	struct mlx5_ind_table_obj tmpl;
 
 	rxq = mlx5_rxq_obj_drop_new(dev);
 	if (!rxq)
@@ -2088,10 +2088,10 @@ struct mlx5_hrxq *
  *   Pointer to Ethernet device.
  */
 static void
-mlx5_ind_table_ibv_drop_release(struct rte_eth_dev *dev)
+mlx5_ind_table_obj_drop_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl = priv->drop_queue.hrxq->ind_table;
+	struct mlx5_ind_table_obj *ind_tbl = priv->drop_queue.hrxq->ind_table;
 
 	claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
 	mlx5_rxq_obj_drop_release(dev);
@@ -2112,7 +2112,7 @@ struct mlx5_hrxq *
 mlx5_hrxq_drop_new(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_ind_table_obj *ind_tbl;
 	struct ibv_qp *qp;
 	struct mlx5_hrxq *hrxq;
 
@@ -2120,7 +2120,7 @@ struct mlx5_hrxq *
 		rte_atomic32_inc(&priv->drop_queue.hrxq->refcnt);
 		return priv->drop_queue.hrxq;
 	}
-	ind_tbl = mlx5_ind_table_ibv_drop_new(dev);
+	ind_tbl = mlx5_ind_table_obj_drop_new(dev);
 	if (!ind_tbl)
 		return NULL;
 	qp = mlx5_glue->create_qp_ex(priv->sh->ctx,
@@ -2168,7 +2168,7 @@ struct mlx5_hrxq *
 	return hrxq;
 error:
 	if (ind_tbl)
-		mlx5_ind_table_ibv_drop_release(dev);
+		mlx5_ind_table_obj_drop_release(dev);
 	return NULL;
 }
 
@@ -2189,7 +2189,7 @@ struct mlx5_hrxq *
 		mlx5_glue->destroy_flow_action(hrxq->action);
 #endif
 		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
-		mlx5_ind_table_ibv_drop_release(dev);
+		mlx5_ind_table_obj_drop_release(dev);
 		rte_free(hrxq);
 		priv->drop_queue.hrxq = NULL;
 	}
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index eb20a07..0e7b428 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -176,11 +176,21 @@ struct mlx5_rxq_ctrl {
 	uint16_t dump_file_n; /* Number of dump files. */
 };
 
+enum mlx5_ind_tbl_type {
+	MLX5_IND_TBL_TYPE_IBV,
+	MLX5_IND_TBL_TYPE_DEVX,
+};
+
 /* Indirection table. */
-struct mlx5_ind_table_ibv {
-	LIST_ENTRY(mlx5_ind_table_ibv) next; /* Pointer to the next element. */
+struct mlx5_ind_table_obj {
+	LIST_ENTRY(mlx5_ind_table_obj) next; /* Pointer to the next element. */
 	rte_atomic32_t refcnt; /* Reference counter. */
-	struct ibv_rwq_ind_table *ind_table; /**< Indirection table. */
+	enum mlx5_ind_tbl_type type;
+	RTE_STD_C11
+	union {
+		struct ibv_rwq_ind_table *ind_table; /**< Indirection table. */
+		struct mlx5_devx_obj *rqt; /* DevX RQT object. */
+	};
 	uint32_t queues_n; /**< Number of queues in the list. */
 	uint16_t queues[]; /**< Queue list. */
 };
@@ -189,7 +199,7 @@ struct mlx5_ind_table_ibv {
 struct mlx5_hrxq {
 	LIST_ENTRY(mlx5_hrxq) next; /* Pointer to the next element. */
 	rte_atomic32_t refcnt; /* Reference counter. */
-	struct mlx5_ind_table_ibv *ind_table; /* Indirection table. */
+	struct mlx5_ind_table_obj *ind_table; /* Indirection table. */
 	struct ibv_qp *qp; /* Verbs queue pair. */
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	void *action; /* DV QP action pointer. */
@@ -320,7 +330,7 @@ struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
 int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
-int mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev);
+int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_hrxq *mlx5_hrxq_new(struct rte_eth_dev *dev,
 				const uint8_t *rss_key, uint32_t rss_key_len,
 				uint64_t hash_fields,
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 16/28] net/mlx5: rename hash RxQ verbs to general
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (14 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 15/28] net/mlx5: rename verbs indirection table to obj Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 17/28] net/mlx5: update queue state modify function Matan Azrad
                     ` (12 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Prepare for introducing the use of a DevX TIR object.
The hash Rx queue is currently created using a Verbs QP only.
The next patches will add the option to create it with a TIR object
using DevX.
This patch renames hrxq_ibv to hrxq wherever relevant, and adds
the DevX items to the relevant structs.
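
Illustration (not part of this patch): thanks to the union, existing
Verbs code keeps dereferencing hrxq->qp unchanged, while a later DevX
path can use hrxq->tir over the same storage. A minimal sketch; the
helper and its is_devx flag are hypothetical:

	/* Hypothetical teardown branch once DevX TIR support lands. */
	static void
	hrxq_obj_destroy(struct mlx5_hrxq *hrxq, int is_devx)
	{
		if (is_devx)
			claim_zero(mlx5_devx_cmd_destroy(hrxq->tir));
		else
			claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
	}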

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.c      | 2 +-
 drivers/net/mlx5/mlx5_rxq.c  | 8 ++++----
 drivers/net/mlx5/mlx5_rxtx.h | 8 ++++++--
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index da1a47f..5d07183 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -815,7 +815,7 @@ struct mlx5_dev_spawn_data {
 		mlx5_free_shared_ibctx(priv->sh);
 		priv->sh = NULL;
 	}
-	ret = mlx5_hrxq_ibv_verify(dev);
+	ret = mlx5_hrxq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some hash Rx queue still remain",
 			dev->data->port_id);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 507a1ab..d3bc3ee 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1741,7 +1741,7 @@ struct mlx5_rxq_ctrl *
  *   Tunnel type.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
 struct mlx5_hrxq *
 mlx5_hrxq_new(struct rte_eth_dev *dev,
@@ -1937,7 +1937,7 @@ struct mlx5_hrxq *
  *   The number of object not released.
  */
 int
-mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev)
+mlx5_hrxq_verify(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_hrxq *hrxq;
@@ -1945,7 +1945,7 @@ struct mlx5_hrxq *
 
 	LIST_FOREACH(hrxq, &priv->hrxqs, next) {
 		DRV_LOG(DEBUG,
-			"port %u Verbs hash Rx queue %p still referenced",
+			"port %u hash Rx queue %p still referenced",
 			dev->data->port_id, (void *)hrxq);
 		++ret;
 	}
@@ -2106,7 +2106,7 @@ struct mlx5_hrxq *
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
 struct mlx5_hrxq *
 mlx5_hrxq_drop_new(struct rte_eth_dev *dev)
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 0e7b428..f4f5c0d 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -200,7 +200,11 @@ struct mlx5_hrxq {
 	LIST_ENTRY(mlx5_hrxq) next; /* Pointer to the next element. */
 	rte_atomic32_t refcnt; /* Reference counter. */
 	struct mlx5_ind_table_obj *ind_table; /* Indirection table. */
-	struct ibv_qp *qp; /* Verbs queue pair. */
+	RTE_STD_C11
+	union {
+		struct ibv_qp *qp; /* Verbs queue pair. */
+		struct mlx5_devx_obj *tir; /* DevX TIR object. */
+	};
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	void *action; /* DV QP action pointer. */
 #endif
@@ -341,7 +345,7 @@ struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
 				uint64_t hash_fields,
 				const uint16_t *queues, uint32_t queues_n);
 int mlx5_hrxq_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hxrq);
-int mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev);
+int mlx5_hrxq_verify(struct rte_eth_dev *dev);
 struct mlx5_hrxq *mlx5_hrxq_drop_new(struct rte_eth_dev *dev);
 void mlx5_hrxq_drop_release(struct rte_eth_dev *dev);
 uint64_t mlx5_get_rx_port_offloads(struct rte_eth_dev *dev);
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 17/28] net/mlx5: update queue state modify function
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (15 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 16/28] net/mlx5: rename hash RxQ verbs to general Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 18/28] net/mlx5: store protection domain number on create Matan Azrad
                     ` (11 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Function mlx5_queue_state_modify_primary() was implemented to handle
state changes for queues created using the Verbs API.

This patch updates function mlx5_queue_state_modify_primary() to also
support state changes of an RQ object created using the DevX API.
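
For reference, the Verbs WQ states map onto DevX RQ state transitions
as follows (rq_state is the expected current state, state the target,
exactly as set in the hunk below):

	IBV_WQS_RESET:  MLX5_RQC_STATE_ERR -> MLX5_RQC_STATE_RST
	IBV_WQS_RDY:    MLX5_RQC_STATE_RST -> MLX5_RQC_STATE_RDY
	IBV_WQS_ERR:    MLX5_RQC_STATE_RDY -> MLX5_RQC_STATE_ERR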

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 9bfb002..584da3e 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -788,7 +788,7 @@ enum mlx5_txcmp_code {
 }
 
 /**
- * Modify a Verbs queue state.
+ * Modify a Verbs/DevX queue state.
  * This must be called from the primary process.
  *
  * @param dev
@@ -807,15 +807,34 @@ enum mlx5_txcmp_code {
 	struct mlx5_priv *priv = dev->data->dev_private;
 
 	if (sm->is_wq) {
-		struct ibv_wq_attr mod = {
-			.attr_mask = IBV_WQ_ATTR_STATE,
-			.wq_state = sm->state,
-		};
 		struct mlx5_rxq_data *rxq = (*priv->rxqs)[sm->queue_id];
 		struct mlx5_rxq_ctrl *rxq_ctrl =
 			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 
-		ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
+		if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
+			struct ibv_wq_attr mod = {
+				.attr_mask = IBV_WQ_ATTR_STATE,
+				.wq_state = sm->state,
+			};
+
+			ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
+		} else { /* rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ. */
+			struct mlx5_devx_modify_rq_attr rq_attr;
+
+			memset(&rq_attr, 0, sizeof(rq_attr));
+			if (sm->state == IBV_WQS_RESET) {
+				rq_attr.rq_state = MLX5_RQC_STATE_ERR;
+				rq_attr.state = MLX5_RQC_STATE_RST;
+			} else if (sm->state == IBV_WQS_RDY) {
+				rq_attr.rq_state = MLX5_RQC_STATE_RST;
+				rq_attr.state = MLX5_RQC_STATE_RDY;
+			} else if (sm->state == IBV_WQS_ERR) {
+				rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+				rq_attr.state = MLX5_RQC_STATE_ERR;
+			}
+			ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq,
+						      &rq_attr);
+		}
 		if (ret) {
 			DRV_LOG(ERR, "Cannot change Rx WQ state to %u  - %s\n",
 					sm->state, strerror(errno));
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 18/28] net/mlx5: store protection domain number on create
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (16 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 17/28] net/mlx5: update queue state modify function Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 19/28] net/mlx5: func to create Rx verbs completion queue Matan Azrad
                     ` (10 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Function mlx5_alloc_shared_ibctx() allocates a Protection Domain using
the Verbs API, as part of the shared IB device context.
This patch adds reading and storing of the pdn value from the created
PD object, using the DV API.
The pdn value is required when creating a WQ using the DevX API.

This patch also updates function flow_dv_create_counter_stat_mem_mng(),
which uses the pdn value as well.
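
Illustration (not part of this patch): the pdn extraction is a one-shot
query right after PD allocation. A minimal usage sketch, assuming the
glue wrapper alloc_pd() and a shared context sh as in
mlx5_alloc_shared_ibctx():

	sh->pd = mlx5_glue->alloc_pd(sh->ctx);
	if (sh->pd && mlx5_get_pdn(sh->pd, &sh->pdn) == 0)
		/* sh->pdn is now usable for DevX WQ and mkey creation. */
		DRV_LOG(DEBUG, "PD number %u extracted", sh->pdn);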

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.c         | 38 ++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h         |  1 +
 drivers/net/mlx5/mlx5_flow_dv.c |  7 +------
 3 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 5d07183..973c80a 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -270,6 +270,37 @@ struct mlx5_dev_spawn_data {
 }
 
 /**
+ * Extract pdn of PD object using DV API.
+ *
+ * @param[in] pd
+ *   Pointer to the verbs PD object.
+ * @param[out] pdn
+ *   Pointer to the PD object number variable.
+ *
+ * @return
+ *   0 on success, error value otherwise.
+ */
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+static int
+mlx5_get_pdn(struct ibv_pd *pd __rte_unused, uint32_t *pdn __rte_unused)
+{
+	struct mlx5dv_obj obj;
+	struct mlx5dv_pd pd_info;
+	int ret = 0;
+
+	obj.pd.in = pd;
+	obj.pd.out = &pd_info;
+	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD);
+	if (ret) {
+		DRV_LOG(DEBUG, "Fail to get PD object info");
+		return ret;
+	}
+	*pdn = pd_info.pdn;
+	return 0;
+}
+#endif /* HAVE_IBV_FLOW_DV_SUPPORT */
+
+/**
  * Allocate shared IB device context. If there is multiport device the
  * master and representors will share this context, if there is single
  * port dedicated IB device, the context will be used by only given
@@ -357,6 +388,13 @@ struct mlx5_dev_spawn_data {
 		err = ENOMEM;
 		goto error;
 	}
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	err = mlx5_get_pdn(sh->pd, &sh->pdn);
+	if (err) {
+		DRV_LOG(ERR, "Fail to extract pdn from PD");
+		goto error;
+	}
+#endif /* HAVE_IBV_FLOW_DV_SUPPORT */
 	/*
 	 * Once the device is added to the list of memory event
 	 * callback, its global MR cache table cannot be expanded
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b080064..b98c406 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -525,6 +525,7 @@ struct mlx5_ibv_shared {
 	uint32_t max_port; /* Maximal IB device port index. */
 	struct ibv_context *ctx; /* Verbs/DV context. */
 	struct ibv_pd *pd; /* Protection Domain. */
+	uint32_t pdn; /* Protection Domain number. */
 	uint32_t tdn; /* Transport Domain number. */
 	char ibdev_name[IBV_SYSFS_NAME_MAX]; /* IB device name. */
 	char ibdev_path[IBV_SYSFS_PATH_MAX]; /* IB device path for secondary */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index afaa19c..36696c8 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2319,8 +2319,6 @@ struct field_modify_info modify_tcp[] = {
 {
 	struct mlx5_ibv_shared *sh = ((struct mlx5_priv *)
 					(dev->data->dev_private))->sh;
-	struct mlx5dv_pd dv_pd;
-	struct mlx5dv_obj dv_obj;
 	struct mlx5_devx_mkey_attr mkey_attr;
 	struct mlx5_counter_stats_mem_mng *mem_mng;
 	volatile struct flow_counter_stats *raw_data;
@@ -2344,13 +2342,10 @@ struct field_modify_info modify_tcp[] = {
 		rte_free(mem);
 		return NULL;
 	}
-	dv_obj.pd.in = sh->pd;
-	dv_obj.pd.out = &dv_pd;
-	mlx5_glue->dv_init_obj(&dv_obj, MLX5DV_OBJ_PD);
 	mkey_attr.addr = (uintptr_t)mem;
 	mkey_attr.size = size;
 	mkey_attr.umem_id = mem_mng->umem->umem_id;
-	mkey_attr.pd = dv_pd.pdn;
+	mkey_attr.pd = sh->pdn;
 	mem_mng->dm = mlx5_devx_cmd_mkey_create(sh->ctx, &mkey_attr);
 	if (!mem_mng->dm) {
 		mlx5_glue->devx_umem_dereg(mem_mng->umem);
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 19/28] net/mlx5: func to create Rx verbs completion queue
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (17 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 18/28] net/mlx5: store protection domain number on create Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 20/28] net/mlx5: function to create Rx verbs work queue Matan Azrad
                     ` (9 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

The Verbs CQ for the RxQ is created inside function mlx5_rxq_obj_new().
This patch moves the CQ creation to a dedicated function,
mlx5_ibv_cq_new().
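
A worked sizing example (numbers are illustrative, not from the patch):
with elts_n = 10 the queue holds wqe_n = 1 << 10 = 1024 WQEs. Without
MPRQ the CQ is sized cqe_n = wqe_n - 1 = 1023; with MPRQ and
strd_num_n = 6 it is cqe_n = 1024 * (1 << 6) - 1 = 65535, since every
stride may complete separately. When CQE compression is enabled the
requested CQE count is additionally doubled, except on the vectorized
Rx path, which must keep cq_ci and rq_ci aligned.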

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 159 +++++++++++++++++++++++++-------------------
 1 file changed, 92 insertions(+), 67 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d3bc3ee..e5015bb 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -831,6 +831,75 @@
 }
 
 /**
+ * Create a CQ Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param priv
+ *   Pointer to device private data.
+ * @param rxq_data
+ *   Pointer to Rx queue data.
+ * @param cqe_n
+ *   Number of CQEs in CQ.
+ * @param rxq_obj
+ *   Pointer to Rx queue object data.
+ *
+ * @return
+ *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_cq *
+mlx5_ibv_cq_new(struct rte_eth_dev *dev, struct mlx5_priv *priv,
+		struct mlx5_rxq_data *rxq_data,
+		unsigned int cqe_n, struct mlx5_rxq_obj *rxq_obj)
+{
+	struct {
+		struct ibv_cq_init_attr_ex ibv;
+		struct mlx5dv_cq_init_attr mlx5;
+	} cq_attr;
+
+	cq_attr.ibv = (struct ibv_cq_init_attr_ex){
+		.cqe = cqe_n,
+		.channel = rxq_obj->channel,
+		.comp_mask = 0,
+	};
+	cq_attr.mlx5 = (struct mlx5dv_cq_init_attr){
+		.comp_mask = 0,
+	};
+	if (priv->config.cqe_comp && !rxq_data->hw_timestamp) {
+		cq_attr.mlx5.comp_mask |=
+				MLX5DV_CQ_INIT_ATTR_MASK_COMPRESSED_CQE;
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+		cq_attr.mlx5.cqe_comp_res_format =
+				mlx5_rxq_mprq_enabled(rxq_data) ?
+				MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX :
+				MLX5DV_CQE_RES_FORMAT_HASH;
+#else
+		cq_attr.mlx5.cqe_comp_res_format = MLX5DV_CQE_RES_FORMAT_HASH;
+#endif
+		/*
+		 * For vectorized Rx, it must not be doubled in order to
+		 * make cq_ci and rq_ci aligned.
+		 */
+		if (mlx5_rxq_check_vec_support(rxq_data) < 0)
+			cq_attr.ibv.cqe *= 2;
+	} else if (priv->config.cqe_comp && rxq_data->hw_timestamp) {
+		DRV_LOG(DEBUG,
+			"port %u Rx CQE compression is disabled for HW"
+			" timestamp",
+			dev->data->port_id);
+	}
+#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
+	if (priv->config.cqe_pad) {
+		cq_attr.mlx5.comp_mask |= MLX5DV_CQ_INIT_ATTR_MASK_FLAGS;
+		cq_attr.mlx5.flags |= MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD;
+	}
+#endif
+	return mlx5_glue->cq_ex_to_cq(mlx5_glue->dv_create_cq(priv->sh->ctx,
+							      &cq_attr.ibv,
+							      &cq_attr.mlx5));
+}
+
+/**
  * Create the Rx queue Verbs/DevX object.
  *
  * @param dev
@@ -849,18 +918,12 @@ struct mlx5_rxq_obj *
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	struct ibv_wq_attr mod;
-	union {
-		struct {
-			struct ibv_cq_init_attr_ex ibv;
-			struct mlx5dv_cq_init_attr mlx5;
-		} cq;
-		struct {
-			struct ibv_wq_init_attr ibv;
+	struct {
+		struct ibv_wq_init_attr ibv;
 #ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-			struct mlx5dv_wq_init_attr mlx5;
+		struct mlx5dv_wq_init_attr mlx5;
 #endif
-		} wq;
-	} attr;
+	} wq_attr;
 	unsigned int cqe_n;
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
 	struct mlx5_rxq_obj *tmpl = NULL;
@@ -872,7 +935,7 @@ struct mlx5_rxq_obj *
 	const int mprq_en = mlx5_rxq_mprq_enabled(rxq_data);
 
 	assert(rxq_data);
-	assert(!rxq_ctrl->ibv);
+	assert(!rxq_ctrl->obj);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
 	priv->verbs_alloc_ctx.obj = rxq_ctrl;
 	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
@@ -898,46 +961,8 @@ struct mlx5_rxq_obj *
 		cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
 	else
 		cqe_n = wqe_n  - 1;
-	attr.cq.ibv = (struct ibv_cq_init_attr_ex){
-		.cqe = cqe_n,
-		.channel = tmpl->channel,
-		.comp_mask = 0,
-	};
-	attr.cq.mlx5 = (struct mlx5dv_cq_init_attr){
-		.comp_mask = 0,
-	};
-	if (config->cqe_comp && !rxq_data->hw_timestamp) {
-		attr.cq.mlx5.comp_mask |=
-			MLX5DV_CQ_INIT_ATTR_MASK_COMPRESSED_CQE;
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-		attr.cq.mlx5.cqe_comp_res_format =
-			mprq_en ? MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX :
-				  MLX5DV_CQE_RES_FORMAT_HASH;
-#else
-		attr.cq.mlx5.cqe_comp_res_format = MLX5DV_CQE_RES_FORMAT_HASH;
-#endif
-		/*
-		 * For vectorized Rx, it must not be doubled in order to
-		 * make cq_ci and rq_ci aligned.
-		 */
-		if (mlx5_rxq_check_vec_support(rxq_data) < 0)
-			attr.cq.ibv.cqe *= 2;
-	} else if (config->cqe_comp && rxq_data->hw_timestamp) {
-		DRV_LOG(DEBUG,
-			"port %u Rx CQE compression is disabled for HW"
-			" timestamp",
-			dev->data->port_id);
-	}
-#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
-	if (config->cqe_pad) {
-		attr.cq.mlx5.comp_mask |= MLX5DV_CQ_INIT_ATTR_MASK_FLAGS;
-		attr.cq.mlx5.flags |= MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD;
-	}
-#endif
-	tmpl->cq = mlx5_glue->cq_ex_to_cq
-		(mlx5_glue->dv_create_cq(priv->sh->ctx, &attr.cq.ibv,
-					 &attr.cq.mlx5));
-	if (tmpl->cq == NULL) {
+	tmpl->cq = mlx5_ibv_cq_new(dev, priv, rxq_data, cqe_n, tmpl);
+	if (!tmpl->cq) {
 		DRV_LOG(ERR, "port %u Rx queue %u CQ creation failure",
 			dev->data->port_id, idx);
 		rte_errno = ENOMEM;
@@ -947,7 +972,7 @@ struct mlx5_rxq_obj *
 		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
 	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
 		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
-	attr.wq.ibv = (struct ibv_wq_init_attr){
+	wq_attr.ibv = (struct ibv_wq_init_attr){
 		.wq_context = NULL, /* Could be useful in the future. */
 		.wq_type = IBV_WQT_RQ,
 		/* Max number of outstanding WRs. */
@@ -965,37 +990,37 @@ struct mlx5_rxq_obj *
 	};
 	/* By default, FCS (CRC) is stripped by hardware. */
 	if (rxq_data->crc_present) {
-		attr.wq.ibv.create_flags |= IBV_WQ_FLAGS_SCATTER_FCS;
-		attr.wq.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_SCATTER_FCS;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
 	}
 	if (config->hw_padding) {
 #if defined(HAVE_IBV_WQ_FLAG_RX_END_PADDING)
-		attr.wq.ibv.create_flags |= IBV_WQ_FLAG_RX_END_PADDING;
-		attr.wq.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAG_RX_END_PADDING;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
 #elif defined(HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING)
-		attr.wq.ibv.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
-		attr.wq.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
 #endif
 	}
 #ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-	attr.wq.mlx5 = (struct mlx5dv_wq_init_attr){
+	wq_attr.mlx5 = (struct mlx5dv_wq_init_attr){
 		.comp_mask = 0,
 	};
 	if (mprq_en) {
 		struct mlx5dv_striding_rq_init_attr *mprq_attr =
-			&attr.wq.mlx5.striding_rq_attrs;
+			&wq_attr.mlx5.striding_rq_attrs;
 
-		attr.wq.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
+		wq_attr.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
 		*mprq_attr = (struct mlx5dv_striding_rq_init_attr){
 			.single_stride_log_num_of_bytes = rxq_data->strd_sz_n,
 			.single_wqe_log_num_of_strides = rxq_data->strd_num_n,
 			.two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT,
 		};
 	}
-	tmpl->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &attr.wq.ibv,
-					   &attr.wq.mlx5);
+	tmpl->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &wq_attr.ibv,
+					   &wq_attr.mlx5);
 #else
-	tmpl->wq = mlx5_glue->create_wq(priv->sh->ctx, &attr.wq.ibv);
+	tmpl->wq = mlx5_glue->create_wq(priv->sh->ctx, &wq_attr.ibv);
 #endif
 	if (tmpl->wq == NULL) {
 		DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure",
@@ -1007,14 +1032,14 @@ struct mlx5_rxq_obj *
 	 * Make sure number of WRs*SGEs match expectations since a queue
 	 * cannot allocate more than "desc" buffers.
 	 */
-	if (attr.wq.ibv.max_wr != (wqe_n >> rxq_data->sges_n) ||
-	    attr.wq.ibv.max_sge != (1u << rxq_data->sges_n)) {
+	if (wq_attr.ibv.max_wr != (wqe_n >> rxq_data->sges_n) ||
+	    wq_attr.ibv.max_sge != (1u << rxq_data->sges_n)) {
 		DRV_LOG(ERR,
 			"port %u Rx queue %u requested %u*%u but got %u*%u"
 			" WRs*SGEs",
 			dev->data->port_id, idx,
 			wqe_n >> rxq_data->sges_n, (1 << rxq_data->sges_n),
-			attr.wq.ibv.max_wr, attr.wq.ibv.max_sge);
+			wq_attr.ibv.max_wr, wq_attr.ibv.max_sge);
 		rte_errno = EINVAL;
 		goto error;
 	}
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 20/28] net/mlx5: function to create Rx verbs work queue
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (18 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 19/28] net/mlx5: func to create Rx verbs completion queue Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 21/28] net/mlx5: create advanced RxQ using new API Matan Azrad
                     ` (8 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

The Verbs WQ for the RxQ is created inside function mlx5_rxq_obj_new().
This patch moves the Verbs WQ creation to a dedicated function,
mlx5_ibv_wq_new().
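
A worked example of the WR/SGE split (numbers are illustrative): with
wqe_n = 1024 and sges_n = 2 the WQ is requested with
max_wr = 1024 >> 2 = 256 and max_sge = 1 << 2 = 4, i.e. 256 WRs of 4
scatter/gather entries each, covering exactly the 1024 descriptors.
The post-create check in mlx5_ibv_wq_new() then verifies the provider
honoured these values, since a queue cannot allocate more than "desc"
buffers.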

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 178 +++++++++++++++++++++++++-------------------
 1 file changed, 103 insertions(+), 75 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e5015bb..9d859df 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -900,6 +900,106 @@
 }
 
 /**
+ * Create a WQ Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param priv
+ *   Pointer to device private data.
+ * @param rxq_data
+ *   Pointer to Rx queue data.
+ * @param idx
+ *   Queue index in DPDK Rx queue array
+ * @param wqe_n
+ *   Number of WQEs in WQ.
+ * @param rxq_obj
+ *   Pointer to Rx queue object data.
+ *
+ * @return
+ *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ */
+static struct ibv_wq *
+mlx5_ibv_wq_new(struct rte_eth_dev *dev, struct mlx5_priv *priv,
+		struct mlx5_rxq_data *rxq_data, uint16_t idx,
+		unsigned int wqe_n, struct mlx5_rxq_obj *rxq_obj)
+{
+	struct {
+		struct ibv_wq_init_attr ibv;
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+		struct mlx5dv_wq_init_attr mlx5;
+#endif
+	} wq_attr;
+
+	wq_attr.ibv = (struct ibv_wq_init_attr){
+		.wq_context = NULL, /* Could be useful in the future. */
+		.wq_type = IBV_WQT_RQ,
+		/* Max number of outstanding WRs. */
+		.max_wr = wqe_n >> rxq_data->sges_n,
+		/* Max number of scatter/gather elements in a WR. */
+		.max_sge = 1 << rxq_data->sges_n,
+		.pd = priv->sh->pd,
+		.cq = rxq_obj->cq,
+		.comp_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING | 0,
+		.create_flags = (rxq_data->vlan_strip ?
+				 IBV_WQ_FLAGS_CVLAN_STRIPPING : 0),
+	};
+	/* By default, FCS (CRC) is stripped by hardware. */
+	if (rxq_data->crc_present) {
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_SCATTER_FCS;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+	}
+	if (priv->config.hw_padding) {
+#if defined(HAVE_IBV_WQ_FLAG_RX_END_PADDING)
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAG_RX_END_PADDING;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+#elif defined(HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING)
+		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
+		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
+#endif
+	}
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+	wq_attr.mlx5 = (struct mlx5dv_wq_init_attr){
+		.comp_mask = 0,
+	};
+	if (mlx5_rxq_mprq_enabled(rxq_data)) {
+		struct mlx5dv_striding_rq_init_attr *mprq_attr =
+						&wq_attr.mlx5.striding_rq_attrs;
+
+		wq_attr.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
+		*mprq_attr = (struct mlx5dv_striding_rq_init_attr){
+			.single_stride_log_num_of_bytes = rxq_data->strd_sz_n,
+			.single_wqe_log_num_of_strides = rxq_data->strd_num_n,
+			.two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT,
+		};
+	}
+	rxq_obj->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &wq_attr.ibv,
+					      &wq_attr.mlx5);
+#else
+	rxq_obj->wq = mlx5_glue->create_wq(priv->sh->ctx, &wq_attr.ibv);
+#endif
+	if (rxq_obj->wq) {
+		/*
+		 * Make sure number of WRs*SGEs match expectations since a queue
+		 * cannot allocate more than "desc" buffers.
+		 */
+		if (wq_attr.ibv.max_wr != (wqe_n >> rxq_data->sges_n) ||
+		    wq_attr.ibv.max_sge != (1u << rxq_data->sges_n)) {
+			DRV_LOG(ERR,
+				"port %u Rx queue %u requested %u*%u but got"
+				" %u*%u WRs*SGEs",
+				dev->data->port_id, idx,
+				wqe_n >> rxq_data->sges_n,
+				(1 << rxq_data->sges_n),
+				wq_attr.ibv.max_wr, wq_attr.ibv.max_sge);
+			claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+			rxq_obj->wq = NULL;
+			rte_errno = EINVAL;
+		}
+	}
+	return rxq_obj->wq;
+}
+
+/**
  * Create the Rx queue Verbs/DevX object.
  *
  * @param dev
@@ -918,12 +1018,6 @@ struct mlx5_rxq_obj *
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
 	struct ibv_wq_attr mod;
-	struct {
-		struct ibv_wq_init_attr ibv;
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-		struct mlx5dv_wq_init_attr mlx5;
-#endif
-	} wq_attr;
 	unsigned int cqe_n;
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
 	struct mlx5_rxq_obj *tmpl = NULL;
@@ -931,8 +1025,6 @@ struct mlx5_rxq_obj *
 	struct mlx5dv_rwq rwq;
 	int ret = 0;
 	struct mlx5dv_obj obj;
-	struct mlx5_dev_config *config = &priv->config;
-	const int mprq_en = mlx5_rxq_mprq_enabled(rxq_data);
 
 	assert(rxq_data);
 	assert(!rxq_ctrl->obj);
@@ -957,7 +1049,7 @@ struct mlx5_rxq_obj *
 			goto error;
 		}
 	}
-	if (mprq_en)
+	if (mlx5_rxq_mprq_enabled(rxq_data))
 		cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
 	else
 		cqe_n = wqe_n  - 1;
@@ -972,77 +1064,13 @@ struct mlx5_rxq_obj *
 		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
 	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
 		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
-	wq_attr.ibv = (struct ibv_wq_init_attr){
-		.wq_context = NULL, /* Could be useful in the future. */
-		.wq_type = IBV_WQT_RQ,
-		/* Max number of outstanding WRs. */
-		.max_wr = wqe_n >> rxq_data->sges_n,
-		/* Max number of scatter/gather elements in a WR. */
-		.max_sge = 1 << rxq_data->sges_n,
-		.pd = priv->sh->pd,
-		.cq = tmpl->cq,
-		.comp_mask =
-			IBV_WQ_FLAGS_CVLAN_STRIPPING |
-			0,
-		.create_flags = (rxq_data->vlan_strip ?
-				 IBV_WQ_FLAGS_CVLAN_STRIPPING :
-				 0),
-	};
-	/* By default, FCS (CRC) is stripped by hardware. */
-	if (rxq_data->crc_present) {
-		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_SCATTER_FCS;
-		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
-	}
-	if (config->hw_padding) {
-#if defined(HAVE_IBV_WQ_FLAG_RX_END_PADDING)
-		wq_attr.ibv.create_flags |= IBV_WQ_FLAG_RX_END_PADDING;
-		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
-#elif defined(HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING)
-		wq_attr.ibv.create_flags |= IBV_WQ_FLAGS_PCI_WRITE_END_PADDING;
-		wq_attr.ibv.comp_mask |= IBV_WQ_INIT_ATTR_FLAGS;
-#endif
-	}
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-	wq_attr.mlx5 = (struct mlx5dv_wq_init_attr){
-		.comp_mask = 0,
-	};
-	if (mprq_en) {
-		struct mlx5dv_striding_rq_init_attr *mprq_attr =
-			&wq_attr.mlx5.striding_rq_attrs;
-
-		wq_attr.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
-		*mprq_attr = (struct mlx5dv_striding_rq_init_attr){
-			.single_stride_log_num_of_bytes = rxq_data->strd_sz_n,
-			.single_wqe_log_num_of_strides = rxq_data->strd_num_n,
-			.two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT,
-		};
-	}
-	tmpl->wq = mlx5_glue->dv_create_wq(priv->sh->ctx, &wq_attr.ibv,
-					   &wq_attr.mlx5);
-#else
-	tmpl->wq = mlx5_glue->create_wq(priv->sh->ctx, &wq_attr.ibv);
-#endif
-	if (tmpl->wq == NULL) {
+	tmpl->wq = mlx5_ibv_wq_new(dev, priv, rxq_data, idx, wqe_n, tmpl);
+	if (!tmpl->wq) {
 		DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure",
 			dev->data->port_id, idx);
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	/*
-	 * Make sure number of WRs*SGEs match expectations since a queue
-	 * cannot allocate more than "desc" buffers.
-	 */
-	if (wq_attr.ibv.max_wr != (wqe_n >> rxq_data->sges_n) ||
-	    wq_attr.ibv.max_sge != (1u << rxq_data->sges_n)) {
-		DRV_LOG(ERR,
-			"port %u Rx queue %u requested %u*%u but got %u*%u"
-			" WRs*SGEs",
-			dev->data->port_id, idx,
-			wqe_n >> rxq_data->sges_n, (1 << rxq_data->sges_n),
-			wq_attr.ibv.max_wr, wq_attr.ibv.max_sge);
-		rte_errno = EINVAL;
-		goto error;
-	}
 	/* Change queue state to ready. */
 	mod = (struct ibv_wq_attr){
 		.attr_mask = IBV_WQ_ATTR_STATE,
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 21/28] net/mlx5: create advanced RxQ using new API
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (19 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 20/28] net/mlx5: function to create Rx verbs work queue Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 22/28] net/mlx5: support LRO with single RxQ object Matan Azrad
                     ` (7 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Function mlx5_rxq_obj_new(), previously called mlx5_rxq_ibv_new(),
supports creating Rx queue objects using Verbs.
This patch expands the relevant functions to support creating either
Verbs or DevX Rx queue objects:
Function mlx5_rxq_obj_new() is updated to create an RQ object using
DevX.
Function mlx5_ind_table_obj_new() is updated to create an RQT object
using DevX.
Function mlx5_hrxq_new() is updated to create a TIR object using DevX.
New utility functions are added to perform specific operations:
mlx5_devx_rq_new(), mlx5_devx_wq_attr_fill(),
mlx5_devx_create_rq_attr_fill().
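
Illustration (not part of this patch): inside mlx5_devx_rq_new() the
DevX RQ creation reduces to filling the two attribute structures and
issuing the create command. A minimal sketch, assuming the
mlx5_devx_cmd_create_rq() wrapper added earlier in this series:

	struct mlx5_devx_create_rq_attr rq_attr;

	memset(&rq_attr, 0, sizeof(rq_attr));
	/* CQ number, VSD and FCS settings common to both WQ types. */
	mlx5_devx_create_rq_attr_fill(rxq_data, cqn, &rq_attr);
	/* PD number, door-bell record and WQ buffer umem. */
	mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
	rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &rq_attr,
				     rxq_ctrl->socket);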

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c     | 547 +++++++++++++++++++++++++++++++---------
 drivers/net/mlx5/mlx5_rxtx.h    |   9 +-
 drivers/net/mlx5/mlx5_trigger.c |   3 +-
 drivers/net/mlx5/mlx5_vlan.c    |  30 ++-
 4 files changed, 452 insertions(+), 137 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 9d859df..1e09078 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -561,6 +561,23 @@
 }
 
 /**
+ * Release the resources allocated for an RQ DevX object.
+ *
+ * @param rxq_ctrl
+ *   DevX Rx queue object.
+ */
+static void
+rxq_release_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
+{
+	if (rxq_ctrl->rxq.wqes) {
+		rte_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
+		rxq_ctrl->rxq.wqes = NULL;
+	}
+	if (rxq_ctrl->wq_umem)
+		mlx5_glue->devx_umem_dereg(rxq_ctrl->wq_umem);
+}
+
+/**
  * Release an Rx verbs/DevX queue object.
  *
  * @param rxq_obj
@@ -573,11 +590,17 @@
 mlx5_rxq_obj_release(struct mlx5_rxq_obj *rxq_obj)
 {
 	assert(rxq_obj);
-	assert(rxq_obj->wq);
+	if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_IBV)
+		assert(rxq_obj->wq);
 	assert(rxq_obj->cq);
 	if (rte_atomic32_dec_and_test(&rxq_obj->refcnt)) {
 		rxq_free_elts(rxq_obj->rxq_ctrl);
-		claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+		if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
+			claim_zero(mlx5_glue->destroy_wq(rxq_obj->wq));
+		} else if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
+			claim_zero(mlx5_devx_cmd_destroy(rxq_obj->rq));
+			rxq_release_rq_resources(rxq_obj->rxq_ctrl);
+		}
 		claim_zero(mlx5_glue->destroy_cq(rxq_obj->cq));
 		if (rxq_obj->channel)
 			claim_zero(mlx5_glue->destroy_comp_channel
@@ -1000,18 +1023,147 @@
 }
 
 /**
+ * Fill common fields of create RQ attributes structure.
+ *
+ * @param rxq_data
+ *   Pointer to Rx queue data.
+ * @param cqn
+ *   CQ number to use with this RQ.
+ * @param rq_attr
+ *   RQ attributes structure to fill.
+ */
+static void
+mlx5_devx_create_rq_attr_fill(struct mlx5_rxq_data *rxq_data, uint32_t cqn,
+			      struct mlx5_devx_create_rq_attr *rq_attr)
+{
+	rq_attr->state = MLX5_RQC_STATE_RST;
+	rq_attr->vsd = (rxq_data->vlan_strip) ? 0 : 1;
+	rq_attr->cqn = cqn;
+	rq_attr->scatter_fcs = (rxq_data->crc_present) ? 1 : 0;
+}
+
+/**
+ * Fill common fields of DevX WQ attributes structure.
+ *
+ * @param priv
+ *   Pointer to device private data.
+ * @param rxq_ctrl
+ *   Pointer to Rx queue control structure.
+ * @param wq_attr
+ *   WQ attributes structure to fill.
+ */
+static void
+mlx5_devx_wq_attr_fill(struct mlx5_priv *priv, struct mlx5_rxq_ctrl *rxq_ctrl,
+		       struct mlx5_devx_wq_attr *wq_attr)
+{
+	wq_attr->end_padding_mode = priv->config.cqe_pad ?
+					MLX5_WQ_END_PAD_MODE_ALIGN :
+					MLX5_WQ_END_PAD_MODE_NONE;
+	wq_attr->pd = priv->sh->pdn;
+	wq_attr->dbr_addr = rxq_ctrl->dbr_offset;
+	wq_attr->dbr_umem_id = rxq_ctrl->dbr_umem_id;
+	wq_attr->dbr_umem_valid = 1;
+	wq_attr->wq_umem_id = rxq_ctrl->wq_umem->umem_id;
+	wq_attr->wq_umem_valid = 1;
+}
+
+/**
+ * Create a RQ object using DevX.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   Queue index in DPDK Rx queue array.
+ * @param cqn
+ *   CQ number to use with this RQ.
+ *
+ * @return
+ *   The DevX object initialised, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_devx_obj *
+mlx5_devx_rq_new(struct rte_eth_dev *dev, uint16_t idx, uint32_t cqn)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
+	struct mlx5_rxq_ctrl *rxq_ctrl =
+		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_devx_create_rq_attr rq_attr;
+	uint32_t wqe_n = 1 << rxq_data->elts_n;
+	uint32_t wq_size = 0;
+	uint32_t wqe_size = 0;
+	uint32_t log_wqe_size = 0;
+	void *buf = NULL;
+	struct mlx5_devx_obj *rq;
+
+	memset(&rq_attr, 0, sizeof(rq_attr));
+	/* Fill RQ attributes. */
+	rq_attr.mem_rq_type = MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE;
+	rq_attr.flush_in_error_en = 1;
+	mlx5_devx_create_rq_attr_fill(rxq_data, cqn, &rq_attr);
+	/* Fill WQ attributes for this RQ. */
+	if (mlx5_rxq_mprq_enabled(rxq_data)) {
+		rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC_STRIDING_RQ;
+		/*
+		 * Number of strides in each WQE:
+		 * 512*2^single_wqe_log_num_of_strides.
+		 */
+		rq_attr.wq_attr.single_wqe_log_num_of_strides =
+				rxq_data->strd_num_n -
+				MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES;
+		/* Stride size = (2^single_stride_log_num_of_bytes)*64B. */
+		rq_attr.wq_attr.single_stride_log_num_of_bytes =
+				rxq_data->strd_sz_n -
+				MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES;
+		wqe_size = sizeof(struct mlx5_wqe_mprq);
+	} else {
+		int max_sge = 0;
+		int num_scatter = 0;
+
+		rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
+		max_sge = 1 << rxq_data->sges_n;
+		num_scatter = RTE_MAX(max_sge, 1);
+		wqe_size = sizeof(struct mlx5_wqe_data_seg) * num_scatter;
+	}
+	log_wqe_size = log2above(wqe_size);
+	rq_attr.wq_attr.log_wq_stride = log_wqe_size;
+	rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n;
+	/* Calculate and allocate WQ memory space. */
+	wqe_size = 1 << log_wqe_size; /* Round up to a power of two. */
+	wq_size = wqe_n * wqe_size;
+	buf = rte_calloc_socket(__func__, 1, wq_size, RTE_CACHE_LINE_SIZE,
+				rxq_ctrl->socket);
+	if (!buf)
+		return NULL;
+	rxq_data->wqes = buf;
+	rxq_ctrl->wq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
+						     buf, wq_size, 0);
+	if (!rxq_ctrl->wq_umem) {
+		rte_free(buf);
+		return NULL;
+	}
+	mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
+	rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &rq_attr, rxq_ctrl->socket);
+	if (!rq)
+		rxq_release_rq_resources(rxq_ctrl);
+	return rq;
+}
+
+/**
  * Create the Rx queue Verbs/DevX object.
  *
  * @param dev
  *   Pointer to Ethernet device.
  * @param idx
  *   Queue index in DPDK Rx queue array
+ * @param type
+ *   Type of Rx queue object to create.
  *
  * @return
  *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
  */
 struct mlx5_rxq_obj *
-mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx)
+mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
+		 enum mlx5_rxq_obj_type type)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
@@ -1039,6 +1191,7 @@ struct mlx5_rxq_obj *
 		rte_errno = ENOMEM;
 		goto error;
 	}
+	tmpl->type = type;
 	tmpl->rxq_ctrl = rxq_ctrl;
 	if (rxq_ctrl->irq) {
 		tmpl->channel = mlx5_glue->create_comp_channel(priv->sh->ctx);
@@ -1060,35 +1213,9 @@ struct mlx5_rxq_obj *
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
-	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
-	tmpl->wq = mlx5_ibv_wq_new(dev, priv, rxq_data, idx, wqe_n, tmpl);
-	if (!tmpl->wq) {
-		DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure",
-			dev->data->port_id, idx);
-		rte_errno = ENOMEM;
-		goto error;
-	}
-	/* Change queue state to ready. */
-	mod = (struct ibv_wq_attr){
-		.attr_mask = IBV_WQ_ATTR_STATE,
-		.wq_state = IBV_WQS_RDY,
-	};
-	ret = mlx5_glue->modify_wq(tmpl->wq, &mod);
-	if (ret) {
-		DRV_LOG(ERR,
-			"port %u Rx queue %u WQ state to IBV_WQS_RDY failed",
-			dev->data->port_id, idx);
-		rte_errno = ret;
-		goto error;
-	}
 	obj.cq.in = tmpl->cq;
 	obj.cq.out = &cq_info;
-	obj.rwq.in = tmpl->wq;
-	obj.rwq.out = &rwq;
-	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ | MLX5DV_OBJ_RWQ);
+	ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ);
 	if (ret) {
 		rte_errno = ret;
 		goto error;
@@ -1101,9 +1228,73 @@ struct mlx5_rxq_obj *
 		rte_errno = EINVAL;
 		goto error;
 	}
+	DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
+		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
+	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
+		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
+	/* Allocate door-bell for types created with DevX. */
+	if (tmpl->type != MLX5_RXQ_OBJ_TYPE_IBV) {
+		struct mlx5_devx_dbr_page *dbr_page;
+		int64_t dbr_offset;
+
+		dbr_offset = mlx5_get_dbr(dev, &dbr_page);
+		if (dbr_offset < 0)
+			goto error;
+		rxq_ctrl->dbr_offset = dbr_offset;
+		rxq_ctrl->dbr_umem_id = dbr_page->umem->umem_id;
+		rxq_data->rq_db = (uint32_t *)((uintptr_t)dbr_page->dbrs +
+					       (uintptr_t)rxq_ctrl->dbr_offset);
+	}
+	if (tmpl->type == MLX5_RXQ_OBJ_TYPE_IBV) {
+		tmpl->wq = mlx5_ibv_wq_new(dev, priv, rxq_data, idx, wqe_n,
+					   tmpl);
+		if (!tmpl->wq) {
+			DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure",
+				dev->data->port_id, idx);
+			rte_errno = ENOMEM;
+			goto error;
+		}
+		/* Change queue state to ready. */
+		mod = (struct ibv_wq_attr){
+			.attr_mask = IBV_WQ_ATTR_STATE,
+			.wq_state = IBV_WQS_RDY,
+		};
+		ret = mlx5_glue->modify_wq(tmpl->wq, &mod);
+		if (ret) {
+			DRV_LOG(ERR,
+				"port %u Rx queue %u WQ state to IBV_WQS_RDY"
+				" failed", dev->data->port_id, idx);
+			rte_errno = ret;
+			goto error;
+		}
+		obj.rwq.in = tmpl->wq;
+		obj.rwq.out = &rwq;
+		ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_RWQ);
+		if (ret) {
+			rte_errno = ret;
+			goto error;
+		}
+		rxq_data->wqes = rwq.buf;
+		rxq_data->rq_db = rwq.dbrec;
+	} else if (tmpl->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
+		struct mlx5_devx_modify_rq_attr rq_attr;
+
+		memset(&rq_attr, 0, sizeof(rq_attr));
+		tmpl->rq = mlx5_devx_rq_new(dev, idx, cq_info.cqn);
+		if (!tmpl->rq) {
+			DRV_LOG(ERR, "port %u Rx queue %u RQ creation failure",
+				dev->data->port_id, idx);
+			rte_errno = ENOMEM;
+			goto error;
+		}
+		/* Change queue state to ready. */
+		rq_attr.rq_state = MLX5_RQC_STATE_RST;
+		rq_attr.state = MLX5_RQC_STATE_RDY;
+		ret = mlx5_devx_cmd_modify_rq(tmpl->rq, &rq_attr);
+		if (ret)
+			goto error;
+	}
 	/* Fill the rings. */
-	rxq_data->wqes = rwq.buf;
-	rxq_data->rq_db = rwq.dbrec;
 	rxq_data->cqe_n = log2above(cq_info.cqe_cnt);
 	rxq_data->cq_db = cq_info.dbrec;
 	rxq_data->cqes = (volatile struct mlx5_cqe (*)[])(uintptr_t)cq_info.buf;
@@ -1121,8 +1312,10 @@ struct mlx5_rxq_obj *
 error:
 	if (tmpl) {
 		ret = rte_errno; /* Save rte_errno before cleanup. */
-		if (tmpl->wq)
+		if (tmpl->type == MLX5_RXQ_OBJ_TYPE_IBV && tmpl->wq)
 			claim_zero(mlx5_glue->destroy_wq(tmpl->wq));
+		else if (tmpl->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ && tmpl->rq)
+			claim_zero(mlx5_devx_cmd_destroy(tmpl->rq));
 		if (tmpl->cq)
 			claim_zero(mlx5_glue->destroy_cq(tmpl->cq));
 		if (tmpl->channel)
@@ -1131,6 +1324,8 @@ struct mlx5_rxq_obj *
 		rte_free(tmpl);
 		rte_errno = ret; /* Restore rte_errno. */
 	}
+	if (type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ)
+		rxq_release_rq_resources(rxq_ctrl);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
 	return NULL;
 }
@@ -1585,6 +1780,8 @@ struct mlx5_rxq_ctrl *
 	if (rxq_ctrl->obj && !mlx5_rxq_obj_release(rxq_ctrl->obj))
 		rxq_ctrl->obj = NULL;
 	if (rte_atomic32_dec_and_test(&rxq_ctrl->refcnt)) {
+		claim_zero(mlx5_release_dbr(dev, rxq_ctrl->dbr_umem_id,
+					    rxq_ctrl->dbr_offset));
 		mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 		LIST_REMOVE(rxq_ctrl, next);
 		rte_free(rxq_ctrl);
@@ -1633,16 +1830,11 @@ struct mlx5_rxq_ctrl *
  */
 static struct mlx5_ind_table_obj *
 mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
-		       uint32_t queues_n)
+		       uint32_t queues_n, enum mlx5_ind_tbl_type type)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_ind_table_obj *ind_tbl;
-	const unsigned int wq_n = rte_is_power_of_2(queues_n) ?
-		log2above(queues_n) :
-		log2above(priv->config.ind_table_max_size);
-	struct ibv_wq *wq[1 << wq_n];
-	unsigned int i;
-	unsigned int j;
+	unsigned int i = 0, j = 0, k = 0;
 
 	ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl) +
 			     queues_n * sizeof(uint16_t), 0);
@@ -1650,33 +1842,75 @@ struct mlx5_rxq_ctrl *
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev, queues[i]);
+	ind_tbl->type = type;
+	if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV) {
+		const unsigned int wq_n = rte_is_power_of_2(queues_n) ?
+			log2above(queues_n) :
+			log2above(priv->config.ind_table_max_size);
+		struct ibv_wq *wq[1 << wq_n];
+
+		for (i = 0; i != queues_n; ++i) {
+			struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev,
+								 queues[i]);
+			if (!rxq)
+				goto error;
+			wq[i] = rxq->obj->wq;
+			ind_tbl->queues[i] = queues[i];
+		}
+		ind_tbl->queues_n = queues_n;
+		/* Finalise indirection table. */
+		k = i; /* Retain value of i for use in error case. */
+		for (j = 0; k != (unsigned int)(1 << wq_n); ++k, ++j)
+			wq[k] = wq[j];
+		ind_tbl->ind_table = mlx5_glue->create_rwq_ind_table
+			(priv->sh->ctx,
+			 &(struct ibv_rwq_ind_table_init_attr){
+				.log_ind_tbl_size = wq_n,
+				.ind_tbl = wq,
+				.comp_mask = 0,
+			});
+		if (!ind_tbl->ind_table) {
+			rte_errno = errno;
+			goto error;
+		}
+	} else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
+		struct mlx5_devx_rqt_attr *rqt_attr = NULL;
 
-		if (!rxq)
+		rqt_attr = rte_calloc(__func__, 1, sizeof(*rqt_attr) +
+				      queues_n * sizeof(uint16_t), 0);
+		if (!rqt_attr) {
+			DRV_LOG(ERR, "port %u cannot allocate RQT resources",
+				dev->data->port_id);
+			rte_errno = ENOMEM;
 			goto error;
-		wq[i] = rxq->obj->wq;
-		ind_tbl->queues[i] = queues[i];
-	}
-	ind_tbl->queues_n = queues_n;
-	/* Finalise indirection table. */
-	for (j = 0; i != (unsigned int)(1 << wq_n); ++i, ++j)
-		wq[i] = wq[j];
-	ind_tbl->ind_table = mlx5_glue->create_rwq_ind_table
-		(priv->sh->ctx,
-		 &(struct ibv_rwq_ind_table_init_attr){
-			.log_ind_tbl_size = wq_n,
-			.ind_tbl = wq,
-			.comp_mask = 0,
-		 });
-	if (!ind_tbl->ind_table) {
-		rte_errno = errno;
-		goto error;
+		}
+		rqt_attr->rqt_max_size = priv->config.ind_table_max_size;
+		rqt_attr->rqt_actual_size = queues_n;
+		for (i = 0; i != queues_n; ++i) {
+			struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev,
+								 queues[i]);
+			if (!rxq)
+				goto error;
+			rqt_attr->rq_list[i] = rxq->obj->rq->id;
+			ind_tbl->queues[i] = queues[i];
+		}
+		ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->ctx,
+							rqt_attr);
+		rte_free(rqt_attr);
+		if (!ind_tbl->rqt) {
+			DRV_LOG(ERR, "port %u cannot create DevX RQT",
+				dev->data->port_id);
+			rte_errno = errno;
+			goto error;
+		}
+		ind_tbl->queues_n = queues_n;
 	}
 	rte_atomic32_inc(&ind_tbl->refcnt);
 	LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next);
 	return ind_tbl;
 error:
+	for (j = 0; j < i; j++)
+		mlx5_rxq_release(dev, ind_tbl->queues[j]);
 	rte_free(ind_tbl);
 	DEBUG("port %u cannot create indirection table", dev->data->port_id);
 	return NULL;
@@ -1736,9 +1970,13 @@ struct mlx5_rxq_ctrl *
 {
 	unsigned int i;
 
-	if (rte_atomic32_dec_and_test(&ind_tbl->refcnt))
-		claim_zero(mlx5_glue->destroy_rwq_ind_table
-			   (ind_tbl->ind_table));
+	if (rte_atomic32_dec_and_test(&ind_tbl->refcnt)) {
+		if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV)
+			claim_zero(mlx5_glue->destroy_rwq_ind_table
+							(ind_tbl->ind_table));
+		else if (ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX)
+			claim_zero(mlx5_devx_cmd_destroy(ind_tbl->rqt));
+	}
 	for (i = 0; i != ind_tbl->queues_n; ++i)
 		claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
 	if (!rte_atomic32_read(&ind_tbl->refcnt)) {
@@ -1805,93 +2043,145 @@ struct mlx5_hrxq *
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_hrxq *hrxq;
+	struct ibv_qp *qp = NULL;
 	struct mlx5_ind_table_obj *ind_tbl;
-	struct ibv_qp *qp;
-#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-	struct mlx5dv_qp_init_attr qp_init_attr;
-#endif
 	int err;
+	struct mlx5_devx_obj *tir = NULL;
 
 	queues_n = hash_fields ? queues_n : 1;
 	ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
-	if (!ind_tbl)
-		ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
+	if (!ind_tbl) {
+		struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[queues[0]];
+		struct mlx5_rxq_ctrl *rxq_ctrl =
+			container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+		enum mlx5_ind_tbl_type type;
+
+		type = rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV ?
+				MLX5_IND_TBL_TYPE_IBV : MLX5_IND_TBL_TYPE_DEVX;
+		ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n, type);
+	}
 	if (!ind_tbl) {
 		rte_errno = ENOMEM;
 		return NULL;
 	}
+	if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV) {
 #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-	memset(&qp_init_attr, 0, sizeof(qp_init_attr));
-	if (tunnel) {
-		qp_init_attr.comp_mask =
+		struct mlx5dv_qp_init_attr qp_init_attr;
+
+		memset(&qp_init_attr, 0, sizeof(qp_init_attr));
+		if (tunnel) {
+			qp_init_attr.comp_mask =
 				MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
-		qp_init_attr.create_flags = MLX5DV_QP_CREATE_TUNNEL_OFFLOADS;
-	}
+			qp_init_attr.create_flags =
+				MLX5DV_QP_CREATE_TUNNEL_OFFLOADS;
+		}
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	if (dev->data->dev_conf.lpbk_mode) {
-		/* Allow packet sent from NIC loop back w/o source MAC check. */
-		qp_init_attr.comp_mask |=
+		if (dev->data->dev_conf.lpbk_mode) {
+			/*
+			 * Allow packet sent from NIC loop back
+			 * w/o source MAC check.
+			 */
+			qp_init_attr.comp_mask |=
 				MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
-		qp_init_attr.create_flags |=
+			qp_init_attr.create_flags |=
 				MLX5DV_QP_CREATE_TIR_ALLOW_SELF_LOOPBACK_UC;
-	}
+		}
 #endif
-	qp = mlx5_glue->dv_create_qp
-		(priv->sh->ctx,
-		 &(struct ibv_qp_init_attr_ex){
-			.qp_type = IBV_QPT_RAW_PACKET,
-			.comp_mask =
-				IBV_QP_INIT_ATTR_PD |
-				IBV_QP_INIT_ATTR_IND_TABLE |
-				IBV_QP_INIT_ATTR_RX_HASH,
-			.rx_hash_conf = (struct ibv_rx_hash_conf){
-				.rx_hash_function = IBV_RX_HASH_FUNC_TOEPLITZ,
-				.rx_hash_key_len = rss_key_len,
-				.rx_hash_key = (void *)(uintptr_t)rss_key,
-				.rx_hash_fields_mask = hash_fields,
-			},
-			.rwq_ind_tbl = ind_tbl->ind_table,
-			.pd = priv->sh->pd,
-		 },
-		 &qp_init_attr);
+		qp = mlx5_glue->dv_create_qp
+			(priv->sh->ctx,
+			 &(struct ibv_qp_init_attr_ex){
+				.qp_type = IBV_QPT_RAW_PACKET,
+				.comp_mask =
+					IBV_QP_INIT_ATTR_PD |
+					IBV_QP_INIT_ATTR_IND_TABLE |
+					IBV_QP_INIT_ATTR_RX_HASH,
+				.rx_hash_conf = (struct ibv_rx_hash_conf){
+					.rx_hash_function =
+						IBV_RX_HASH_FUNC_TOEPLITZ,
+					.rx_hash_key_len = rss_key_len,
+					.rx_hash_key =
+						(void *)(uintptr_t)rss_key,
+					.rx_hash_fields_mask = hash_fields,
+				},
+				.rwq_ind_tbl = ind_tbl->ind_table,
+				.pd = priv->sh->pd,
+			  },
+			  &qp_init_attr);
 #else
-	qp = mlx5_glue->create_qp_ex
-		(priv->sh->ctx,
-		 &(struct ibv_qp_init_attr_ex){
-			.qp_type = IBV_QPT_RAW_PACKET,
-			.comp_mask =
-				IBV_QP_INIT_ATTR_PD |
-				IBV_QP_INIT_ATTR_IND_TABLE |
-				IBV_QP_INIT_ATTR_RX_HASH,
-			.rx_hash_conf = (struct ibv_rx_hash_conf){
-				.rx_hash_function = IBV_RX_HASH_FUNC_TOEPLITZ,
-				.rx_hash_key_len = rss_key_len,
-				.rx_hash_key = (void *)(uintptr_t)rss_key,
-				.rx_hash_fields_mask = hash_fields,
-			},
-			.rwq_ind_tbl = ind_tbl->ind_table,
-			.pd = priv->sh->pd,
-		 });
+		qp = mlx5_glue->create_qp_ex
+			(priv->sh->ctx,
+			 &(struct ibv_qp_init_attr_ex){
+				.qp_type = IBV_QPT_RAW_PACKET,
+				.comp_mask =
+					IBV_QP_INIT_ATTR_PD |
+					IBV_QP_INIT_ATTR_IND_TABLE |
+					IBV_QP_INIT_ATTR_RX_HASH,
+				.rx_hash_conf = (struct ibv_rx_hash_conf){
+					.rx_hash_function =
+						IBV_RX_HASH_FUNC_TOEPLITZ,
+					.rx_hash_key_len = rss_key_len,
+					.rx_hash_key =
+						(void *)(uintptr_t)rss_key,
+					.rx_hash_fields_mask = hash_fields,
+				},
+				.rwq_ind_tbl = ind_tbl->ind_table,
+				.pd = priv->sh->pd,
+			 });
 #endif
-	if (!qp) {
-		rte_errno = errno;
-		goto error;
+		if (!qp) {
+			rte_errno = errno;
+			goto error;
+		}
+	} else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
+		struct mlx5_devx_tir_attr tir_attr;
+
+		memset(&tir_attr, 0, sizeof(tir_attr));
+		tir_attr.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
+		tir_attr.rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
+		memcpy(&tir_attr.rx_hash_field_selector_outer, &hash_fields,
+		       sizeof(uint64_t));
+		tir_attr.transport_domain = priv->sh->tdn;
+		memcpy(tir_attr.rx_hash_toeplitz_key, rss_key, rss_key_len);
+		tir_attr.indirect_table = ind_tbl->rqt->id;
+		if (dev->data->dev_conf.lpbk_mode)
+			tir_attr.self_lb_block =
+					MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+		tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
+		if (!tir) {
+			DRV_LOG(ERR, "port %u cannot create DevX TIR",
+				dev->data->port_id);
+			rte_errno = errno;
+			goto error;
+		}
 	}
 	hrxq = rte_calloc(__func__, 1, sizeof(*hrxq) + rss_key_len, 0);
 	if (!hrxq)
 		goto error;
 	hrxq->ind_table = ind_tbl;
-	hrxq->qp = qp;
+	if (ind_tbl->type == MLX5_IND_TBL_TYPE_IBV) {
+		hrxq->qp = qp;
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+		hrxq->action =
+			mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp);
+		if (!hrxq->action) {
+			rte_errno = errno;
+			goto error;
+		}
+#endif
+	} else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
+		hrxq->tir = tir;
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+		hrxq->action = mlx5_glue->dv_create_flow_action_dest_devx_tir
+							(hrxq->tir->obj);
+		if (!hrxq->action) {
+			rte_errno = errno;
+			goto error;
+		}
+#endif
+	}
 	hrxq->rss_key_len = rss_key_len;
 	hrxq->hash_fields = hash_fields;
 	memcpy(hrxq->rss_key, rss_key, rss_key_len);
-#ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	hrxq->action = mlx5_glue->dv_create_flow_action_dest_ibv_qp(hrxq->qp);
-	if (!hrxq->action) {
-		rte_errno = errno;
-		goto error;
-	}
-#endif
 	rte_atomic32_inc(&hrxq->refcnt);
 	LIST_INSERT_HEAD(&priv->hrxqs, hrxq, next);
 	return hrxq;
@@ -1900,6 +2190,8 @@ struct mlx5_hrxq *
 	mlx5_ind_table_obj_release(dev, ind_tbl);
 	if (qp)
 		claim_zero(mlx5_glue->destroy_qp(qp));
+	else if (tir)
+		claim_zero(mlx5_devx_cmd_destroy(tir));
 	rte_errno = err; /* Restore rte_errno. */
 	return NULL;
 }
@@ -1970,7 +2262,10 @@ struct mlx5_hrxq *
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 		mlx5_glue->destroy_flow_action(hrxq->action);
 #endif
-		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
+		if (hrxq->ind_table->type == MLX5_IND_TBL_TYPE_IBV)
+			claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
+		else /* hrxq->ind_table->type == MLX5_IND_TBL_TYPE_DEVX */
+			claim_zero(mlx5_devx_cmd_destroy(hrxq->tir));
 		mlx5_ind_table_obj_release(dev, hrxq->ind_table);
 		LIST_REMOVE(hrxq, next);
 		rte_free(hrxq);
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index f4f5c0d..bd4ae80 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -80,6 +80,9 @@ struct mlx5_mprq_buf {
 /* Get pointer to the first stride. */
 #define mlx5_mprq_buf_addr(ptr) ((ptr) + 1)
 
+#define MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES 6
+#define MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES 9
+
 enum mlx5_rxq_err_state {
 	MLX5_RXQ_ERR_STATE_NO_ERROR = 0,
 	MLX5_RXQ_ERR_STATE_NEED_RESET,
@@ -174,6 +177,9 @@ struct mlx5_rxq_ctrl {
 	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
 	uint32_t wqn; /* WQ number. */
 	uint16_t dump_file_n; /* Number of dump files. */
+	uint32_t dbr_umem_id; /* Storing door-bell information, */
+	uint64_t dbr_offset;  /* needed when freeing door-bell. */
+	struct mlx5dv_devx_umem *wq_umem; /* WQ buffer registration info. */
 };
 
 enum mlx5_ind_tbl_type {
@@ -324,7 +330,8 @@ int mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
 int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
-struct mlx5_rxq_obj *mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_obj *mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
+				      enum mlx5_rxq_obj_type type);
 int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
 				   uint16_t desc, unsigned int socket,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 54353ee..acd2902 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -123,7 +123,8 @@
 		ret = rxq_alloc_elts(rxq_ctrl);
 		if (ret)
 			goto error;
-		rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i);
+		rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i,
+						 MLX5_RXQ_OBJ_TYPE_DEVX_RQ);
 		if (!rxq_ctrl->obj)
 			goto error;
 		rxq_ctrl->wqn = rxq_ctrl->obj->wq->wq_num;
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 67518c2..5f6554a 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -111,7 +111,7 @@
 	uint16_t vlan_offloads =
 		(on ? IBV_WQ_FLAGS_CVLAN_STRIPPING : 0) |
 		0;
-	int ret;
+	int ret = 0;
 
 	/* Validate hw support */
 	if (!priv->config.hw_vlan_strip) {
@@ -132,15 +132,27 @@
 		rxq->vlan_strip = !!on;
 		return;
 	}
-	mod = (struct ibv_wq_attr){
-		.attr_mask = IBV_WQ_ATTR_FLAGS,
-		.flags_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING,
-		.flags = vlan_offloads,
-	};
-	ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
+	if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
+		mod = (struct ibv_wq_attr){
+			.attr_mask = IBV_WQ_ATTR_FLAGS,
+			.flags_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING,
+			.flags = vlan_offloads,
+		};
+		ret = mlx5_glue->modify_wq(rxq_ctrl->obj->wq, &mod);
+	} else if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
+		struct mlx5_devx_modify_rq_attr rq_attr;
+
+		memset(&rq_attr, 0, sizeof(rq_attr));
+		rq_attr.rq_state = MLX5_RQC_STATE_RDY;
+		rq_attr.state = MLX5_RQC_STATE_RDY;
+		rq_attr.vsd = (on ? 0 : 1);
+		rq_attr.modify_bitmask = MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_VSD;
+		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
+	}
 	if (ret) {
-		DRV_LOG(ERR, "port %u failed to modified stripping mode: %s",
-			dev->data->port_id, strerror(rte_errno));
+		DRV_LOG(ERR, "port %u failed to modify object %d stripping "
+			"mode: %s", dev->data->port_id,
+			rxq_ctrl->obj->type, strerror(rte_errno));
 		return;
 	}
 	/* Update related bits in RX queue. */
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 22/28] net/mlx5: support LRO with single RxQ object
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (20 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 21/28] net/mlx5: create advanced RxQ using new API Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 23/28] net/mlx5: replace the external mbuf shared memory Matan Azrad
                     ` (6 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Implement LRO support using a single RQ object per DPDK RxQ.
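
A sketch of the qualification logic this patch adds on the flow path,
assuming the dev_flow->layers bit-field of the flow engine; LRO is
enabled on the TIR only when the flow matches TCP over IPv4 or IPv6:

    int lro = 0;

    if (mlx5_lro_on(dev)) {
            if ((dev_flow->layers & MLX5_FLOW_LAYER_IPV4_LRO) ==
                MLX5_FLOW_LAYER_IPV4_LRO)
                    lro = MLX5_FLOW_IPV4_LRO;
            else if ((dev_flow->layers & MLX5_FLOW_LAYER_IPV6_LRO) ==
                     MLX5_FLOW_LAYER_IPV6_LRO)
                    lro = MLX5_FLOW_IPV6_LRO;
    }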

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_flow.h       |  6 ++++++
 drivers/net/mlx5/mlx5_flow_dv.c    | 21 +++++++++++++++++++--
 drivers/net/mlx5/mlx5_flow_verbs.c |  3 ++-
 drivers/net/mlx5/mlx5_rxq.c        | 10 +++++++++-
 drivers/net/mlx5/mlx5_rxtx.h       |  2 +-
 drivers/net/mlx5/mlx5_trigger.c    | 11 ++++++++---
 6 files changed, 45 insertions(+), 8 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index f3c563e..3f96bec 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -70,6 +70,12 @@
 	(MLX5_FLOW_LAYER_OUTER_L2 | MLX5_FLOW_LAYER_OUTER_L3 | \
 	 MLX5_FLOW_LAYER_OUTER_L4)
 
+/* LRO support mask, i.e. flow contains IPv4/IPv6 and TCP. */
+#define MLX5_FLOW_LAYER_IPV4_LRO \
+	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L4_TCP)
+#define MLX5_FLOW_LAYER_IPV6_LRO \
+	(MLX5_FLOW_LAYER_OUTER_L3_IPV6 | MLX5_FLOW_LAYER_OUTER_L4_TCP)
+
 /* Tunnel Masks. */
 #define MLX5_FLOW_LAYER_TUNNEL \
 	(MLX5_FLOW_LAYER_VXLAN | MLX5_FLOW_LAYER_VXLAN_GPE | \
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 36696c8..7240d3b 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -62,6 +62,9 @@
 	uint32_t attr;
 };
 
+#define MLX5_FLOW_IPV4_LRO (1 << 0)
+#define MLX5_FLOW_IPV6_LRO (1 << 1)
+
 /**
  * Initialize flow attributes structure according to flow items' types.
  *
@@ -5161,13 +5164,27 @@ struct field_modify_info modify_tcp[] = {
 					     dv->hash_fields,
 					     (*flow->queue),
 					     flow->rss.queue_num);
-			if (!hrxq)
+			if (!hrxq) {
+				int lro = 0;
+
+				if (mlx5_lro_on(dev)) {
+					if ((dev_flow->layers &
+					     MLX5_FLOW_LAYER_IPV4_LRO)
+					    == MLX5_FLOW_LAYER_IPV4_LRO)
+						lro = MLX5_FLOW_IPV4_LRO;
+					else if ((dev_flow->layers &
+						  MLX5_FLOW_LAYER_IPV6_LRO)
+						 == MLX5_FLOW_LAYER_IPV6_LRO)
+						lro = MLX5_FLOW_IPV6_LRO;
+				}
 				hrxq = mlx5_hrxq_new
 					(dev, flow->key, MLX5_RSS_HASH_KEY_LEN,
 					 dv->hash_fields, (*flow->queue),
 					 flow->rss.queue_num,
 					 !!(dev_flow->layers &
-					    MLX5_FLOW_LAYER_TUNNEL));
+					    MLX5_FLOW_LAYER_TUNNEL), lro);
+			}
+
 			if (!hrxq) {
 				rte_flow_error_set
 					(error, rte_errno,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index b3395b8..bcec3b4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1669,7 +1669,8 @@
 						     (*flow->queue),
 						     flow->rss.queue_num,
 						     !!(dev_flow->layers &
-						      MLX5_FLOW_LAYER_TUNNEL));
+							MLX5_FLOW_LAYER_TUNNEL),
+						     0);
 			if (!hrxq) {
 				rte_flow_error_set
 					(error, rte_errno,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 1e09078..b87eecc 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2030,6 +2030,8 @@ struct mlx5_rxq_ctrl *
  *   Number of queues.
  * @param tunnel
  *   Tunnel type.
+ * @param lro
+ *   Flow rule is relevant for LRO, i.e. contains IPv4/IPv6 and TCP.
  *
  * @return
  *   The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
@@ -2039,7 +2041,7 @@ struct mlx5_hrxq *
 	      const uint8_t *rss_key, uint32_t rss_key_len,
 	      uint64_t hash_fields,
 	      const uint16_t *queues, uint32_t queues_n,
-	      int tunnel __rte_unused)
+	      int tunnel __rte_unused, int lro)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_hrxq *hrxq;
@@ -2146,6 +2148,12 @@ struct mlx5_hrxq *
 		if (dev->data->dev_conf.lpbk_mode)
 			tir_attr.self_lb_block =
 					MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+		if (lro) {
+			tir_attr.lro_timeout_period_usecs =
+					priv->config.lro.timeout;
+			tir_attr.lro_max_msg_sz = 0xff;
+			tir_attr.lro_enable_mask = lro;
+		}
 		tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
 		if (!tir) {
 			DRV_LOG(ERR, "port %u cannot create DevX TIR",
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index bd4ae80..ed5f637 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -346,7 +346,7 @@ struct mlx5_hrxq *mlx5_hrxq_new(struct rte_eth_dev *dev,
 				const uint8_t *rss_key, uint32_t rss_key_len,
 				uint64_t hash_fields,
 				const uint16_t *queues, uint32_t queues_n,
-				int tunnel __rte_unused);
+				int tunnel __rte_unused, int lro);
 struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
 				const uint8_t *rss_key, uint32_t rss_key_len,
 				uint64_t hash_fields,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index acd2902..8bc2174 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -99,6 +99,9 @@
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i;
 	int ret = 0;
+	unsigned int lro_on = mlx5_lro_on(dev);
+	enum mlx5_rxq_obj_type obj_type = lro_on ? MLX5_RXQ_OBJ_TYPE_DEVX_RQ :
+						   MLX5_RXQ_OBJ_TYPE_IBV;
 
 	/* Allocate/reuse/resize mempool for Multi-Packet RQ. */
 	if (mlx5_mprq_alloc_mp(dev)) {
@@ -123,11 +126,13 @@
 		ret = rxq_alloc_elts(rxq_ctrl);
 		if (ret)
 			goto error;
-		rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i,
-						 MLX5_RXQ_OBJ_TYPE_DEVX_RQ);
+		rxq_ctrl->obj = mlx5_rxq_obj_new(dev, i, obj_type);
 		if (!rxq_ctrl->obj)
 			goto error;
-		rxq_ctrl->wqn = rxq_ctrl->obj->wq->wq_num;
+		if (obj_type == MLX5_RXQ_OBJ_TYPE_IBV)
+			rxq_ctrl->wqn = rxq_ctrl->obj->wq->wq_num;
+		else if (obj_type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ)
+			rxq_ctrl->wqn = rxq_ctrl->obj->rq->id;
 	}
 	return 0;
 error:
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 23/28] net/mlx5: replace the external mbuf shared memory
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (21 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 22/28] net/mlx5: support LRO with single RxQ object Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 24/28] net/mlx5: update LRO fields in completion entry Matan Azrad
                     ` (5 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

In preparation for the LRO support, where a packet may consume all the
stride memory, the external mbuf shared information can no longer be
placed at the end of the stride, because the HW may write the packet
data to the entire stride memory.

Move the shared information memory from the stride to the control
memory of the external mbuf.
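
The resulting chunk layout, per the mlx5_mprq_buf structure and the
mlx5_mprq_buf_addr() macro below (a sketch for illustration, not part
of the patch):

    /*
     * [ struct mlx5_mprq_buf | shinfos[strd_n] | headroom | strides ]
     */
    struct rte_mbuf_ext_shared_info *shinfo = &buf->shinfos[strd_idx];
    void *strd0 = mlx5_mprq_buf_addr(buf, strd_n); /* first stride data */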

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c  | 22 +++++++++++++++-------
 drivers/net/mlx5/mlx5_rxtx.c | 19 +++++++++++--------
 drivers/net/mlx5/mlx5_rxtx.h | 12 +++++++++++-
 3 files changed, 37 insertions(+), 16 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b87eecc..0538caf 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1358,14 +1358,22 @@ struct mlx5_rxq_obj *
  * Callback function to initialize mbufs for Multi-Packet RQ.
  */
 static inline void
-mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg __rte_unused,
+mlx5_mprq_buf_init(struct rte_mempool *mp, void *opaque_arg,
 		    void *_m, unsigned int i __rte_unused)
 {
 	struct mlx5_mprq_buf *buf = _m;
+	struct rte_mbuf_ext_shared_info *shinfo;
+	unsigned int strd_n = (unsigned int)(uintptr_t)opaque_arg;
+	unsigned int j;
 
 	memset(_m, 0, sizeof(*buf));
 	buf->mp = mp;
 	rte_atomic16_set(&buf->refcnt, 1);
+	for (j = 0; j != strd_n; ++j) {
+		shinfo = &buf->shinfos[j];
+		shinfo->free_cb = mlx5_mprq_buf_free_cb;
+		shinfo->fcb_opaque = buf;
+	}
 }
 
 /**
@@ -1460,7 +1468,8 @@ struct mlx5_rxq_obj *
 	}
 	assert(strd_num_n && strd_sz_n);
 	buf_len = (1 << strd_num_n) * (1 << strd_sz_n);
-	obj_size = buf_len + sizeof(struct mlx5_mprq_buf);
+	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len + (1 << strd_num_n) *
+		sizeof(struct rte_mbuf_ext_shared_info) + RTE_PKTMBUF_HEADROOM;
 	/*
 	 * Received packets can be either memcpy'd or externally referenced. In
 	 * case that the packet is attached to an mbuf as an external buffer, as
@@ -1505,7 +1514,8 @@ struct mlx5_rxq_obj *
 	}
 	snprintf(name, sizeof(name), "port-%u-mprq", dev->data->port_id);
 	mp = rte_mempool_create(name, obj_num, obj_size, MLX5_MPRQ_MP_CACHE_SZ,
-				0, NULL, NULL, mlx5_mprq_buf_init, NULL,
+				0, NULL, NULL, mlx5_mprq_buf_init,
+				(void *)(uintptr_t)(1 << strd_num_n),
 				dev->device->numa_node, 0);
 	if (mp == NULL) {
 		DRV_LOG(ERR,
@@ -1591,10 +1601,8 @@ struct mlx5_rxq_ctrl *
 	 *  Otherwise, enable Rx scatter if necessary.
 	 */
 	assert(mb_len >= RTE_PKTMBUF_HEADROOM);
-	mprq_stride_size =
-		dev->data->dev_conf.rxmode.max_rx_pkt_len +
-		sizeof(struct rte_mbuf_ext_shared_info) +
-		RTE_PKTMBUF_HEADROOM;
+	mprq_stride_size = dev->data->dev_conf.rxmode.max_rx_pkt_len +
+				RTE_PKTMBUF_HEADROOM;
 	if (mprq_en &&
 	    desc > (1U << config->mprq.stride_num_n) &&
 	    mprq_stride_size <= (1U << config->mprq.max_stride_size_n)) {
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 584da3e..241e01b 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -100,7 +100,8 @@ enum mlx5_txcmp_code {
 	       volatile struct mlx5_cqe *cqe, uint32_t rss_hash_res);
 
 static __rte_always_inline void
-mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx);
+mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx,
+		 const unsigned int strd_n);
 
 static int
 mlx5_queue_state_modify(struct rte_eth_dev *dev,
@@ -756,7 +757,8 @@ enum mlx5_txcmp_code {
 
 			scat = &((volatile struct mlx5_wqe_mprq *)
 				rxq->wqes)[i].dseg;
-			addr = (uintptr_t)mlx5_mprq_buf_addr(buf);
+			addr = (uintptr_t)mlx5_mprq_buf_addr(buf,
+							 1 << rxq->strd_num_n);
 			byte_count = (1 << rxq->strd_sz_n) *
 					(1 << rxq->strd_num_n);
 		} else {
@@ -1392,7 +1394,8 @@ enum mlx5_txcmp_code {
 }
 
 static inline void
-mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx)
+mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx,
+		 const unsigned int strd_n)
 {
 	struct mlx5_mprq_buf *rep = rxq->mprq_repl;
 	volatile struct mlx5_wqe_data_seg *wqe =
@@ -1403,7 +1406,7 @@ enum mlx5_txcmp_code {
 	/* Replace MPRQ buf. */
 	(*rxq->mprq_bufs)[rq_idx] = rep;
 	/* Replace WQE. */
-	addr = mlx5_mprq_buf_addr(rep);
+	addr = mlx5_mprq_buf_addr(rep, strd_n);
 	wqe->addr = rte_cpu_to_be_64((uintptr_t)addr);
 	/* If there's only one MR, no need to replace LKey in WQE. */
 	if (unlikely(mlx5_mr_btree_len(&rxq->mr_ctrl.cache_bh) > 1))
@@ -1459,7 +1462,7 @@ enum mlx5_txcmp_code {
 		if (consumed_strd == strd_n) {
 			/* Replace WQE only if the buffer is still in use. */
 			if (rte_atomic16_read(&buf->refcnt) > 1) {
-				mprq_buf_replace(rxq, rq_ci & wq_mask);
+				mprq_buf_replace(rxq, rq_ci & wq_mask, strd_n);
 				/* Release the old buffer. */
 				mlx5_mprq_buf_free(buf);
 			} else if (unlikely(rxq->mprq_repl == NULL)) {
@@ -1521,7 +1524,7 @@ enum mlx5_txcmp_code {
 		if (rxq->crc_present)
 			len -= RTE_ETHER_CRC_LEN;
 		offset = strd_idx * strd_sz + strd_shift;
-		addr = RTE_PTR_ADD(mlx5_mprq_buf_addr(buf), offset);
+		addr = RTE_PTR_ADD(mlx5_mprq_buf_addr(buf, strd_n), offset);
 		/* Initialize the offload flag. */
 		pkt->ol_flags = 0;
 		/*
@@ -1557,8 +1560,8 @@ enum mlx5_txcmp_code {
 			 */
 			buf_iova = rte_mempool_virt2iova(buf) +
 				   RTE_PTR_DIFF(addr, buf);
-			shinfo = rte_pktmbuf_ext_shinfo_init_helper(addr,
-					&buf_len, mlx5_mprq_buf_free_cb, buf);
+			shinfo = &buf->shinfos[strd_idx];
+			rte_mbuf_ext_refcnt_set(shinfo, 1);
 			/*
 			 * EXT_ATTACHED_MBUF will be set to pkt->ol_flags when
 			 * attaching the stride to mbuf and more offload flags
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index ed5f637..bbd9b31 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -75,10 +75,20 @@ struct mlx5_mprq_buf {
 	struct rte_mempool *mp;
 	rte_atomic16_t refcnt; /* Atomically accessed refcnt. */
 	uint8_t pad[RTE_PKTMBUF_HEADROOM]; /* Headroom for the first packet. */
+	struct rte_mbuf_ext_shared_info shinfos[];
+	/*
+	 * Shared information per stride.
+	 * More memory will be allocated for the first stride head-room and for
+	 * the strides data.
+	 */
 } __rte_cache_aligned;
 
 /* Get pointer to the first stride. */
-#define mlx5_mprq_buf_addr(ptr) ((ptr) + 1)
+#define mlx5_mprq_buf_addr(ptr, strd_n) (RTE_PTR_ADD((ptr), \
+				sizeof(struct mlx5_mprq_buf) + \
+				(strd_n) * \
+				sizeof(struct rte_mbuf_ext_shared_info) + \
+				RTE_PKTMBUF_HEADROOM))
 
 #define MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES 6
 #define MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES 9
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 24/28] net/mlx5: update LRO fields in completion entry
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (22 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 23/28] net/mlx5: replace the external mbuf shared memory Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 25/28] net/mlx5: handle LRO packets in Rx queue Matan Azrad
                     ` (4 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

Update the CQE structure to include LRO fields.

Some reserved fields were changed, hence the data-path code that used
those reserved fields was updated accordingly.
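
Since the vectorized Rx path loads CQE fields at fixed offsets, the
layout is pinned with compile-time checks; a stand-alone sketch of the
idea behind the updated S_ASSERT_MLX5_CQE asserts (assuming struct
mlx5_cqe from mlx5_prm.h is in scope):

    #include <stddef.h>

    /* One of the layout invariants the data path relies on. */
    _Static_assert(offsetof(struct mlx5_cqe, vlan_info) ==
                   offsetof(struct mlx5_cqe, hdr_type_etc) + 2,
                   "mlx5 CQE layout changed");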

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_prm.h          | 12 +++++++++---
 drivers/net/mlx5/mlx5_rxtx_vec.h     |  6 ++----
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 16 ++++++++--------
 3 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index b0e281f..3f73a28 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -317,13 +317,19 @@ struct mlx5_cqe {
 	uint8_t pkt_info;
 	uint8_t rsvd0;
 	uint16_t wqe_id;
-	uint8_t rsvd3[8];
+	uint8_t lro_tcppsh_abort_dupack;
+	uint8_t lro_min_ttl;
+	uint16_t lro_tcp_win;
+	uint32_t lro_ack_seq_num;
 	uint32_t rx_hash_res;
 	uint8_t rx_hash_type;
-	uint8_t rsvd1[11];
+	uint8_t rsvd1[3];
+	uint16_t csum;
+	uint8_t rsvd2[6];
 	uint16_t hdr_type_etc;
 	uint16_t vlan_info;
-	uint8_t rsvd2[12];
+	uint8_t lro_num_seg;
+	uint8_t rsvd3[11];
 	uint32_t byte_cnt;
 	uint64_t timestamp;
 	uint32_t sop_drop_qpn;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 4220b08..b54ff72 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -60,13 +60,11 @@
 #endif
 S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, rx_hash_res) ==
 		  offsetof(struct mlx5_cqe, pkt_info) + 12);
-S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, rsvd1) +
-		  sizeof(((struct mlx5_cqe *)0)->rsvd1) ==
+S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, rsvd1) + 11 ==
 		  offsetof(struct mlx5_cqe, hdr_type_etc));
 S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, vlan_info) ==
 		  offsetof(struct mlx5_cqe, hdr_type_etc) + 2);
-S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, rsvd2) +
-		  sizeof(((struct mlx5_cqe *)0)->rsvd2) ==
+S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, lro_num_seg) + 12 ==
 		  offsetof(struct mlx5_cqe, byte_cnt));
 S_ASSERT_MLX5_CQE(offsetof(struct mlx5_cqe, sop_drop_qpn) ==
 		  RTE_ALIGN(offsetof(struct mlx5_cqe, sop_drop_qpn), 8));
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index 7bd254f..ca8ed41 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -533,12 +533,12 @@
 		cqe_tmp1 = _mm_load_si128((__m128i *)&cq[pos + p2]);
 		cqes[3] = _mm_blendv_epi8(cqes[3], cqe_tmp2, blend_mask);
 		cqes[2] = _mm_blendv_epi8(cqes[2], cqe_tmp1, blend_mask);
-		cqe_tmp2 = _mm_loadu_si128((__m128i *)&cq[pos + p3].rsvd1[3]);
-		cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos + p2].rsvd1[3]);
+		cqe_tmp2 = _mm_loadu_si128((__m128i *)&cq[pos + p3].csum);
+		cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos + p2].csum);
 		cqes[3] = _mm_blend_epi16(cqes[3], cqe_tmp2, 0x30);
 		cqes[2] = _mm_blend_epi16(cqes[2], cqe_tmp1, 0x30);
-		cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p3].rsvd2[10]);
-		cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos + p2].rsvd2[10]);
+		cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p3].rsvd3[9]);
+		cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos + p2].rsvd3[9]);
 		cqes[3] = _mm_blend_epi16(cqes[3], cqe_tmp2, 0x04);
 		cqes[2] = _mm_blend_epi16(cqes[2], cqe_tmp1, 0x04);
 		/* C.2 generate final structure for mbuf with swapping bytes. */
@@ -560,12 +560,12 @@
 		cqe_tmp1 = _mm_load_si128((__m128i *)&cq[pos]);
 		cqes[1] = _mm_blendv_epi8(cqes[1], cqe_tmp2, blend_mask);
 		cqes[0] = _mm_blendv_epi8(cqes[0], cqe_tmp1, blend_mask);
-		cqe_tmp2 = _mm_loadu_si128((__m128i *)&cq[pos + p1].rsvd1[3]);
-		cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos].rsvd1[3]);
+		cqe_tmp2 = _mm_loadu_si128((__m128i *)&cq[pos + p1].csum);
+		cqe_tmp1 = _mm_loadu_si128((__m128i *)&cq[pos].csum);
 		cqes[1] = _mm_blend_epi16(cqes[1], cqe_tmp2, 0x30);
 		cqes[0] = _mm_blend_epi16(cqes[0], cqe_tmp1, 0x30);
-		cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p1].rsvd2[10]);
-		cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos].rsvd2[10]);
+		cqe_tmp2 = _mm_loadl_epi64((__m128i *)&cq[pos + p1].rsvd3[9]);
+		cqe_tmp1 = _mm_loadl_epi64((__m128i *)&cq[pos].rsvd3[9]);
 		cqes[1] = _mm_blend_epi16(cqes[1], cqe_tmp2, 0x04);
 		cqes[0] = _mm_blend_epi16(cqes[0], cqe_tmp1, 0x04);
 		/* C.2 generate final structure for mbuf with swapping bytes. */
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 25/28] net/mlx5: handle LRO packets in Rx queue
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (23 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 24/28] net/mlx5: update LRO fields in completion entry Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 26/28] net/mlx5: zero the LRO mbuf headroom Matan Azrad
                     ` (3 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

When LRO offload is configured in an Rx queue, the HW may coalesce TCP
packets from the same TCP connection into a single packet.

In this case the SW should fix the relevant packet headers because the
HW doesn't update them according to the characteristics of the newly
created packet.

Add header update code to the MPRQ Rx burst function to support the LRO
feature.
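
The checksum completion in mlx5_lro_update_tcp_hdr() folds a 32-bit
one's-complement accumulator into the final 16-bit TCP checksum; a
self-contained sketch of that arithmetic (the helper name is
illustrative):

    #include <stdint.h>

    static uint16_t
    lro_fold_csum(uint32_t csum)
    {
            csum = ((csum & 0xffff0000) >> 16) + (csum & 0xffff); /* wrap */
            csum = (~csum) & 0xffff; /* one's complement */
            return csum == 0 ? 0xffff : csum; /* zero is sent as 0xffff */
    }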

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_prm.h  |  15 ++++++
 drivers/net/mlx5/mlx5_rxtx.c | 113 +++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 123 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 3f73a28..32bc7a6 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -155,6 +155,21 @@
 /* Tunnel packet bit in the CQE. */
 #define MLX5_CQE_RX_TUNNEL_PACKET (1u << 0)
 
+/* Mask for LRO push flag in the CQE lro_tcppsh_abort_dupack field. */
+#define MLX5_CQE_LRO_PUSH_MASK 0x40
+
+/* Mask for L4 type in the CQE hdr_type_etc field. */
+#define MLX5_CQE_L4_TYPE_MASK 0x70
+
+/* The bit index of L4 type in CQE hdr_type_etc field. */
+#define MLX5_CQE_L4_TYPE_SHIFT 0x4
+
+/* L4 type to indicate TCP packet without acknowledgment. */
+#define MLX5_L4_HDR_TYPE_TCP_EMPTY_ACK 0x3
+
+/* L4 type to indicate TCP packet with acknowledgment. */
+#define MLX5_L4_HDR_TYPE_TCP_WITH_ACL 0x4
+
 /* Inner L3 checksum offload (Tunneled packets only). */
 #define MLX5_ETH_WQE_L3_INNER_CSUM (1u << 4)
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 241e01b..c7487ac 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1374,6 +1374,101 @@ enum mlx5_txcmp_code {
 	return i;
 }
 
+/**
+ * Update LRO packet TCP header.
+ * The HW LRO feature doesn't update the TCP header after coalescing the
+ * TCP segments but supplies information in the CQE for the SW to fill it in.
+ *
+ * @param tcp
+ *   Pointer to the TCP header.
+ * @param cqe
+ *   Pointer to the completion entry.
+ * @param phcsum
+ *   The L3 pseudo-header checksum.
+ */
+static inline void
+mlx5_lro_update_tcp_hdr(struct rte_tcp_hdr *restrict tcp,
+			volatile struct mlx5_cqe *restrict cqe,
+			uint32_t phcsum)
+{
+	uint8_t l4_type = (rte_be_to_cpu_16(cqe->hdr_type_etc) &
+			   MLX5_CQE_L4_TYPE_MASK) >> MLX5_CQE_L4_TYPE_SHIFT;
+	/*
+	 * The HW calculates only the TCP payload checksum, need to complete
+	 * the TCP header checksum and the L3 pseudo-header checksum.
+	 */
+	uint32_t csum = phcsum + cqe->csum;
+
+	if (l4_type == MLX5_L4_HDR_TYPE_TCP_EMPTY_ACK ||
+	    l4_type == MLX5_L4_HDR_TYPE_TCP_WITH_ACL) {
+		tcp->tcp_flags |= RTE_TCP_ACK_FLAG;
+		tcp->recv_ack = cqe->lro_ack_seq_num;
+		tcp->rx_win = cqe->lro_tcp_win;
+	}
+	if (cqe->lro_tcppsh_abort_dupack & MLX5_CQE_LRO_PUSH_MASK)
+		tcp->tcp_flags |= RTE_TCP_PSH_FLAG;
+	tcp->cksum = 0;
+	csum += rte_raw_cksum(tcp, (tcp->data_off & 0xF) * 4);
+	csum = ((csum & 0xffff0000) >> 16) + (csum & 0xffff);
+	csum = (~csum) & 0xffff;
+	if (csum == 0)
+		csum = 0xffff;
+	tcp->cksum = csum;
+}
+
+/**
+ * Update LRO packet headers.
+ * The HW LRO feature doesn't update the L3/TCP headers after coalescing the
+ * TCP segments but supplies information in the CQE for the SW to fill them in.
+ *
+ * @param padd
+ *   The packet address.
+ * @param cqe
+ *   Pointer to the completion entry.
+ * @param len
+ *   The packet length.
+ */
+static inline void
+mlx5_lro_update_hdr(uint8_t *restrict padd,
+		    volatile struct mlx5_cqe *restrict cqe,
+		    uint32_t len)
+{
+	union {
+		struct rte_ether_hdr *eth;
+		struct rte_vlan_hdr *vlan;
+		struct rte_ipv4_hdr *ipv4;
+		struct rte_ipv6_hdr *ipv6;
+		struct rte_tcp_hdr *tcp;
+		uint8_t *hdr;
+	} h = {
+			.hdr = padd,
+	};
+	uint16_t proto = h.eth->ether_type;
+	uint32_t phcsum;
+
+	h.eth++;
+	while (proto == RTE_BE16(RTE_ETHER_TYPE_VLAN) ||
+	       proto == RTE_BE16(RTE_ETHER_TYPE_QINQ)) {
+		proto = h.vlan->eth_proto;
+		h.vlan++;
+	}
+	if (proto == RTE_BE16(RTE_ETHER_TYPE_IPV4)) {
+		h.ipv4->time_to_live = cqe->lro_min_ttl;
+		h.ipv4->total_length = rte_cpu_to_be_16(len - (h.hdr - padd));
+		h.ipv4->hdr_checksum = 0;
+		h.ipv4->hdr_checksum = rte_ipv4_cksum(h.ipv4);
+		phcsum = rte_ipv4_phdr_cksum(h.ipv4, 0);
+		h.ipv4++;
+	} else {
+		h.ipv6->hop_limits = cqe->lro_min_ttl;
+		h.ipv6->payload_len = rte_cpu_to_be_16(len - (h.hdr - padd) -
+						       sizeof(*h.ipv6));
+		phcsum = rte_ipv6_phdr_cksum(h.ipv6, 0);
+		h.ipv6++;
+	}
+	mlx5_lro_update_tcp_hdr(h.tcp, cqe, phcsum);
+}
+
 void
 mlx5_mprq_buf_free_cb(void *addr __rte_unused, void *opaque)
 {
@@ -1458,6 +1553,7 @@ enum mlx5_txcmp_code {
 		uint32_t byte_cnt;
 		volatile struct mlx5_mini_cqe8 *mcqe = NULL;
 		uint32_t rss_hash_res = 0;
+		uint8_t lro_num_seg;
 
 		if (consumed_strd == strd_n) {
 			/* Replace WQE only if the buffer is still in use. */
@@ -1503,6 +1599,7 @@ enum mlx5_txcmp_code {
 		}
 		assert(strd_idx < strd_n);
 		assert(!((rte_be_to_cpu_16(cqe->wqe_id) ^ rq_ci) & wq_mask));
+		lro_num_seg = cqe->lro_num_seg;
 		/*
 		 * Currently configured to receive a packet per a stride. But if
 		 * MTU is adjusted through kernel interface, device could
@@ -1510,7 +1607,7 @@ enum mlx5_txcmp_code {
 		 * case, the packet should be dropped because it is bigger than
 		 * the max_rx_pkt_len.
 		 */
-		if (unlikely(strd_cnt > 1)) {
+		if (unlikely(!lro_num_seg && strd_cnt > 1)) {
 			++rxq->stats.idropped;
 			continue;
 		}
@@ -1547,19 +1644,20 @@ enum mlx5_txcmp_code {
 			rte_iova_t buf_iova;
 			struct rte_mbuf_ext_shared_info *shinfo;
 			uint16_t buf_len = strd_cnt * strd_sz;
+			void *buf_addr;
 
 			/* Increment the refcnt of the whole chunk. */
 			rte_atomic16_add_return(&buf->refcnt, 1);
 			assert((uint16_t)rte_atomic16_read(&buf->refcnt) <=
 			       strd_n + 1);
-			addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
+			buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
 			/*
 			 * MLX5 device doesn't use iova but it is necessary in a
 			 * case where the Rx packet is transmitted via a
 			 * different PMD.
 			 */
 			buf_iova = rte_mempool_virt2iova(buf) +
-				   RTE_PTR_DIFF(addr, buf);
+				   RTE_PTR_DIFF(buf_addr, buf);
 			shinfo = &buf->shinfos[strd_idx];
 			rte_mbuf_ext_refcnt_set(shinfo, 1);
 			/*
@@ -1568,8 +1666,8 @@ enum mlx5_txcmp_code {
 			 * will be added below by calling rxq_cq_to_mbuf().
 			 * Other fields will be overwritten.
 			 */
-			rte_pktmbuf_attach_extbuf(pkt, addr, buf_iova, buf_len,
-						  shinfo);
+			rte_pktmbuf_attach_extbuf(pkt, buf_addr, buf_iova,
+						  buf_len, shinfo);
 			rte_pktmbuf_reset_headroom(pkt);
 			assert(pkt->ol_flags == EXT_ATTACHED_MBUF);
 			/*
@@ -1583,6 +1681,11 @@ enum mlx5_txcmp_code {
 			}
 		}
 		rxq_cq_to_mbuf(rxq, pkt, cqe, rss_hash_res);
+		if (lro_num_seg > 1) {
+			mlx5_lro_update_hdr(addr, cqe, len);
+			pkt->ol_flags |= PKT_RX_LRO;
+			pkt->tso_segsz = strd_sz;
+		}
 		PKT_LEN(pkt) = len;
 		DATA_LEN(pkt) = len;
 		PORT(pkt) = rxq->port_id;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 26/28] net/mlx5: zero the LRO mbuf headroom
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (24 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 25/28] net/mlx5: handle LRO packets in Rx queue Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 27/28] net/mlx5: adjust the maximum LRO message size Matan Azrad
                     ` (2 subsequent siblings)
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

An LRO packet may consume all the stride memory, hence the PMD cannot
guarantee head-room for the LRO mbuf.

The issue is the lack of HW support for writing the packet at an offset
from the stride start.

A new striding RQ feature may be added in CX6 DX to allow head-room and
tail-room for the LRO strides.
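
A sketch of the head-room selection introduced here: the per-queue
strd_headroom_en bit turns the usual mbuf head-room on or off for every
packet attached to a stride:

    /* Zero head-room when LRO may fill the entire stride. */
    uint16_t headroom_sz = rxq->strd_headroom_en * RTE_PKTMBUF_HEADROOM;

    pkt->data_off = headroom_sz; /* replaces rte_pktmbuf_reset_headroom() */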

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c  | 16 +++++++++++-----
 drivers/net/mlx5/mlx5_rxtx.c |  6 ++++--
 drivers/net/mlx5/mlx5_rxtx.h |  3 ++-
 3 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 0538caf..edfcdd1 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1566,6 +1566,12 @@ struct mlx5_rxq_ctrl *
 	unsigned int mprq_stride_size;
 	struct mlx5_dev_config *config = &priv->config;
 	/*
+	 * An LRO packet may consume all the stride memory, hence we cannot
+	 * guarantee head-room. A new striding RQ feature may be added in CX6 DX
+	 * to allow head-room and tail-room for the LRO packets.
+	 */
+	unsigned int strd_headroom_en = mlx5_lro_on(dev) ? 0 : 1;
+	/*
 	 * Always allocate extra slots, even if eventually
 	 * the vector Rx will not be used.
 	 */
@@ -1600,9 +1606,9 @@ struct mlx5_rxq_ctrl *
 	 *    stride.
 	 *  Otherwise, enable Rx scatter if necessary.
 	 */
-	assert(mb_len >= RTE_PKTMBUF_HEADROOM);
+	assert(mb_len >= RTE_PKTMBUF_HEADROOM * strd_headroom_en);
 	mprq_stride_size = dev->data->dev_conf.rxmode.max_rx_pkt_len +
-				RTE_PKTMBUF_HEADROOM;
+				RTE_PKTMBUF_HEADROOM * strd_headroom_en;
 	if (mprq_en &&
 	    desc > (1U << config->mprq.stride_num_n) &&
 	    mprq_stride_size <= (1U << config->mprq.max_stride_size_n)) {
@@ -1614,9 +1620,9 @@ struct mlx5_rxq_ctrl *
 		tmpl->rxq.strd_sz_n = RTE_MAX(log2above(mprq_stride_size),
 					      config->mprq.min_stride_size_n);
 		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
-		tmpl->rxq.mprq_max_memcpy_len =
-			RTE_MIN(mb_len - RTE_PKTMBUF_HEADROOM,
-				config->mprq.max_memcpy_len);
+		tmpl->rxq.strd_headroom_en = strd_headroom_en;
+		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(mb_len -
+			    RTE_PKTMBUF_HEADROOM, config->mprq.max_memcpy_len);
 		DRV_LOG(DEBUG,
 			"port %u Rx queue %u: Multi-Packet RQ is enabled"
 			" strd_num_n = %u, strd_sz_n = %u",
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index c7487ac..3872966 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1540,6 +1540,7 @@ enum mlx5_txcmp_code {
 	unsigned int i = 0;
 	uint32_t rq_ci = rxq->rq_ci;
 	uint16_t consumed_strd = rxq->consumed_strd;
+	uint16_t headroom_sz = rxq->strd_headroom_en * RTE_PKTMBUF_HEADROOM;
 	struct mlx5_mprq_buf *buf = (*rxq->mprq_bufs)[rq_ci & wq_mask];
 
 	while (i < pkts_n) {
@@ -1650,7 +1651,7 @@ enum mlx5_txcmp_code {
 			rte_atomic16_add_return(&buf->refcnt, 1);
 			assert((uint16_t)rte_atomic16_read(&buf->refcnt) <=
 			       strd_n + 1);
-			buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
+			buf_addr = RTE_PTR_SUB(addr, headroom_sz);
 			/*
 			 * MLX5 device doesn't use iova but it is necessary in a
 			 * case where the Rx packet is transmitted via a
@@ -1668,7 +1669,8 @@ enum mlx5_txcmp_code {
 			 */
 			rte_pktmbuf_attach_extbuf(pkt, buf_addr, buf_iova,
 						  buf_len, shinfo);
-			rte_pktmbuf_reset_headroom(pkt);
+			/* Set mbuf head-room. */
+			pkt->data_off = headroom_sz;
 			assert(pkt->ol_flags == EXT_ATTACHED_MBUF);
 			/*
 			 * Prevent potential overflow due to MTU change through
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index bbd9b31..4252832 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -114,7 +114,8 @@ struct mlx5_rxq_data {
 	unsigned int strd_sz_n:4; /* Log 2 of stride size. */
 	unsigned int strd_shift_en:1; /* Enable 2bytes shift on a stride. */
 	unsigned int err_state:2; /* enum mlx5_rxq_err_state. */
-	unsigned int :4; /* Remaining bits. */
+	unsigned int strd_headroom_en:1; /* Enable mbuf headroom in MPRQ. */
+	unsigned int :3; /* Remaining bits. */
 	volatile uint32_t *rq_db;
 	volatile uint32_t *cq_db;
 	uint16_t port_id;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [dpdk-dev] [PATCH v2 27/28] net/mlx5: adjust the maximum LRO message size
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (25 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 26/28] net/mlx5: zero the LRO mbuf headroom Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 28/28] doc: update MLX5 doc and release notes with LRO Matan Azrad
  2019-07-23  6:48   ` [dpdk-dev] [PATCH v2 00/28] net/mlx5: support LRO Raslan Darawsheh
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

An LRO message is contained in the MPRQ strides.
While the LRO message size cannot be bigger than 65280 bytes according to
the PRM, the strides which contain it may together be bigger than the
maximum buffer size allowed in a DPDK mbuf (0xFFFF).

Adjust the maximum LRO message size to avoid buffer length overflow.
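
As an arithmetic illustration of the adjustment (the stride parameters below
are hypothetical, chosen only to show the computation; the division by 256
reflects the apparent 256-byte granularity of the TIR lro_max_msg_sz field):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t strd_n = 512, strd_sz = 2048;   /* assumed values */
		uint32_t max_buf_len = strd_sz * strd_n; /* 1048576 > UINT16_MAX */

		/* Floor-align to the stride size, as RTE_ALIGN_FLOOR does for
		 * power-of-two strides: 63488 here. */
		if (max_buf_len > UINT16_MAX)
			max_buf_len = (UINT16_MAX / strd_sz) * strd_sz;
		max_buf_len /= 256;                      /* 256B units: 248 */
		if (max_buf_len > UINT8_MAX)
			max_buf_len = UINT8_MAX;
		printf("lro_max_msg_sz = %u (%u bytes)\n",
		       max_buf_len, max_buf_len * 256);
		return 0;
	}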

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5.h     |  1 +
 drivers/net/mlx5/mlx5_rxq.c | 37 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b98c406..6cb8858 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -625,6 +625,7 @@ struct mlx5_priv {
 	struct ibv_flow_action *verbs_action;
 	/**< Verbs modify header action object. */
 	uint8_t ft_type; /**< Flow table type, Rx or Tx. */
+	uint8_t max_lro_msg_size;
 	/* Tags resources cache. */
 	uint32_t link_speed_capa; /* Link speed capabilities. */
 	struct mlx5_xstats_ctrl xstats_ctrl; /* Extended stats control. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index edfcdd1..b225055 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1541,6 +1541,39 @@ struct mlx5_rxq_obj *
 }
 
 /**
+ * Adjust the maximum LRO message size.
+ * An LRO message is contained in the MPRQ strides.
+ * While the LRO message size cannot be bigger than 65280 bytes according to
+ * the PRM, the strides which contain it may be bigger.
+ * Adjust the maximum LRO message size to avoid such an overflow.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param strd_n
+ *   Number of strides per WQE.
+ * @param strd_sz
+ *   The stride size.
+ */
+static void
+mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint32_t strd_n,
+			     uint32_t strd_sz)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	uint32_t max_buf_len = strd_sz * strd_n;
+
+	if (max_buf_len > (uint64_t)UINT16_MAX)
+		max_buf_len = RTE_ALIGN_FLOOR((uint32_t)UINT16_MAX, strd_sz);
+	max_buf_len /= 256;
+	max_buf_len = RTE_MIN(max_buf_len, (uint32_t)UINT8_MAX);
+	assert(max_buf_len);
+	if (priv->max_lro_msg_size)
+		priv->max_lro_msg_size =
+			RTE_MIN((uint32_t)priv->max_lro_msg_size, max_buf_len);
+	else
+		priv->max_lro_msg_size = max_buf_len;
+}
+
+/**
  * Create a DPDK Rx queue.
  *
  * @param dev
@@ -1623,6 +1656,8 @@ struct mlx5_rxq_ctrl *
 		tmpl->rxq.strd_headroom_en = strd_headroom_en;
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(mb_len -
 			    RTE_PKTMBUF_HEADROOM, config->mprq.max_memcpy_len);
+		mlx5_max_lro_msg_size_adjust(dev, (1 << tmpl->rxq.strd_num_n),
+					     (1 << tmpl->rxq.strd_sz_n));
 		DRV_LOG(DEBUG,
 			"port %u Rx queue %u: Multi-Packet RQ is enabled"
 			" strd_num_n = %u, strd_sz_n = %u",
@@ -2165,7 +2200,7 @@ struct mlx5_hrxq *
 		if (lro) {
 			tir_attr.lro_timeout_period_usecs =
 					priv->config.lro.timeout;
-			tir_attr.lro_max_msg_sz = 0xff;
+			tir_attr.lro_max_msg_sz = priv->max_lro_msg_size;
 			tir_attr.lro_enable_mask = lro;
 		}
 		tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
-- 
1.8.3.1


* [dpdk-dev] [PATCH v2 28/28] doc: update MLX5 doc and release notes with LRO
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (26 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 27/28] net/mlx5: adjust the maximum LRO message size Matan Azrad
@ 2019-07-22 14:52   ` Matan Azrad
  2019-07-23  6:48   ` [dpdk-dev] [PATCH v2 00/28] net/mlx5: support LRO Raslan Darawsheh
  28 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-22 14:52 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled

From: Dekel Peled <dekelp@mellanox.com>

Add documentation of LRO feature.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 doc/guides/nics/features/mlx5.ini      |  1 +
 doc/guides/nics/mlx5.rst               | 14 ++++++++++++++
 doc/guides/rel_notes/release_19_08.rst |  2 +-
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index f7e7358..c0ebdbd 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -13,6 +13,7 @@ Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
+LRO                  = Y
 TSO                  = Y
 Promiscuous mode     = Y
 Allmulticast mode    = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 7e87344..85d96be 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -82,6 +82,7 @@ Features
   increment/decrement, count, drop, mark. For details please see :ref:`Supported hardware offloads using rte_flow API`.
 - Flow insertion rate of more than a million flows per second, when using Direct Rules.
 - Support for multiple rte_flow groups.
+- Hardware LRO.
 
 Limitations
 -----------
@@ -162,6 +163,11 @@ Limitations
 
 - ICMP/ICMP6 code/type matching cannot be supported together with IP-in-IP tunnel.
 
+- LRO:
+
+  - No mbuf head-room space is reserved for Rx packets when LRO is configured.
+  - scatter_fcs is disabled when LRO is configured.
+
 Statistics
 ----------
 
@@ -556,6 +562,14 @@ Run-time configuration
 
   set to 128 by default.
 
+- ``lro_timeout_usec`` parameter [int]
+
+  The maximum allowed duration of an LRO session, in microseconds.
+  The PMD will set the nearest value supported by the HW that is not bigger
+  than the input lro_timeout_usec value.
+  If this parameter is not specified, the PMD will by default use the smallest
+  value supported by the HW.
+
 Firmware configuration
 ~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst
index 6c382cb..d8676b6 100644
--- a/doc/guides/rel_notes/release_19_08.rst
+++ b/doc/guides/rel_notes/release_19_08.rst
@@ -117,7 +117,7 @@ New Features
   * Accelerate flows with count action creation and destroy.
   * Accelerate flows counter query.
  * Improved Tx datapath performance with HW offloads enabled.
-
+  * Added support for LRO.
 
 * **Updated Solarflare network PMD.**
 
-- 
1.8.3.1
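
As a usage illustration for the new devarg (the PCI address and timeout value
below are hypothetical), lro_timeout_usec can be passed through the EAL device
arguments, e.g.:

	#include <rte_eal.h>

	int main(void)
	{
		char arg0[] = "app";
		char arg1[] = "-w"; /* whitelist the device and attach devargs */
		char arg2[] = "0000:03:00.0,lro_timeout_usec=100";
		char *eal_args[] = { arg0, arg1, arg2 };

		/* rte_eal_init() parses the devargs; the mlx5 PMD then picks the
		 * nearest HW-supported timeout not bigger than the requested 100us. */
		return rte_eal_init(3, eal_args) < 0 ? -1 : 0;
	}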


* Re: [dpdk-dev] [PATCH v2 00/28] net/mlx5: support LRO
  2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
                     ` (27 preceding siblings ...)
  2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 28/28] doc: update MLX5 doc and release notes with LRO Matan Azrad
@ 2019-07-23  6:48   ` Raslan Darawsheh
  28 siblings, 0 replies; 92+ messages in thread
From: Raslan Darawsheh @ 2019-07-23  6:48 UTC (permalink / raw)
  To: Matan Azrad, Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Slava Ovsiienko
  Cc: dev, Dekel Peled

Hi,

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Matan Azrad
> Sent: Monday, July 22, 2019 5:52 PM
> To: Ferruh Yigit <ferruh.yigit@intel.com>; Shahaf Shuler
> <shahafs@mellanox.com>; Yongseok Koh <yskoh@mellanox.com>; Slava
> Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [dpdk-dev] [PATCH v2 00/28] net/mlx5: support LRO
> 
> Introduction:
> LRO (Large Receive Offload) is intended to reduce host CPU overhead when
> processing Rx TCP packets.
> LRO works by aggregating multiple incoming packets from a single stream
> into a larger buffer, before they are passed higher up the networking stack.
> Thus reducing the number of packets that have to be processed.
> 
> Use:
> MLX5 PMD will query the HCA capabilities on initialization to check if LRO is
> supported and can be used.
> LRO in MLX5 PMD is intended for use by applications using a relatively small
> number of flows.
> LRO support can be enabled only per port.
> In each LRO session, packets of the same flow will be coalesced until one of
> the following occur:
>   *   Buffer size limit is exceeded.
>   *   Session timeout is exceeded.
>   *   Packet from a different flow is received on the same queue.
> 
> When LRO session ends the coalesced packet is passed to the PMD, which
> will update the header fields before passing the packet to the application.
> For efficient memory utilization, the MPRQ mechanism is used.
> Support of Non-LRO flows will not be impacted.
> 
> Existing API:
> Offload capability DEV_RX_OFFLOAD_TCP_LRO will be used to indicate
> device supports LRO.
> testpmd command-line option "-enable-lro" will be used to request LRO
> feature enable on application start.
> testpmd rx_offload "tcp_lro" on or off will be used to request LRO feature
> enable or disable during application runtime.
> Offload flag PKT_RX_LRO will be used. This flag can be set in Rx mbuf to
> indicate this is a LRO coalesced packet.
> 
> New API:
> PMD configuration parameter lro_timeout_usec will be added.
> This parameter can be used by application to select LRO session timeout (in
> microseconds).
> If this value is not specified, the minimal value supported by device will be
> used.
> 
> Known limitations:
> mbuf head-room is zero for any packet if LRO is configured in the port.
> Keep CRC offload cannot be supported with LRO.
> CQE compression is not supported with LRO.
> 
> v2:
> Fix small compilation issue detected per commit (Found By Ferruh).
> 
> Dekel Peled (23):
>   net/mlx5: remove redundant item from union
>   net/mlx5: add LRO APIs and initial settings
>   net/mlx5: support LRO caps query using devx API
>   net/mlx5: glue func for queue query using new API
>   net/mlx5: glue function for action using new API
>   net/mlx5: check conditions to enable LRO
>   net/mlx5: support Tx interface query using new API
>   net/mlx5: update Tx queue create for LRO
>   net/mlx5: create advanced RxQ object using new API
>   net/mlx5: modify advanced RxQ object using new API
>   net/mlx5: create advanced Rx object using new API
>   net/mlx5: create advanced RxQ table using new API
>   net/mlx5: allocate door-bells using new API
>   net/mlx5: rename RxQ verbs to general RxQ object
>   net/mlx5: rename verbs indirection table to obj
>   net/mlx5: rename hash RxQ verbs to general
>   net/mlx5: update queue state modify function
>   net/mlx5: store protection domain number on create
>   net/mlx5: func to create Rx verbs completion queue
>   net/mlx5: function to create Rx verbs work queue
>   net/mlx5: create advanced RxQ using new API
>   net/mlx5: support LRO with single RxQ object
>   doc: update MLX5 doc and release notes with LRO
> 
> Matan Azrad (5):
>   net/mlx5: replace the external mbuf shared memory
>   net/mlx5: update LRO fields in completion entry
>   net/mlx5: handle LRO packets in Rx queue
>   net/mlx5: zero the LRO mbuf headroom
>   net/mlx5: adjust the maximum LRO message size
> 
>  doc/guides/nics/features/mlx5.ini      |    1 +
>  doc/guides/nics/mlx5.rst               |   14 +
>  doc/guides/rel_notes/release_19_08.rst |    2 +-
>  drivers/net/mlx5/Makefile              |    5 +
>  drivers/net/mlx5/meson.build           |    2 +
>  drivers/net/mlx5/mlx5.c                |  223 ++++++-
>  drivers/net/mlx5/mlx5.h                |  160 ++++-
>  drivers/net/mlx5/mlx5_devx_cmds.c      |  326 +++++++++
>  drivers/net/mlx5/mlx5_ethdev.c         |   14 +-
>  drivers/net/mlx5/mlx5_flow.h           |    6 +
>  drivers/net/mlx5/mlx5_flow_dv.c        |   28 +-
>  drivers/net/mlx5/mlx5_flow_verbs.c     |    3 +-
>  drivers/net/mlx5/mlx5_glue.c           |   33 +
>  drivers/net/mlx5/mlx5_glue.h           |    6 +-
>  drivers/net/mlx5/mlx5_prm.h            |  379 ++++++++++-
>  drivers/net/mlx5/mlx5_rxq.c            | 1132 ++++++++++++++++++++++-------
> ---
>  drivers/net/mlx5/mlx5_rxtx.c           |  167 ++++-
>  drivers/net/mlx5/mlx5_rxtx.h           |   80 ++-
>  drivers/net/mlx5/mlx5_rxtx_vec.h       |    6 +-
>  drivers/net/mlx5/mlx5_rxtx_vec_sse.h   |   16 +-
>  drivers/net/mlx5/mlx5_trigger.c        |   12 +-
>  drivers/net/mlx5/mlx5_txq.c            |   27 +-
>  drivers/net/mlx5/mlx5_vlan.c           |   32 +-
>  23 files changed, 2194 insertions(+), 480 deletions(-)
> 
> --
> 1.8.3.1

Series applied to next-net-mlx,

Kindest regards
Raslan Darawsheh

* Re: [dpdk-dev] [PATCH v2 01/28] net/mlx5: remove redundant item from union
  2019-07-22 14:51   ` [dpdk-dev] [PATCH v2 01/28] net/mlx5: remove redundant item from union Matan Azrad
@ 2019-07-23 10:53     ` Ferruh Yigit
  2019-07-23 12:10       ` Matan Azrad
  0 siblings, 1 reply; 92+ messages in thread
From: Ferruh Yigit @ 2019-07-23 10:53 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
  Cc: dev, Dekel Peled, Yongseok Koh, Luca Boccassi, Kevin Traynor,
	Thomas Monjalon

On 7/22/2019 3:51 PM, Matan Azrad wrote:
> From: Dekel Peled <dekelp@mellanox.com>
> 
> A variable of type struct ibv_cq_ex is declared in 2 unions, but
> isn't used.
> This patch removes the 2 redundant declarations.
This is not a fix, but I think it still makes sense to request backporting
this kind of refactoring too:

1) It won't hurt to have these cleanups in the LTS.
2) Not getting them may cause conflicts for actual fixes in the long run.
This is a small change on its own, but when these kinds of changes
accumulate, they may make a difference.

cc'ed LTS maintainers in case I am missing a point.

> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Matan Azrad <matan@mellanox.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
>  drivers/net/mlx5/mlx5_rxq.c | 1 -
>  drivers/net/mlx5/mlx5_txq.c | 1 -
>  2 files changed, 2 deletions(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index 39b8b7a..0535ce3 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -839,7 +839,6 @@ struct mlx5_rxq_ibv *
>  			struct mlx5dv_wq_init_attr mlx5;
>  #endif
>  		} wq;
> -		struct ibv_cq_ex cq_attr;
>  	} attr;
>  	unsigned int cqe_n;
>  	unsigned int wqe_n = 1 << rxq_data->elts_n;
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 2f3aa5b..dbad361 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -388,7 +388,6 @@ struct mlx5_txq_ibv *
>  		struct ibv_qp_init_attr_ex init;
>  		struct ibv_cq_init_attr_ex cq;
>  		struct ibv_qp_attr mod;
> -		struct ibv_cq_ex cq_attr;
>  	} attr;
>  	unsigned int cqe_n;
>  	struct mlx5dv_qp qp = { .comp_mask = MLX5DV_QP_MASK_UAR_MMAP_OFFSET };
> 


* Re: [dpdk-dev] [PATCH v2 01/28] net/mlx5: remove redundant item from union
  2019-07-23 10:53     ` Ferruh Yigit
@ 2019-07-23 12:10       ` Matan Azrad
  0 siblings, 0 replies; 92+ messages in thread
From: Matan Azrad @ 2019-07-23 12:10 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler, Yongseok Koh, Slava Ovsiienko
  Cc: dev, Dekel Peled, Yongseok Koh, Luca Boccassi, Kevin Traynor,
	Thomas Monjalon

Hi Ferruh

 From: Ferruh Yigit 
> On 7/22/2019 3:51 PM, Matan Azrad wrote:
> > From: Dekel Peled <dekelp@mellanox.com>
> >
> > A variable of type struct ibv_cq_ex is declared in 2 unions, but isn't
> > used.
> > This patch removes the 2 redundant declarations.
> This is not a fix, but I think it still makes sense to request backporting
> this kind of refactoring too:
> 
> 1) It won't hurt to have these cleanups in the LTS.
> 2) Not getting them may cause conflicts for actual fixes in the long run.
> This is a small change on its own, but when these kinds of changes
> accumulate, they may make a difference.
> 
> cc'ed LTS maintainers in case I am missing a point.
> 
Agree.
You can add it for stable releases.

Matan.

> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> > Acked-by: Matan Azrad <matan@mellanox.com>
> > Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> > ---
> >  drivers/net/mlx5/mlx5_rxq.c | 1 -
> >  drivers/net/mlx5/mlx5_txq.c | 1 -
> >  2 files changed, 2 deletions(-)
> >
> > diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> > index 39b8b7a..0535ce3 100644
> > --- a/drivers/net/mlx5/mlx5_rxq.c
> > +++ b/drivers/net/mlx5/mlx5_rxq.c
> > @@ -839,7 +839,6 @@ struct mlx5_rxq_ibv *
> >  			struct mlx5dv_wq_init_attr mlx5;
> >  #endif
> >  		} wq;
> > -		struct ibv_cq_ex cq_attr;
> >  	} attr;
> >  	unsigned int cqe_n;
> >  	unsigned int wqe_n = 1 << rxq_data->elts_n; diff --git
> > a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c index
> > 2f3aa5b..dbad361 100644
> > --- a/drivers/net/mlx5/mlx5_txq.c
> > +++ b/drivers/net/mlx5/mlx5_txq.c
> > @@ -388,7 +388,6 @@ struct mlx5_txq_ibv *
> >  		struct ibv_qp_init_attr_ex init;
> >  		struct ibv_cq_init_attr_ex cq;
> >  		struct ibv_qp_attr mod;
> > -		struct ibv_cq_ex cq_attr;
> >  	} attr;
> >  	unsigned int cqe_n;
> >  	struct mlx5dv_qp qp = { .comp_mask =
> MLX5DV_QP_MASK_UAR_MMAP_OFFSET
> > };
> >


end of thread (newest message: 2019-07-23 12:10 UTC)

Thread overview: 92+ messages
2019-07-22  9:12 [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Matan Azrad
2019-07-22  9:12 ` [dpdk-dev] [PATCH 01/28] net/mlx5: remove redundant item from union Matan Azrad
2019-07-22  9:17   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 02/28] net/mlx5: add LRO APIs and initial settings Matan Azrad
2019-07-22  9:25   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 03/28] net/mlx5: support LRO caps query using devx API Matan Azrad
2019-07-22  9:17   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 04/28] net/mlx5: glue func for queue query using new API Matan Azrad
2019-07-22  9:18   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 05/28] net/mlx5: glue function for action " Matan Azrad
2019-07-22  9:18   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 06/28] net/mlx5: check conditions to enable LRO Matan Azrad
2019-07-22  9:18   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 07/28] net/mlx5: support Tx interface query using new API Matan Azrad
2019-07-22  9:19   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 08/28] net/mlx5: update Tx queue create for LRO Matan Azrad
2019-07-22  9:18   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 09/28] net/mlx5: create advanced RxQ object using new API Matan Azrad
2019-07-22  9:17   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 10/28] net/mlx5: modify " Matan Azrad
2019-07-22  9:20   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 11/28] net/mlx5: create advanced Rx " Matan Azrad
2019-07-22  9:20   ` Slava Ovsiienko
2019-07-22  9:12 ` [dpdk-dev] [PATCH 12/28] net/mlx5: create advanced RxQ table " Matan Azrad
2019-07-22  9:21   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 13/28] net/mlx5: allocate door-bells " Matan Azrad
2019-07-22  9:20   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 14/28] net/mlx5: rename RxQ verbs to general RxQ object Matan Azrad
2019-07-22  9:22   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 15/28] net/mlx5: rename verbs indirection table to obj Matan Azrad
2019-07-22  9:22   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 16/28] net/mlx5: rename hash RxQ verbs to general Matan Azrad
2019-07-22  9:22   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 17/28] net/mlx5: update queue state modify function Matan Azrad
2019-07-22  9:22   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 18/28] net/mlx5: store protection domain number on create Matan Azrad
2019-07-22  9:21   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 19/28] net/mlx5: func to create Rx verbs completion queue Matan Azrad
2019-07-22  9:23   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 20/28] net/mlx5: function to create Rx verbs work queue Matan Azrad
2019-07-22  9:21   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 21/28] net/mlx5: create advanced RxQ using new API Matan Azrad
2019-07-22  9:21   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 22/28] net/mlx5: support LRO with single RxQ object Matan Azrad
2019-07-22  9:22   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 23/28] net/mlx5: replace the external mbuf shared memory Matan Azrad
2019-07-22  9:21   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 24/28] net/mlx5: update LRO fields in completion entry Matan Azrad
2019-07-22  9:23   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 25/28] net/mlx5: handle LRO packets in Rx queue Matan Azrad
2019-07-22  9:26   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 26/28] net/mlx5: zero the LRO mbuf headroom Matan Azrad
2019-07-22  9:23   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 27/28] net/mlx5: adjust the maximum LRO message size Matan Azrad
2019-07-22  9:23   ` Slava Ovsiienko
2019-07-22  9:13 ` [dpdk-dev] [PATCH 28/28] doc: update MLX5 doc and release notes with LRO Matan Azrad
2019-07-22  9:23   ` Slava Ovsiienko
2019-07-22 10:42 ` [dpdk-dev] [PATCH 00/28] net/mlx5: support LRO Raslan Darawsheh
2019-07-22 12:48 ` Ferruh Yigit
2019-07-22 13:32   ` Matan Azrad
2019-07-22 14:51 ` [dpdk-dev] [PATCH v2 " Matan Azrad
2019-07-22 14:51   ` [dpdk-dev] [PATCH v2 01/28] net/mlx5: remove redundant item from union Matan Azrad
2019-07-23 10:53     ` Ferruh Yigit
2019-07-23 12:10       ` Matan Azrad
2019-07-22 14:51   ` [dpdk-dev] [PATCH v2 02/28] net/mlx5: add LRO APIs and initial settings Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 03/28] net/mlx5: support LRO caps query using devx API Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 04/28] net/mlx5: glue func for queue query using new API Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 05/28] net/mlx5: glue function for action " Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 06/28] net/mlx5: check conditions to enable LRO Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 07/28] net/mlx5: support Tx interface query using new API Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 08/28] net/mlx5: update Tx queue create for LRO Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 09/28] net/mlx5: create advanced RxQ object using new API Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 10/28] net/mlx5: modify " Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 11/28] net/mlx5: create advanced Rx " Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 12/28] net/mlx5: create advanced RxQ table " Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 13/28] net/mlx5: allocate door-bells " Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 14/28] net/mlx5: rename RxQ verbs to general RxQ object Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 15/28] net/mlx5: rename verbs indirection table to obj Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 16/28] net/mlx5: rename hash RxQ verbs to general Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 17/28] net/mlx5: update queue state modify function Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 18/28] net/mlx5: store protection domain number on create Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 19/28] net/mlx5: func to create Rx verbs completion queue Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 20/28] net/mlx5: function to create Rx verbs work queue Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 21/28] net/mlx5: create advanced RxQ using new API Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 22/28] net/mlx5: support LRO with single RxQ object Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 23/28] net/mlx5: replace the external mbuf shared memory Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 24/28] net/mlx5: update LRO fields in completion entry Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 25/28] net/mlx5: handle LRO packets in Rx queue Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 26/28] net/mlx5: zero the LRO mbuf headroom Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 27/28] net/mlx5: adjust the maximum LRO message size Matan Azrad
2019-07-22 14:52   ` [dpdk-dev] [PATCH v2 28/28] doc: update MLX5 doc and release notes with LRO Matan Azrad
2019-07-23  6:48   ` [dpdk-dev] [PATCH v2 00/28] net/mlx5: support LRO Raslan Darawsheh
