* [dpdk-dev] [RFC 0/7] hide eth dev related structures
@ 2021-08-20 16:28 Konstantin Ananyev
  2021-08-20 16:28 ` [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure Konstantin Ananyev
                   ` (8 more replies)
  0 siblings, 9 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
	beilei.xing, techboard, Konstantin Ananyev

NOTE: This is just an RFC to start further discussion and collect feedback.
Due to the significant amount of work involved, the required changes have been
applied to only two PMDs so far: net/i40e and net/ice.
So to build it you'll need to add:
-Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
to your config options.

The aim of this patch series is to make the rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to DPDK
and not visible to the user.
That should allow future changes to the core ethdev structures to be
transparent to the user and help to improve ABI/API stability.
Note that the current ethdev API is preserved, though this is definitely
an ABI break.

The work is based on previous discussion at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
and consists of the following main points:
1. Move the public 'fast' function pointers (rx_pkt_burst(), etc.) from
   rte_eth_dev into a separate flat array. We keep it public so that the
   inline functions for these 'fast' calls (rte_eth_rx_burst(), etc.) can
   still be used, to avoid/minimize any slowdown.
2. Change the prototype of these 'fast' functions within PMDs
   (rx_pkt_burst(), etc.) to accept a <port_id, queue_id> pair
   instead of a queue pointer.
3. Some mechanical changes to the function entry/exit code are also required.
   Basically, to avoid an extra level of indirection, the PMD now has to do
   some preliminary checks and data retrieval that are currently done at the
   user level by the inline rte_eth_* functions.
4. Special _rte_eth_*_prolog(/epilog) inline functions and helper macros
   are provided to make these changes inside the PMDs as straightforward
   as possible.
5. Change the implementation of the 'fast' ethdev functions
   (rte_eth_rx_burst(), etc.) to use the new public flat array
   (see the sketch below).
6. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
   structures into the internal header <ethdev_driver.h>.

This approach was selected to avoid, or at least minimize, possible
performance losses.
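
To give an idea of what points 1, 2 and 5 mean in practice, here is a rough
sketch (simplified, not the exact code from the patches; 'some_pmd_recv_pkts'
is just a made-up PMD function name):

/* public flat array of 'fast' function pointers, indexed by port_id
 * (the rte_eth_burst_api[] array introduced in patch 1) */
extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];

/* PMD burst functions now take <port_id, queue_id> instead of a queue pointer */
uint16_t some_pmd_recv_pkts(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);

/* the inline 'fast' API could then dispatch through the flat array,
 * roughly like this */
static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	return rte_eth_burst_api[port_id].rx_pkt_burst(port_id, queue_id,
			rx_pkts, nb_pkts);
}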
 
So far I have done only a limited amount of functional and performance testing.
I didn't spot any functional problems, and the performance numbers
remain the same before and after the patch on my box (testpmd, macswap fwd).

Remaining items:
================
- implement the required changes for all PMDs under drivers/net.
  So far I have changed only two drivers, and would definitely appreciate
  some help from other PMD maintainers. The required changes are mechanical,
  but we have a lot of drivers these days.
- <rte_bus_pci.h> references an rte_eth_dev field through the
  RTE_ETH_DEV_TO_PCI(eth_dev) macro.
  This macro needs to be moved into some internal header.
- Extra testing
- checkpatch warnings
- docs update

Konstantin Ananyev (7):
  eth: move ethdev 'burst' API into separate structure
  eth: make drivers to use new API for Rx
  eth: make drivers to use new API for Tx
  eth: make drivers to use new API for Tx prepare
  eth: make drivers to use new API to obtain descriptor status
  eth: make drivers to use new API for Rx queue count
  eth: hide eth dev related structures

 app/test-pmd/config.c                         |  23 +-
 app/test/virtual_pmd.c                        |  27 +-
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |  15 +-
 drivers/net/i40e/i40e_ethdev_vf.c             |  15 +-
 drivers/net/i40e/i40e_rxtx.c                  | 243 ++++---
 drivers/net/i40e/i40e_rxtx.h                  |  68 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c         |  11 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |  12 +-
 drivers/net/i40e/i40e_rxtx_vec_sse.c          |   8 +-
 drivers/net/i40e/i40e_vf_representor.c        |  10 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  10 +-
 drivers/net/ice/ice_dcf_vf_representor.c      |  10 +-
 drivers/net/ice/ice_ethdev.c                  |  15 +-
 drivers/net/ice/ice_rxtx.c                    | 236 ++++---
 drivers/net/ice/ice_rxtx.h                    |  73 +--
 drivers/net/ice/ice_rxtx_vec_avx2.c           |  24 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c         |  24 +-
 drivers/net/ice/ice_rxtx_vec_common.h         |   7 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            |  12 +-
 lib/ethdev/ethdev_driver.h                    | 601 ++++++++++++++++++
 lib/ethdev/ethdev_private.c                   |  74 +++
 lib/ethdev/ethdev_private.h                   |   3 +
 lib/ethdev/rte_ethdev.c                       | 176 ++++-
 lib/ethdev/rte_ethdev.h                       | 194 ++----
 lib/ethdev/rte_ethdev_core.h                  | 182 ++----
 lib/ethdev/version.map                        |  16 +
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 31 files changed, 1488 insertions(+), 611 deletions(-)

-- 
2.26.3



* [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure
  2021-08-20 16:28 [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
@ 2021-08-20 16:28 ` Konstantin Ananyev
  2021-08-20 16:28 ` [dpdk-dev] [RFC 2/7] eth: make drivers to use new API for Rx Konstantin Ananyev
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
	beilei.xing, techboard, Konstantin Ananyev

Move the public function pointers (rx_pkt_burst(), etc.) from rte_eth_dev
into a separate flat array. We keep it public so that inline functions
for the 'fast' calls (rte_eth_rx_burst(), etc.) can still be used to
avoid/minimize slowdown.
The intention is to make rte_eth_dev and related structures internal.
That should allow future changes to the core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
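
As a usage illustration only (a hypothetical snippet, not part of the patch):
once rte_eth_dev_burst_api_reset() has been applied to a port (e.g. on
close/reset), a burst call through the flat array lands in the dummy handlers
and fails gracefully instead of going through a stale function pointer:

	struct rte_mbuf *pkts[32];
	uint16_t nb;

	/* 'port_id' here refers to a closed or not yet configured port */
	nb = rte_eth_burst_api[port_id].rx_pkt_burst(port_id, 0, pkts, 32);
	/* nb == 0, rte_errno == ENOTSUP, and an error message is logged */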

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c  | 74 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_private.h  |  3 ++
 lib/ethdev/rte_ethdev.c      | 12 ++++++
 lib/ethdev/rte_ethdev_core.h | 33 ++++++++++++++++
 4 files changed, 122 insertions(+)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..1ab64d24cf 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,77 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
 		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
 	return str == NULL ? -1 : 0;
 }
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id,
+		__rte_unused struct rte_mbuf **rx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_LOG(ERR, EAL, "rx_pkt_burst for unconfigured port %u\n", port_id);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_LOG(ERR, EAL, "tx_pkt_burst for unconfigured port %u\n", port_id);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_prepare(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_LOG(ERR, EAL, "tx_pkt_prepare for unconfigured port %u\n", port_id);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static int
+dummy_eth_rx_queue_count(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id)
+{
+	RTE_LOG(ERR, EAL, "rx_queue_count for unconfigured port %u\n", port_id);
+	return -ENOTSUP;
+}
+
+static int
+dummy_eth_rx_descriptor_status(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id, __rte_unused uint16_t offset)
+{
+	RTE_LOG(ERR, EAL, "rx_descriptor_status for unconfigured port %u\n",
+		port_id);
+	return -ENOTSUP;
+}
+
+static int
+dummy_eth_tx_descriptor_status(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id, __rte_unused uint16_t offset)
+{
+	RTE_LOG(ERR, EAL, "tx_descriptor_status for unconfigured port %u\n",
+		port_id);
+	return -ENOTSUP;
+}
+
+void
+rte_eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
+{
+	static const struct rte_eth_burst_api dummy = {
+		.rx_pkt_burst = dummy_eth_rx_burst,
+		.tx_pkt_burst = dummy_eth_tx_burst,
+		.tx_pkt_prepare = dummy_eth_tx_prepare,
+		.rx_queue_count = dummy_eth_rx_queue_count,
+		.rx_descriptor_status = dummy_eth_rx_descriptor_status,
+		.tx_descriptor_status = dummy_eth_tx_descriptor_status,
+	};
+
+	*rba = dummy;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 9bb0879538..b9b0e6755a 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -30,6 +30,9 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
 /* Parse devargs value for representor parameter. */
 int rte_eth_devargs_parse_representor_ports(char *str, void *data);
 
+/* reset eth 'burst' API to dummy values */
+void rte_eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..949292a617 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
+/* public 'fast/burst' API */
+struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
+
 /* spinlock for eth device callbacks */
 static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -1336,6 +1339,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	int diag;
 	int ret;
 	uint16_t old_mtu;
+	struct rte_eth_burst_api rba;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -1363,6 +1367,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	dev->data->dev_configured = 0;
 
+	rba = rte_eth_burst_api[port_id];
+	rte_eth_dev_burst_api_reset(&rte_eth_burst_api[port_id]);
+
 	 /* Store original config, as rollback required on failure */
 	memcpy(&orig_conf, &dev->data->dev_conf, sizeof(dev->data->dev_conf));
 
@@ -1623,6 +1630,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	if (old_mtu != dev->data->mtu)
 		dev->data->mtu = old_mtu;
 
+	rte_eth_burst_api[port_id] = rba;
+
 	rte_ethdev_trace_configure(port_id, nb_rx_q, nb_tx_q, dev_conf, ret);
 	return ret;
 }
@@ -1871,6 +1880,7 @@ rte_eth_dev_close(uint16_t port_id)
 	dev = &rte_eth_devices[port_id];
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+	rte_eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
 	*lasterr = (*dev->dev_ops->dev_close)(dev);
 	if (*lasterr != 0)
 		lasterr = &binerr;
@@ -1892,6 +1902,8 @@ rte_eth_dev_reset(uint16_t port_id)
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);
 
+	rte_eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
+
 	ret = rte_eth_dev_stop(port_id);
 	if (ret != 0) {
 		RTE_ETHDEV_LOG(ERR,
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..fb8526cb9f 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -25,21 +25,31 @@ TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
 struct rte_eth_dev;
 
+typedef uint16_t (*rte_eth_rx_burst_t)(uint16_t port_id, uint16_t queue_id,
+			struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
 typedef uint16_t (*eth_rx_burst_t)(void *rxq,
 				   struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts);
 /**< @internal Retrieve input packets from a receive queue of an Ethernet device. */
 
+typedef uint16_t (*rte_eth_tx_burst_t)(uint16_t port_id, uint16_t queue_id,
+			struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
 typedef uint16_t (*eth_tx_burst_t)(void *txq,
 				   struct rte_mbuf **tx_pkts,
 				   uint16_t nb_pkts);
 /**< @internal Send output packets on a transmit queue of an Ethernet device. */
 
+typedef uint16_t (*rte_eth_tx_prep_t)(uint16_t port_id, uint16_t queue_id,
+			struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
 typedef uint16_t (*eth_tx_prep_t)(void *txq,
 				   struct rte_mbuf **tx_pkts,
 				   uint16_t nb_pkts);
 /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
 
+typedef int (*rte_eth_rx_queue_count_t)(uint16_t port_id, uint16_t queue_id);
 
 typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
 					 uint16_t rx_queue_id);
@@ -48,12 +58,35 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
 typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
 /**< @internal Check DD bit of specific RX descriptor */
 
+typedef int (*rte_eth_rx_descriptor_status_t)(uint16_t port_id,
+			uint16_t queue_id, uint16_t offset);
+
 typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 /**< @internal Check the status of a Rx descriptor */
 
+typedef int (*rte_eth_tx_descriptor_status_t)(uint16_t port_id,
+			uint16_t queue_id, uint16_t offset);
+
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 /**< @internal Check the status of a Tx descriptor */
 
+struct rte_eth_burst_api {
+	rte_eth_rx_burst_t rx_pkt_burst;
+	/**< PMD receive function. */
+	rte_eth_tx_burst_t tx_pkt_burst;
+	/**< PMD transmit function. */
+	rte_eth_tx_prep_t tx_pkt_prepare;
+	/**< PMD transmit prepare function. */
+	rte_eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	rte_eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	rte_eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+	uintptr_t reserved[2];
+} __rte_cache_min_aligned;
+
+extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
 
 /**
  * @internal
-- 
2.26.3



* [dpdk-dev] [RFC 2/7] eth: make drivers to use new API for Rx
  2021-08-20 16:28 [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
  2021-08-20 16:28 ` [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure Konstantin Ananyev
@ 2021-08-20 16:28 ` Konstantin Ananyev
  2021-09-06 18:41   ` Ferruh Yigit
  2021-08-20 16:28 ` [dpdk-dev] [RFC 3/7] eth: make drivers to use new API for Tx Konstantin Ananyev
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
	beilei.xing, techboard, Konstantin Ananyev

ethdev:
 - make changes so drivers can start using new API for rx_pkt_burst().
 - provide helper functions/macros.
 - remove rx_pkt_burst() from 'struct rte_eth_dev'.
drivers/net:
 - adjust to new rx_burst API.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test/virtual_pmd.c                   |  15 ++-
 drivers/net/i40e/i40e_ethdev.c           |   2 +-
 drivers/net/i40e/i40e_ethdev_vf.c        |   3 +-
 drivers/net/i40e/i40e_rxtx.c             | 161 ++++++++++++++++-------
 drivers/net/i40e/i40e_rxtx.h             |  36 +++--
 drivers/net/i40e/i40e_rxtx_vec_avx2.c    |   7 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c  |   8 +-
 drivers/net/i40e/i40e_rxtx_vec_sse.c     |   8 +-
 drivers/net/i40e/i40e_vf_representor.c   |   5 +-
 drivers/net/ice/ice_dcf_ethdev.c         |   5 +-
 drivers/net/ice/ice_dcf_vf_representor.c |   5 +-
 drivers/net/ice/ice_ethdev.c             |   2 +-
 drivers/net/ice/ice_rxtx.c               | 160 +++++++++++++++-------
 drivers/net/ice/ice_rxtx.h               |  44 +++----
 drivers/net/ice/ice_rxtx_vec_avx2.c      |  16 ++-
 drivers/net/ice/ice_rxtx_vec_avx512.c    |  16 ++-
 drivers/net/ice/ice_rxtx_vec_sse.c       |   8 +-
 lib/ethdev/ethdev_driver.h               | 120 +++++++++++++++++
 lib/ethdev/rte_ethdev.c                  |  23 +++-
 lib/ethdev/rte_ethdev.h                  |  39 +-----
 lib/ethdev/rte_ethdev_core.h             |   9 +-
 lib/ethdev/version.map                   |   5 +
 22 files changed, 483 insertions(+), 214 deletions(-)

diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7036f401ed..734ef32c97 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -348,6 +348,8 @@ virtual_ethdev_rx_burst_success(void *queue __rte_unused,
 	return rx_count;
 }
 
+static _RTE_ETH_RX_DEF(virtual_ethdev_rx_burst_success)
+
 static uint16_t
 virtual_ethdev_rx_burst_fail(void *queue __rte_unused,
 							 struct rte_mbuf **bufs __rte_unused,
@@ -356,6 +358,8 @@ virtual_ethdev_rx_burst_fail(void *queue __rte_unused,
 	return 0;
 }
 
+static _RTE_ETH_RX_DEF(virtual_ethdev_rx_burst_fail)
+
 static uint16_t
 virtual_ethdev_tx_burst_success(void *queue, struct rte_mbuf **bufs,
 		uint16_t nb_pkts)
@@ -425,12 +429,12 @@ virtual_ethdev_tx_burst_fail(void *queue, struct rte_mbuf **bufs,
 void
 virtual_ethdev_rx_burst_fn_set_success(uint16_t port_id, uint8_t success)
 {
-	struct rte_eth_dev *vrtl_eth_dev = &rte_eth_devices[port_id];
-
 	if (success)
-		vrtl_eth_dev->rx_pkt_burst = virtual_ethdev_rx_burst_success;
+		rte_eth_set_rx_burst(port_id,
+			_RTE_ETH_FUNC(virtual_ethdev_rx_burst_success));
 	else
-		vrtl_eth_dev->rx_pkt_burst = virtual_ethdev_rx_burst_fail;
+		rte_eth_set_rx_burst(port_id,
+			_RTE_ETH_FUNC(virtual_ethdev_rx_burst_fail));
 }
 
 
@@ -599,7 +603,8 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
 	pci_dev->device.driver = &pci_drv->driver;
 	eth_dev->device = &pci_dev->device;
 
-	eth_dev->rx_pkt_burst = virtual_ethdev_rx_burst_success;
+	rte_eth_set_rx_burst(eth_dev->data->port_id,
+			_RTE_ETH_FUNC(virtual_ethdev_rx_burst_success));
 	eth_dev->tx_pkt_burst = virtual_ethdev_tx_burst_success;
 
 	rte_eth_dev_probing_finish(eth_dev);
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..4753af126d 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1437,7 +1437,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 	dev->rx_descriptor_done = i40e_dev_rx_descriptor_done;
 	dev->rx_descriptor_status = i40e_dev_rx_descriptor_status;
 	dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
-	dev->rx_pkt_burst = i40e_recv_pkts;
+	rte_eth_set_rx_burst(dev->data->port_id, _RTE_ETH_FUNC(i40e_recv_pkts));
 	dev->tx_pkt_burst = i40e_xmit_pkts;
 	dev->tx_pkt_prepare = i40e_prep_pkts;
 
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0cfe13b7b2..e08e97276a 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1576,7 +1576,8 @@ i40evf_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->rx_descriptor_done   = i40e_dev_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = i40e_dev_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
-	eth_dev->rx_pkt_burst = &i40e_recv_pkts;
+	rte_eth_set_rx_burst(eth_dev->data->port_id,
+		_RTE_ETH_FUNC(i40e_recv_pkts));
 	eth_dev->tx_pkt_burst = &i40e_xmit_pkts;
 
 	/*
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 026cda948c..f2d0d35538 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -700,7 +700,9 @@ i40e_recv_pkts_bulk_alloc(void __rte_unused *rx_queue,
 }
 #endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
 
-uint16_t
+static _RTE_ETH_RX_DEF(i40e_recv_pkts_bulk_alloc)
+
+static uint16_t
 i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 {
 	struct i40e_rx_queue *rxq;
@@ -822,7 +824,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	return nb_rx;
 }
 
-uint16_t
+_RTE_ETH_RX_DEF(i40e_recv_pkts)
+
+static uint16_t
 i40e_recv_scattered_pkts(void *rx_queue,
 			 struct rte_mbuf **rx_pkts,
 			 uint16_t nb_pkts)
@@ -1000,6 +1004,8 @@ i40e_recv_scattered_pkts(void *rx_queue,
 	return nb_rx;
 }
 
+_RTE_ETH_RX_DEF(i40e_recv_scattered_pkts)
+
 /* Check if the context descriptor is needed for TX offloading */
 static inline uint16_t
 i40e_calc_context_desc(uint64_t flags)
@@ -1843,19 +1849,21 @@ i40e_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 		RTE_PTYPE_UNKNOWN
 	};
 
-	if (dev->rx_pkt_burst == i40e_recv_pkts ||
+	rte_eth_rx_burst_t rx_burst = rte_eth_get_rx_burst(dev->data->port_id);
+
+	if (rx_burst == _RTE_ETH_FUNC(i40e_recv_pkts) ||
 #ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
-	    dev->rx_pkt_burst == i40e_recv_pkts_bulk_alloc ||
+	    rx_burst == _RTE_ETH_FUNC(i40e_recv_pkts_bulk_alloc) ||
 #endif
-	    dev->rx_pkt_burst == i40e_recv_scattered_pkts ||
-	    dev->rx_pkt_burst == i40e_recv_scattered_pkts_vec ||
-	    dev->rx_pkt_burst == i40e_recv_pkts_vec ||
+	    rx_burst == _RTE_ETH_FUNC(i40e_recv_scattered_pkts) ||
+	    rx_burst == _RTE_ETH_FUNC(i40e_recv_scattered_pkts_vec) ||
+	    rx_burst == _RTE_ETH_FUNC(i40e_recv_pkts_vec) ||
 #ifdef CC_AVX512_SUPPORT
-	    dev->rx_pkt_burst == i40e_recv_scattered_pkts_vec_avx512 ||
-	    dev->rx_pkt_burst == i40e_recv_pkts_vec_avx512 ||
+	    rx_burst == _RTE_ETH_FUNC(i40e_recv_scattered_pkts_vec_avx512) ||
+	    rx_burst == _RTE_ETH_FUNC(i40e_recv_pkts_vec_avx512) ||
 #endif
-	    dev->rx_pkt_burst == i40e_recv_scattered_pkts_vec_avx2 ||
-	    dev->rx_pkt_burst == i40e_recv_pkts_vec_avx2)
+	    rx_burst == _RTE_ETH_FUNC(i40e_recv_scattered_pkts_vec_avx2) ||
+	    rx_burst == _RTE_ETH_FUNC(i40e_recv_pkts_vec_avx2))
 		return ptypes;
 	return NULL;
 }
@@ -3265,6 +3273,8 @@ i40e_set_rx_function(struct rte_eth_dev *dev)
 	struct i40e_adapter *ad =
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	uint16_t rx_using_sse, i;
+	rte_eth_rx_burst_t rx_burst;
+
 	/* In order to allow Vector Rx there are a few configuration
 	 * conditions to be met and Rx Bulk Allocation should be allowed.
 	 */
@@ -3309,17 +3319,22 @@ i40e_set_rx_function(struct rte_eth_dev *dev)
 				PMD_DRV_LOG(NOTICE,
 					"Using AVX512 Vector Scattered Rx (port %d).",
 					dev->data->port_id);
-				dev->rx_pkt_burst =
-					i40e_recv_scattered_pkts_vec_avx512;
+				rte_eth_set_rx_burst(dev->data->port_id,
+					_RTE_ETH_FUNC(
+					i40e_recv_scattered_pkts_vec_avx512));
+					
 #endif
 			} else {
 				PMD_INIT_LOG(DEBUG,
 					"Using %sVector Scattered Rx (port %d).",
 					ad->rx_use_avx2 ? "avx2 " : "",
 					dev->data->port_id);
-				dev->rx_pkt_burst = ad->rx_use_avx2 ?
-					i40e_recv_scattered_pkts_vec_avx2 :
-					i40e_recv_scattered_pkts_vec;
+				rte_eth_set_rx_burst(dev->data->port_id,
+					 ad->rx_use_avx2 ?
+					_RTE_ETH_FUNC(
+					 i40e_recv_scattered_pkts_vec_avx2) :
+					 _RTE_ETH_FUNC(
+						 i40e_recv_scattered_pkts_vec));
 			}
 		} else {
 			if (ad->rx_use_avx512) {
@@ -3327,17 +3342,19 @@ i40e_set_rx_function(struct rte_eth_dev *dev)
 				PMD_DRV_LOG(NOTICE,
 					"Using AVX512 Vector Rx (port %d).",
 					dev->data->port_id);
-				dev->rx_pkt_burst =
-					i40e_recv_pkts_vec_avx512;
+				rte_eth_set_rx_burst(dev->data->port_id,
+					_RTE_ETH_FUNC(
+						i40e_recv_pkts_vec_avx512));
 #endif
 			} else {
 				PMD_INIT_LOG(DEBUG,
 					"Using %sVector Rx (port %d).",
 					ad->rx_use_avx2 ? "avx2 " : "",
 					dev->data->port_id);
-				dev->rx_pkt_burst = ad->rx_use_avx2 ?
-					i40e_recv_pkts_vec_avx2 :
-					i40e_recv_pkts_vec;
+				rte_eth_set_rx_burst(dev->data->port_id,
+					ad->rx_use_avx2 ?
+					_RTE_ETH_FUNC(i40e_recv_pkts_vec_avx2) :
+					_RTE_ETH_FUNC(i40e_recv_pkts_vec));
 			}
 		}
 #else /* RTE_ARCH_X86 */
@@ -3345,11 +3362,13 @@ i40e_set_rx_function(struct rte_eth_dev *dev)
 			PMD_INIT_LOG(DEBUG,
 				     "Using Vector Scattered Rx (port %d).",
 				     dev->data->port_id);
-			dev->rx_pkt_burst = i40e_recv_scattered_pkts_vec;
+			rte_eth_set_rx_burst(dev->data->port_id,
+				_RTE_ETH_FUNC(i40e_recv_scattered_pkts_vec));
 		} else {
 			PMD_INIT_LOG(DEBUG, "Using Vector Rx (port %d).",
 				     dev->data->port_id);
-			dev->rx_pkt_burst = i40e_recv_pkts_vec;
+			rte_eth_set_rx_burst(dev->data->port_id,
+				_RTE_ETH_FUNC(i40e_recv_pkts_vec));
 		}
 #endif /* RTE_ARCH_X86 */
 	} else if (!dev->data->scattered_rx && ad->rx_bulk_alloc_allowed) {
@@ -3358,27 +3377,34 @@ i40e_set_rx_function(struct rte_eth_dev *dev)
 				    "will be used on port=%d.",
 			     dev->data->port_id);
 
-		dev->rx_pkt_burst = i40e_recv_pkts_bulk_alloc;
+		rte_eth_set_rx_burst(dev->data->port_id,
+			_RTE_ETH_FUNC(i40e_recv_pkts_bulk_alloc));
 	} else {
 		/* Simple Rx Path. */
 		PMD_INIT_LOG(DEBUG, "Simple Rx path will be used on port=%d.",
 			     dev->data->port_id);
-		dev->rx_pkt_burst = dev->data->scattered_rx ?
-					i40e_recv_scattered_pkts :
-					i40e_recv_pkts;
+		rte_eth_set_rx_burst(dev->data->port_id,
+			dev->data->scattered_rx ?
+			_RTE_ETH_FUNC(i40e_recv_scattered_pkts) :
+			_RTE_ETH_FUNC(i40e_recv_pkts));
 	}
 
+	rx_burst = rte_eth_get_rx_burst(dev->data->port_id);
+
 	/* Propagate information about RX function choice through all queues. */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		rx_using_sse =
-			(dev->rx_pkt_burst == i40e_recv_scattered_pkts_vec ||
-			 dev->rx_pkt_burst == i40e_recv_pkts_vec ||
+			(rx_burst == _RTE_ETH_FUNC(
+					i40e_recv_scattered_pkts_vec) ||
+			 rx_burst == _RTE_ETH_FUNC(i40e_recv_pkts_vec) ||
 #ifdef CC_AVX512_SUPPORT
-			 dev->rx_pkt_burst == i40e_recv_scattered_pkts_vec_avx512 ||
-			 dev->rx_pkt_burst == i40e_recv_pkts_vec_avx512 ||
+			 rx_burst == _RTE_ETH_FUNC(
+				 i40e_recv_scattered_pkts_vec_avx512) ||
+			 rx_burst == _RTE_ETH_FUNC(i40e_recv_pkts_vec_avx512) ||
 #endif
-			 dev->rx_pkt_burst == i40e_recv_scattered_pkts_vec_avx2 ||
-			 dev->rx_pkt_burst == i40e_recv_pkts_vec_avx2);
+			 rx_burst == _RTE_ETH_FUNC(
+				 	i40e_recv_scattered_pkts_vec_avx2) ||
+			 rx_burst == _RTE_ETH_FUNC(i40e_recv_pkts_vec_avx2));
 
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			struct i40e_rx_queue *rxq = dev->data->rx_queues[i];
@@ -3390,27 +3416,66 @@ i40e_set_rx_function(struct rte_eth_dev *dev)
 }
 
 static const struct {
-	eth_rx_burst_t pkt_burst;
+	rte_eth_rx_burst_t pkt_burst;
 	const char *info;
 } i40e_rx_burst_infos[] = {
-	{ i40e_recv_scattered_pkts,          "Scalar Scattered" },
-	{ i40e_recv_pkts_bulk_alloc,         "Scalar Bulk Alloc" },
-	{ i40e_recv_pkts,                    "Scalar" },
+	{
+		_RTE_ETH_FUNC(i40e_recv_scattered_pkts),
+		"Scalar Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(i40e_recv_pkts_bulk_alloc),
+		"Scalar Bulk Alloc",
+	},
+	{
+		_RTE_ETH_FUNC(i40e_recv_pkts),
+		"Scalar",
+	},
 #ifdef RTE_ARCH_X86
 #ifdef CC_AVX512_SUPPORT
-	{ i40e_recv_scattered_pkts_vec_avx512, "Vector AVX512 Scattered" },
-	{ i40e_recv_pkts_vec_avx512,           "Vector AVX512" },
+	{
+		_RTE_ETH_FUNC(i40e_recv_scattered_pkts_vec_avx512),
+		"Vector AVX512 Scattered"
+	},
+	{
+		_RTE_ETH_FUNC(i40e_recv_pkts_vec_avx512),
+		"Vector AVX512",
+	},
 #endif
-	{ i40e_recv_scattered_pkts_vec_avx2, "Vector AVX2 Scattered" },
-	{ i40e_recv_pkts_vec_avx2,           "Vector AVX2" },
-	{ i40e_recv_scattered_pkts_vec,      "Vector SSE Scattered" },
-	{ i40e_recv_pkts_vec,                "Vector SSE" },
+	{
+		_RTE_ETH_FUNC(i40e_recv_scattered_pkts_vec_avx2),
+		"Vector AVX2 Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(i40e_recv_pkts_vec_avx2),
+		"Vector AVX2",
+	},
+	{
+		_RTE_ETH_FUNC(i40e_recv_scattered_pkts_vec),
+		"Vector SSE Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(i40e_recv_pkts_vec),
+		"Vector SSE",
+	},
 #elif defined(RTE_ARCH_ARM64)
-	{ i40e_recv_scattered_pkts_vec,      "Vector Neon Scattered" },
-	{ i40e_recv_pkts_vec,                "Vector Neon" },
+	{
+		_RTE_ETH_FUNC(i40e_recv_scattered_pkts_vec),
+		"Vector Neon Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(i40e_recv_pkts_vec),
+		"Vector Neon",
+	},
 #elif defined(RTE_ARCH_PPC_64)
-	{ i40e_recv_scattered_pkts_vec,      "Vector AltiVec Scattered" },
-	{ i40e_recv_pkts_vec,                "Vector AltiVec" },
+	{
+		_RTE_ETH_FUNC(i40e_recv_scattered_pkts_vec),
+		"Vector AltiVec Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(i40e_recv_pkts_vec),
+		"Vector AltiVec",
+	},
 #endif
 };
 
@@ -3418,7 +3483,7 @@ int
 i40e_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
 		       struct rte_eth_burst_mode *mode)
 {
-	eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+	rte_eth_rx_burst_t pkt_burst = rte_eth_get_rx_burst(dev->data->port_id);
 	int ret = -EINVAL;
 	unsigned int i;
 
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e8..beeeaae78d 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -199,12 +199,10 @@ int i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			    const struct rte_eth_txconf *tx_conf);
 void i40e_dev_rx_queue_release(void *rxq);
 void i40e_dev_tx_queue_release(void *txq);
-uint16_t i40e_recv_pkts(void *rx_queue,
-			struct rte_mbuf **rx_pkts,
-			uint16_t nb_pkts);
-uint16_t i40e_recv_scattered_pkts(void *rx_queue,
-				  struct rte_mbuf **rx_pkts,
-				  uint16_t nb_pkts);
+
+_RTE_ETH_RX_PROTO(i40e_recv_pkts);
+_RTE_ETH_RX_PROTO(i40e_recv_scattered_pkts);
+
 uint16_t i40e_xmit_pkts(void *tx_queue,
 			struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
@@ -231,11 +229,9 @@ int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
-uint16_t i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			    uint16_t nb_pkts);
-uint16_t i40e_recv_scattered_pkts_vec(void *rx_queue,
-				      struct rte_mbuf **rx_pkts,
-				      uint16_t nb_pkts);
+_RTE_ETH_RX_PROTO(i40e_recv_pkts_vec);
+_RTE_ETH_RX_PROTO(i40e_recv_scattered_pkts_vec);
+
 int i40e_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int i40e_rxq_vec_setup(struct i40e_rx_queue *rxq);
 int i40e_txq_vec_setup(struct i40e_tx_queue *txq);
@@ -248,19 +244,17 @@ void i40e_set_tx_function_flag(struct rte_eth_dev *dev,
 void i40e_set_tx_function(struct rte_eth_dev *dev);
 void i40e_set_default_ptype_table(struct rte_eth_dev *dev);
 void i40e_set_default_pctype_table(struct rte_eth_dev *dev);
-uint16_t i40e_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
-	uint16_t nb_pkts);
-uint16_t i40e_recv_scattered_pkts_vec_avx2(void *rx_queue,
-	struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+_RTE_ETH_RX_PROTO(i40e_recv_pkts_vec_avx2);
+_RTE_ETH_RX_PROTO(i40e_recv_scattered_pkts_vec_avx2);
+
 uint16_t i40e_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_pkts);
 int i40e_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
-uint16_t i40e_recv_pkts_vec_avx512(void *rx_queue,
-				   struct rte_mbuf **rx_pkts,
-				   uint16_t nb_pkts);
-uint16_t i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
-					     struct rte_mbuf **rx_pkts,
-					     uint16_t nb_pkts);
+
+_RTE_ETH_RX_PROTO(i40e_recv_pkts_vec_avx512);
+_RTE_ETH_RX_PROTO(i40e_recv_scattered_pkts_vec_avx512);
+
 uint16_t i40e_xmit_pkts_vec_avx512(void *tx_queue,
 				   struct rte_mbuf **tx_pkts,
 				   uint16_t nb_pkts);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 3b9eef91a9..5c03d16644 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -628,13 +628,15 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
  */
-uint16_t
+static inline uint16_t
 i40e_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
 		   uint16_t nb_pkts)
 {
 	return _recv_raw_pkts_vec_avx2(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
+_RTE_ETH_RX_DEF(i40e_recv_pkts_vec_avx2)
+
 /*
  * vPMD receive routine that reassembles single burst of 32 scattered packets
  * Notice:
@@ -682,7 +684,7 @@ i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
  */
-uint16_t
+static inline uint16_t
 i40e_recv_scattered_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
 			     uint16_t nb_pkts)
 {
@@ -699,6 +701,7 @@ i40e_recv_scattered_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
 				rx_pkts + retval, nb_pkts);
 }
 
+_RTE_ETH_RX_DEF(i40e_recv_scattered_pkts_vec_avx2)
 
 static inline void
 vtx1(volatile struct i40e_tx_desc *txdp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index bd21d64223..96ff3d60c3 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -802,13 +802,15 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
  */
-uint16_t
+static inline uint16_t
 i40e_recv_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 			  uint16_t nb_pkts)
 {
 	return _recv_raw_pkts_vec_avx512(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
+_RTE_ETH_RX_DEF(i40e_recv_pkts_vec_avx512)
+
 /**
  * vPMD receive routine that reassembles single burst of 32 scattered packets
  * Notice:
@@ -857,7 +859,7 @@ i40e_recv_scattered_burst_vec_avx512(void *rx_queue,
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
  */
-uint16_t
+static inline uint16_t
 i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
 				    struct rte_mbuf **rx_pkts,
 				    uint16_t nb_pkts)
@@ -876,6 +878,8 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue,
 				rx_pkts + retval, nb_pkts);
 }
 
+_RTE_ETH_RX_DEF(i40e_recv_scattered_pkts_vec_avx512)
+
 static __rte_always_inline int
 i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
 {
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index bfa5aff48d..24687984a7 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -598,13 +598,15 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
  * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
  *   numbers of DD bits
  */
-uint16_t
+static inline uint16_t
 i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 		   uint16_t nb_pkts)
 {
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
+_RTE_ETH_RX_DEF(i40e_recv_pkts_vec)
+
 /**
  * vPMD receive routine that reassembles single burst of 32 scattered packets
  *
@@ -651,7 +653,7 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 /**
  * vPMD receive routine that reassembles scattered packets.
  */
-uint16_t
+static inline uint16_t
 i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 			     uint16_t nb_pkts)
 {
@@ -674,6 +676,8 @@ i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 						      nb_pkts);
 }
 
+_RTE_ETH_RX_DEF(i40e_recv_scattered_pkts_vec)
+
 static inline void
 vtx1(volatile struct i40e_tx_desc *txdp,
 		struct rte_mbuf *pkt, uint64_t flags)
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..9d32a5c85d 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -466,6 +466,8 @@ i40e_vf_representor_rx_burst(__rte_unused void *rx_queue,
 	return 0;
 }
 
+static _RTE_ETH_RX_DEF(i40e_vf_representor_rx_burst)
+
 static uint16_t
 i40e_vf_representor_tx_burst(__rte_unused void *tx_queue,
 	__rte_unused struct rte_mbuf **tx_pkts, __rte_unused uint16_t nb_pkts)
@@ -501,7 +503,8 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 	/* No data-path, but need stub Rx/Tx functions to avoid crash
 	 * when testing with the likes of testpmd.
 	 */
-	ethdev->rx_pkt_burst = i40e_vf_representor_rx_burst;
+	rte_eth_set_rx_burst(ethdev->data->port_id,
+			_RTE_ETH_FUNC(i40e_vf_representor_rx_burst));
 	ethdev->tx_pkt_burst = i40e_vf_representor_tx_burst;
 
 	vf = &pf->vfs[representor->vf_id];
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..58a4204621 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -40,6 +40,8 @@ ice_dcf_recv_pkts(__rte_unused void *rx_queue,
 	return 0;
 }
 
+static _RTE_ETH_RX_DEF(ice_dcf_recv_pkts)
+
 static uint16_t
 ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 		  __rte_unused struct rte_mbuf **bufs,
@@ -1039,7 +1041,8 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
 	struct ice_dcf_adapter *adapter = eth_dev->data->dev_private;
 
 	eth_dev->dev_ops = &ice_dcf_eth_dev_ops;
-	eth_dev->rx_pkt_burst = ice_dcf_recv_pkts;
+	rte_eth_set_rx_burst(eth_dev->data->port_id,
+			_RTE_ETH_FUNC(ice_dcf_recv_pkts));
 	eth_dev->tx_pkt_burst = ice_dcf_xmit_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e9..8136169ebd 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -18,6 +18,8 @@ ice_dcf_vf_repr_rx_burst(__rte_unused void *rxq,
 	return 0;
 }
 
+static _RTE_ETH_RX_DEF(ice_dcf_vf_repr_rx_burst)
+
 static uint16_t
 ice_dcf_vf_repr_tx_burst(__rte_unused void *txq,
 			 __rte_unused struct rte_mbuf **tx_pkts,
@@ -413,7 +415,8 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
 
 	vf_rep_eth_dev->dev_ops = &ice_dcf_vf_repr_dev_ops;
 
-	vf_rep_eth_dev->rx_pkt_burst = ice_dcf_vf_repr_rx_burst;
+	rte_eth_set_rx_burst(vf_rep_eth_dev->data->port_id,
+			_RTE_ETH_FUNC(ice_dcf_vf_repr_rx_burst));
 	vf_rep_eth_dev->tx_pkt_burst = ice_dcf_vf_repr_tx_burst;
 
 	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..4d67a2dddf 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1996,7 +1996,7 @@ ice_dev_init(struct rte_eth_dev *dev)
 	dev->rx_queue_count = ice_rx_queue_count;
 	dev->rx_descriptor_status = ice_rx_descriptor_status;
 	dev->tx_descriptor_status = ice_tx_descriptor_status;
-	dev->rx_pkt_burst = ice_recv_pkts;
+	rte_eth_set_rx_burst(dev->data->port_id, _RTE_ETH_FUNC(ice_recv_pkts));
 	dev->tx_pkt_burst = ice_xmit_pkts;
 	dev->tx_pkt_prepare = ice_prep_pkts;
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047..2cc411d315 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1749,6 +1749,8 @@ ice_recv_pkts_bulk_alloc(void *rx_queue,
 	return nb_rx;
 }
 
+static _RTE_ETH_RX_DEF(ice_recv_pkts_bulk_alloc)
+
 static uint16_t
 ice_recv_scattered_pkts(void *rx_queue,
 			struct rte_mbuf **rx_pkts,
@@ -1917,12 +1919,15 @@ ice_recv_scattered_pkts(void *rx_queue,
 	return nb_rx;
 }
 
+static _RTE_ETH_RX_DEF(ice_recv_scattered_pkts)
+
 const uint32_t *
 ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
 	struct ice_adapter *ad =
 		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	const uint32_t *ptypes;
+	rte_eth_rx_burst_t rx_pkt_burst;
 
 	static const uint32_t ptypes_os[] = {
 		/* refers to ice_get_default_pkt_type() */
@@ -1988,24 +1993,28 @@ ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 	else
 		ptypes = ptypes_os;
 
-	if (dev->rx_pkt_burst == ice_recv_pkts ||
-	    dev->rx_pkt_burst == ice_recv_pkts_bulk_alloc ||
-	    dev->rx_pkt_burst == ice_recv_scattered_pkts)
+	rx_pkt_burst = rte_eth_get_rx_burst(dev->data->port_id);
+
+	if (rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_pkts) ||
+	    rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_pkts_bulk_alloc) ||
+	    rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_scattered_pkts))
 		return ptypes;
 
 #ifdef RTE_ARCH_X86
-	if (dev->rx_pkt_burst == ice_recv_pkts_vec ||
-	    dev->rx_pkt_burst == ice_recv_scattered_pkts_vec ||
+	if (rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_pkts_vec) ||
+	    rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_scattered_pkts_vec) ||
 #ifdef CC_AVX512_SUPPORT
-	    dev->rx_pkt_burst == ice_recv_pkts_vec_avx512 ||
-	    dev->rx_pkt_burst == ice_recv_pkts_vec_avx512_offload ||
-	    dev->rx_pkt_burst == ice_recv_scattered_pkts_vec_avx512 ||
-	    dev->rx_pkt_burst == ice_recv_scattered_pkts_vec_avx512_offload ||
+	    rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_pkts_vec_avx512) ||
+	    rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_pkts_vec_avx512_offload) ||
+	    rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_scattered_pkts_vec_avx512) ||
+	    rx_pkt_burst ==
+	    	_RTE_ETH_FUNC(ice_recv_scattered_pkts_vec_avx512_offload) ||
 #endif
-	    dev->rx_pkt_burst == ice_recv_pkts_vec_avx2 ||
-	    dev->rx_pkt_burst == ice_recv_pkts_vec_avx2_offload ||
-	    dev->rx_pkt_burst == ice_recv_scattered_pkts_vec_avx2 ||
-	    dev->rx_pkt_burst == ice_recv_scattered_pkts_vec_avx2_offload)
+	    rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_pkts_vec_avx2) ||
+	    rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_pkts_vec_avx2_offload) ||
+	    rx_pkt_burst == _RTE_ETH_FUNC(ice_recv_scattered_pkts_vec_avx2) ||
+	    rx_pkt_burst ==
+	    	_RTE_ETH_FUNC(ice_recv_scattered_pkts_vec_avx2_offload))
 		return ptypes;
 #endif
 
@@ -2216,7 +2225,7 @@ ice_fdir_setup_rx_resources(struct ice_pf *pf)
 	return ICE_SUCCESS;
 }
 
-uint16_t
+static uint16_t
 ice_recv_pkts(void *rx_queue,
 	      struct rte_mbuf **rx_pkts,
 	      uint16_t nb_pkts)
@@ -2313,6 +2322,8 @@ ice_recv_pkts(void *rx_queue,
 	return nb_rx;
 }
 
+_RTE_ETH_RX_DEF(ice_recv_pkts)
+
 static inline void
 ice_parse_tunneling_params(uint64_t ol_flags,
 			    union ice_tx_offload tx_offload,
@@ -3107,14 +3118,16 @@ ice_set_rx_function(struct rte_eth_dev *dev)
 					PMD_DRV_LOG(NOTICE,
 						"Using AVX512 OFFLOAD Vector Scattered Rx (port %d).",
 						dev->data->port_id);
-					dev->rx_pkt_burst =
-						ice_recv_scattered_pkts_vec_avx512_offload;
+					rte_eth_set_rx_burst(dev->data->port_id,
+						_RTE_ETH_FUNC(
+						ice_recv_scattered_pkts_vec_avx512_offload));
 				} else {
 					PMD_DRV_LOG(NOTICE,
 						"Using AVX512 Vector Scattered Rx (port %d).",
 						dev->data->port_id);
-					dev->rx_pkt_burst =
-						ice_recv_scattered_pkts_vec_avx512;
+					rte_eth_set_rx_burst(dev->data->port_id,
+						_RTE_ETH_FUNC(
+						ice_recv_scattered_pkts_vec_avx512));
 				}
 #endif
 			} else if (ad->rx_use_avx2) {
@@ -3122,20 +3135,23 @@ ice_set_rx_function(struct rte_eth_dev *dev)
 					PMD_DRV_LOG(NOTICE,
 						    "Using AVX2 OFFLOAD Vector Scattered Rx (port %d).",
 						    dev->data->port_id);
-					dev->rx_pkt_burst =
-						ice_recv_scattered_pkts_vec_avx2_offload;
+					rte_eth_set_rx_burst(dev->data->port_id,
+						_RTE_ETH_FUNC(
+						ice_recv_scattered_pkts_vec_avx2_offload));
 				} else {
 					PMD_DRV_LOG(NOTICE,
 						    "Using AVX2 Vector Scattered Rx (port %d).",
 						    dev->data->port_id);
-					dev->rx_pkt_burst =
-						ice_recv_scattered_pkts_vec_avx2;
+					rte_eth_set_rx_burst(dev->data->port_id,
+						_RTE_ETH_FUNC(
+						ice_recv_scattered_pkts_vec_avx2));
 				}
 			} else {
 				PMD_DRV_LOG(DEBUG,
 					"Using Vector Scattered Rx (port %d).",
 					dev->data->port_id);
-				dev->rx_pkt_burst = ice_recv_scattered_pkts_vec;
+				rte_eth_set_rx_burst(dev->data->port_id,
+					_RTE_ETH_FUNC(
+						ice_recv_scattered_pkts_vec));
 			}
 		} else {
 			if (ad->rx_use_avx512) {
@@ -3144,14 +3160,15 @@ ice_set_rx_function(struct rte_eth_dev *dev)
 					PMD_DRV_LOG(NOTICE,
 						"Using AVX512 OFFLOAD Vector Rx (port %d).",
 						dev->data->port_id);
-					dev->rx_pkt_burst =
-						ice_recv_pkts_vec_avx512_offload;
+					rte_eth_set_rx_burst(dev->data->port_id,
+						_RTE_ETH_FUNC(
+						ice_recv_pkts_vec_avx512_offload));
 				} else {
 					PMD_DRV_LOG(NOTICE,
 						"Using AVX512 Vector Rx (port %d).",
 						dev->data->port_id);
-					dev->rx_pkt_burst =
-						ice_recv_pkts_vec_avx512;
+					rte_eth_set_rx_burst(dev->data->port_id,
+						_RTE_ETH_FUNC(
+						ice_recv_pkts_vec_avx512));
 				}
 #endif
 			} else if (ad->rx_use_avx2) {
@@ -3159,20 +3176,21 @@ ice_set_rx_function(struct rte_eth_dev *dev)
 					PMD_DRV_LOG(NOTICE,
 						    "Using AVX2 OFFLOAD Vector Rx (port %d).",
 						    dev->data->port_id);
-					dev->rx_pkt_burst =
-						ice_recv_pkts_vec_avx2_offload;
+					rte_eth_set_rx_burst(dev->data->port_id,
+						_RTE_ETH_FUNC(
+						ice_recv_pkts_vec_avx2_offload));
 				} else {
 					PMD_DRV_LOG(NOTICE,
 						    "Using AVX2 Vector Rx (port %d).",
 						    dev->data->port_id);
-					dev->rx_pkt_burst =
-						ice_recv_pkts_vec_avx2;
+					rte_eth_set_rx_burst(dev->data->port_id,
+						_RTE_ETH_FUNC(
+						ice_recv_pkts_vec_avx2));
 				}
 			} else {
 				PMD_DRV_LOG(DEBUG,
 					"Using Vector Rx (port %d).",
 					dev->data->port_id);
-				dev->rx_pkt_burst = ice_recv_pkts_vec;
+				rte_eth_set_rx_burst(dev->data->port_id,
+					_RTE_ETH_FUNC(ice_recv_pkts_vec));
 			}
 		}
 		return;
@@ -3185,43 +3203,85 @@ ice_set_rx_function(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(DEBUG,
 			     "Using a Scattered function on port %d.",
 			     dev->data->port_id);
-		dev->rx_pkt_burst = ice_recv_scattered_pkts;
+		rte_eth_set_rx_burst(dev->data->port_id,
+			_RTE_ETH_FUNC(ice_recv_scattered_pkts));
 	} else if (ad->rx_bulk_alloc_allowed) {
 		PMD_INIT_LOG(DEBUG,
 			     "Rx Burst Bulk Alloc Preconditions are "
 			     "satisfied. Rx Burst Bulk Alloc function "
 			     "will be used on port %d.",
 			     dev->data->port_id);
-		dev->rx_pkt_burst = ice_recv_pkts_bulk_alloc;
+		rte_eth_set_rx_burst(dev->data->port_id,
+			_RTE_ETH_FUNC(ice_recv_pkts_bulk_alloc));
 	} else {
 		PMD_INIT_LOG(DEBUG,
 			     "Rx Burst Bulk Alloc Preconditions are not "
 			     "satisfied, Normal Rx will be used on port %d.",
 			     dev->data->port_id);
-		dev->rx_pkt_burst = ice_recv_pkts;
+		rte_eth_set_rx_burst(dev->data->port_id,
+			_RTE_ETH_FUNC(ice_recv_pkts));
 	}
 }
 
 static const struct {
-	eth_rx_burst_t pkt_burst;
+	rte_eth_rx_burst_t pkt_burst;
 	const char *info;
 } ice_rx_burst_infos[] = {
-	{ ice_recv_scattered_pkts,          "Scalar Scattered" },
-	{ ice_recv_pkts_bulk_alloc,         "Scalar Bulk Alloc" },
-	{ ice_recv_pkts,                    "Scalar" },
+	{
+		_RTE_ETH_FUNC(ice_recv_scattered_pkts),
+		"Scalar Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(ice_recv_pkts_bulk_alloc),
+		"Scalar Bulk Alloc",
+	},
+	{
+		_RTE_ETH_FUNC(ice_recv_pkts),
+		"Scalar",
+	},
 #ifdef RTE_ARCH_X86
 #ifdef CC_AVX512_SUPPORT
-	{ ice_recv_scattered_pkts_vec_avx512, "Vector AVX512 Scattered" },
-	{ ice_recv_scattered_pkts_vec_avx512_offload, "Offload Vector AVX512 Scattered" },
-	{ ice_recv_pkts_vec_avx512,           "Vector AVX512" },
-	{ ice_recv_pkts_vec_avx512_offload,   "Offload Vector AVX512" },
+	{
+		_RTE_ETH_FUNC(ice_recv_scattered_pkts_vec_avx512),
+		"Vector AVX512 Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(ice_recv_scattered_pkts_vec_avx512_offload),
+		"Offload Vector AVX512 Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(ice_recv_pkts_vec_avx512),
+		"Vector AVX512",
+	},
+	{
+		_RTE_ETH_FUNC(ice_recv_pkts_vec_avx512_offload),
+		"Offload Vector AVX512",
+	},
 #endif
-	{ ice_recv_scattered_pkts_vec_avx2, "Vector AVX2 Scattered" },
-	{ ice_recv_scattered_pkts_vec_avx2_offload, "Offload Vector AVX2 Scattered" },
-	{ ice_recv_pkts_vec_avx2,           "Vector AVX2" },
-	{ ice_recv_pkts_vec_avx2_offload,   "Offload Vector AVX2" },
-	{ ice_recv_scattered_pkts_vec,      "Vector SSE Scattered" },
-	{ ice_recv_pkts_vec,                "Vector SSE" },
+	{
+		_RTE_ETH_FUNC(ice_recv_scattered_pkts_vec_avx2),
+		"Vector AVX2 Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(ice_recv_scattered_pkts_vec_avx2_offload),
+		"Offload Vector AVX2 Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(ice_recv_pkts_vec_avx2),
+		"Vector AVX2",
+	},
+	{
+		_RTE_ETH_FUNC(ice_recv_pkts_vec_avx2_offload),
+		"Offload Vector AVX2",
+	},
+	{
+		_RTE_ETH_FUNC(ice_recv_scattered_pkts_vec),
+		"Vector SSE Scattered",
+	},
+	{
+		_RTE_ETH_FUNC(ice_recv_pkts_vec),
+		"Vector SSE",
+	},
 #endif
 };
 
@@ -3229,7 +3289,7 @@ int
 ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
 		      struct rte_eth_burst_mode *mode)
 {
-	eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+	rte_eth_rx_burst_t pkt_burst = rte_eth_get_rx_burst(dev->data->port_id);
 	int ret = -EINVAL;
 	unsigned int i;
 
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index b10db0874d..be8d43a591 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -212,8 +212,7 @@ void ice_tx_queue_release(void *txq);
 void ice_free_queues(struct rte_eth_dev *dev);
 int ice_fdir_setup_tx_resources(struct ice_pf *pf);
 int ice_fdir_setup_rx_resources(struct ice_pf *pf);
-uint16_t ice_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		       uint16_t nb_pkts);
+_RTE_ETH_RX_PROTO(ice_recv_pkts);
 uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
 void ice_set_rx_function(struct rte_eth_dev *dev);
@@ -242,37 +241,28 @@ int ice_rx_vec_dev_check(struct rte_eth_dev *dev);
 int ice_tx_vec_dev_check(struct rte_eth_dev *dev);
 int ice_rxq_vec_setup(struct ice_rx_queue *rxq);
 int ice_txq_vec_setup(struct ice_tx_queue *txq);
-uint16_t ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			   uint16_t nb_pkts);
-uint16_t ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-				     uint16_t nb_pkts);
+
+_RTE_ETH_RX_PROTO(ice_recv_pkts_vec);
+_RTE_ETH_RX_PROTO(ice_recv_scattered_pkts_vec);
+
 uint16_t ice_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 			   uint16_t nb_pkts);
-uint16_t ice_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
-				uint16_t nb_pkts);
-uint16_t ice_recv_pkts_vec_avx2_offload(void *rx_queue, struct rte_mbuf **rx_pkts,
-					uint16_t nb_pkts);
-uint16_t ice_recv_scattered_pkts_vec_avx2(void *rx_queue,
-					  struct rte_mbuf **rx_pkts,
-					  uint16_t nb_pkts);
-uint16_t ice_recv_scattered_pkts_vec_avx2_offload(void *rx_queue,
-						  struct rte_mbuf **rx_pkts,
-						  uint16_t nb_pkts);
+
+_RTE_ETH_RX_PROTO(ice_recv_pkts_vec_avx2);
+_RTE_ETH_RX_PROTO(ice_recv_pkts_vec_avx2_offload);
+_RTE_ETH_RX_PROTO(ice_recv_scattered_pkts_vec_avx2);
+_RTE_ETH_RX_PROTO(ice_recv_scattered_pkts_vec_avx2_offload);
+
 uint16_t ice_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 				uint16_t nb_pkts);
 uint16_t ice_xmit_pkts_vec_avx2_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
 					uint16_t nb_pkts);
-uint16_t ice_recv_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
-				  uint16_t nb_pkts);
-uint16_t ice_recv_pkts_vec_avx512_offload(void *rx_queue,
-					  struct rte_mbuf **rx_pkts,
-					  uint16_t nb_pkts);
-uint16_t ice_recv_scattered_pkts_vec_avx512(void *rx_queue,
-					    struct rte_mbuf **rx_pkts,
-					    uint16_t nb_pkts);
-uint16_t ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
-						    struct rte_mbuf **rx_pkts,
-						    uint16_t nb_pkts);
+
+_RTE_ETH_RX_PROTO(ice_recv_pkts_vec_avx512);
+_RTE_ETH_RX_PROTO(ice_recv_pkts_vec_avx512_offload);
+_RTE_ETH_RX_PROTO(ice_recv_scattered_pkts_vec_avx512);
+_RTE_ETH_RX_PROTO(ice_recv_scattered_pkts_vec_avx512_offload);
+
 uint16_t ice_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				  uint16_t nb_pkts);
 uint16_t ice_xmit_pkts_vec_avx512_offload(void *tx_queue,
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 9725ac0180..29b9b57f9f 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -704,7 +704,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
  * Notice:
  * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
  */
-uint16_t
+static inline uint16_t
 ice_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
 		       uint16_t nb_pkts)
 {
@@ -712,7 +712,9 @@ ice_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
 					   nb_pkts, NULL, false);
 }
 
-uint16_t
+_RTE_ETH_RX_DEF(ice_recv_pkts_vec_avx2)
+
+static inline uint16_t
 ice_recv_pkts_vec_avx2_offload(void *rx_queue, struct rte_mbuf **rx_pkts,
 			       uint16_t nb_pkts)
 {
@@ -720,6 +722,8 @@ ice_recv_pkts_vec_avx2_offload(void *rx_queue, struct rte_mbuf **rx_pkts,
 					   nb_pkts, NULL, true);
 }
 
+_RTE_ETH_RX_DEF(ice_recv_pkts_vec_avx2_offload)
+
 /**
  * vPMD receive routine that reassembles single burst of 32 scattered packets
  * Notice:
@@ -787,7 +791,7 @@ ice_recv_scattered_pkts_vec_avx2_common(void *rx_queue,
 				rx_pkts + retval, nb_pkts, offload);
 }
 
-uint16_t
+static inline uint16_t
 ice_recv_scattered_pkts_vec_avx2(void *rx_queue,
 				 struct rte_mbuf **rx_pkts,
 				 uint16_t nb_pkts)
@@ -798,7 +802,9 @@ ice_recv_scattered_pkts_vec_avx2(void *rx_queue,
 						       false);
 }
 
-uint16_t
+_RTE_ETH_RX_DEF(ice_recv_scattered_pkts_vec_avx2)
+
+static inline uint16_t
 ice_recv_scattered_pkts_vec_avx2_offload(void *rx_queue,
 					 struct rte_mbuf **rx_pkts,
 					 uint16_t nb_pkts)
@@ -809,6 +815,8 @@ ice_recv_scattered_pkts_vec_avx2_offload(void *rx_queue,
 						       true);
 }
 
+_RTE_ETH_RX_DEF(ice_recv_scattered_pkts_vec_avx2_offload)
+
 static __rte_always_inline void
 ice_vtx1(volatile struct ice_tx_desc *txdp,
 	 struct rte_mbuf *pkt, uint64_t flags, bool offload)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bba9887d2..30c44c8918 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -819,18 +819,20 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
  * Notice:
  * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
  */
-uint16_t
+static inline uint16_t
 ice_recv_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 			 uint16_t nb_pkts)
 {
 	return _ice_recv_raw_pkts_vec_avx512(rx_queue, rx_pkts, nb_pkts, NULL, false);
 }
 
+_RTE_ETH_RX_DEF(ice_recv_pkts_vec_avx512)
+
 /**
  * Notice:
  * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
  */
-uint16_t
+static inline uint16_t
 ice_recv_pkts_vec_avx512_offload(void *rx_queue, struct rte_mbuf **rx_pkts,
 				 uint16_t nb_pkts)
 {
@@ -838,6 +840,8 @@ ice_recv_pkts_vec_avx512_offload(void *rx_queue, struct rte_mbuf **rx_pkts,
 					     nb_pkts, NULL, true);
 }
 
+_RTE_ETH_RX_DEF(ice_recv_pkts_vec_avx512_offload)
+
 /**
  * vPMD receive routine that reassembles single burst of 32 scattered packets
  * Notice:
@@ -927,7 +931,7 @@ ice_recv_scattered_burst_vec_avx512_offload(void *rx_queue,
  * Notice:
  * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
  */
-uint16_t
+static inline uint16_t
 ice_recv_scattered_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts)
 {
@@ -945,13 +949,15 @@ ice_recv_scattered_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 				rx_pkts + retval, nb_pkts);
 }
 
+_RTE_ETH_RX_DEF(ice_recv_scattered_pkts_vec_avx512)
+
 /**
  * vPMD receive routine that reassembles scattered packets.
  * Main receive routine that can handle arbitrary burst sizes
  * Notice:
  * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
  */
-uint16_t
+static inline uint16_t
 ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
 					   struct rte_mbuf **rx_pkts,
 					   uint16_t nb_pkts)
@@ -971,6 +977,8 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue,
 				rx_pkts + retval, nb_pkts);
 }
 
+_RTE_ETH_RX_DEF(ice_recv_scattered_pkts_vec_avx512_offload)
+
 static __rte_always_inline int
 ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
 {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 673e44a243..2caf1c6941 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -587,13 +587,15 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
  * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST
  *   numbers of DD bits
  */
-uint16_t
+static inline uint16_t
 ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 		  uint16_t nb_pkts)
 {
 	return _ice_recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
+_RTE_ETH_RX_DEF(ice_recv_pkts_vec)
+
 /**
  * vPMD receive routine that reassembles single burst of 32 scattered packets
  *
@@ -639,7 +641,7 @@ ice_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 /**
  * vPMD receive routine that reassembles scattered packets.
  */
-uint16_t
+static inline uint16_t
 ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 			    uint16_t nb_pkts)
 {
@@ -662,6 +664,8 @@ ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 						     nb_pkts);
 }
 
+_RTE_ETH_RX_DEF(ice_recv_scattered_pkts_vec)
+
 static inline void
 ice_vtx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf *pkt,
 	 uint64_t flags)
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 40e474aa7e..8b7d1e8840 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1513,6 +1513,126 @@ struct rte_eth_tunnel_filter_conf {
 	uint16_t queue_id;      /**< Queue assigned to if match. */
 };
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called as the first thing in the PMD's rte_eth_rx_burst
+ * implementation.
+ * Does the necessary checks and returns a pointer to the device RX queue.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ *
+ * @return
+ *  Pointer to device RX queue structure on success or NULL otherwise.
+ */
+__rte_internal
+static inline void *
+_rte_eth_rx_prolog(uint16_t port_id, uint16_t queue_id)
+{
+	struct rte_eth_dev *dev;
+
+	dev = &rte_eth_devices[port_id];
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
+
+	if (queue_id >= dev->data->nb_rx_queues) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+		return NULL;
+	}
+#endif
+	return dev->data->rx_queues[queue_id];
+}
+
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called on exit from the PMD's rte_eth_rx_burst implementation.
+ * Does the necessary post-processing: invokes RX callbacks, tracing, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *   The address of an array of pointers to *rte_mbuf* structures that
+ *   have been retrieved from the device.
+ * @param nb_rx
+ *   The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *   The number of elements in the *rx_pkts* array.
+ *
+ * @return
+ *  The number of packets effectively supplied to the *rx_pkts* array.
+ */
+__rte_internal
+static inline uint16_t
+_rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts)
+{
+	struct rte_eth_dev *dev;
+
+	dev = &rte_eth_devices[port_id];
+
+#ifdef RTE_ETHDEV_RXTX_CALLBACKS
+	struct rte_eth_rxtx_callback *cb;
+
+	/* __ATOMIC_RELEASE memory order was used when the
+	 * call back was inserted into the list.
+	 * Since there is a clear dependency between loading
+	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * not required.
+	 */
+	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
+				__ATOMIC_RELAXED);
+
+	if (unlikely(cb != NULL)) {
+		do {
+			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+			cb = cb->next;
+		} while (cb != NULL);
+	}
+#endif
+
+	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
+	return nb_rx;
+}
+
+#define _RTE_ETH_FUNC(fn)	_rte_eth_##fn
+
+/**
+ * @internal
+ * Helper macro to declare a new API wrapper for an existing PMD rx_burst function.
+ */
+#define _RTE_ETH_RX_PROTO(fn) \
+	uint16_t _RTE_ETH_FUNC(fn)(uint16_t port_id, uint16_t queue_id, \
+			struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+
+/**
+ * @internal
+ * Helper macro to define a new API wrapper for an existing PMD rx_burst function.
+ */
+#define _RTE_ETH_RX_DEF(fn) \
+_RTE_ETH_RX_PROTO(fn) \
+{ \
+	uint16_t nb_rx; \
+	void *rxq = _rte_eth_rx_prolog(port_id, queue_id); \
+	if (rxq == NULL) \
+		return 0; \
+	nb_rx = fn(rxq, rx_pkts, nb_pkts); \
+	return _rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx, nb_pkts); \
+}
+
+__rte_experimental
+rte_eth_rx_burst_t rte_eth_get_rx_burst(uint16_t port_id);
+
+__rte_experimental
+int rte_eth_set_rx_burst(uint16_t port_id, rte_eth_rx_burst_t rxf);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 949292a617..c126626281 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -588,7 +588,6 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	eth_dev->device = NULL;
 	eth_dev->process_private = NULL;
 	eth_dev->intr_handle = NULL;
-	eth_dev->rx_pkt_burst = NULL;
 	eth_dev->tx_pkt_burst = NULL;
 	eth_dev->tx_pkt_prepare = NULL;
 	eth_dev->rx_queue_count = NULL;
@@ -6337,3 +6336,25 @@ RTE_INIT(ethdev_init_telemetry)
 			eth_dev_handle_port_link_status,
 			"Returns the link status for a port. Parameters: int port_id");
 }
+
+__rte_experimental
+rte_eth_rx_burst_t
+rte_eth_get_rx_burst(uint16_t port_id)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	return rte_eth_burst_api[port_id].rx_pkt_burst;
+}
+
+__rte_experimental
+int
+rte_eth_set_rx_burst(uint16_t port_id, rte_eth_rx_burst_t rxf)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api))
+		return -EINVAL;
+
+	rte_eth_burst_api[port_id].rx_pkt_burst = rxf;
+	return 0;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351f..a155f255ad 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4981,44 +4981,11 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	uint16_t nb_rx;
-
-#ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
-
-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (port_id >= RTE_MAX_ETHPORTS)
 		return 0;
-	}
-#endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-				     rx_pkts, nb_pkts);
-
-#ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
-
-	/* __ATOMIC_RELEASE memory order was used when the
-	 * call back was inserted into the list.
-	 * Since there is a clear dependency between loading
-	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-	 * not required.
-	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-						nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
-#endif
 
-	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
-	return nb_rx;
+	return rte_eth_burst_api[port_id].rx_pkt_burst(port_id, queue_id,
+			rx_pkts, nb_pkts);
 }
 
 /**
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index fb8526cb9f..94ffa071e3 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -25,12 +25,14 @@ TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
 struct rte_eth_dev;
 
+/* !!! should be removed !!! */
+typedef uint16_t (*eth_rx_burst_t)(void *rxq,
+				struct rte_mbuf **rx_pkts,
+				uint16_t nb_pkts);
+
 typedef uint16_t (*rte_eth_rx_burst_t)(uint16_t port_id, uint16_t queue_id,
 			struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
-typedef uint16_t (*eth_rx_burst_t)(void *rxq,
-				   struct rte_mbuf **rx_pkts,
-				   uint16_t nb_pkts);
 /**< @internal Retrieve input packets from a receive queue of an Ethernet device. */
 
 typedef uint16_t (*rte_eth_tx_burst_t)(uint16_t port_id, uint16_t queue_id,
@@ -113,7 +115,6 @@ struct rte_eth_rxtx_callback {
  * process, while the actual configuration data for the device is shared.
  */
 struct rte_eth_dev {
-	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
 	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
 	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
 
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 3eece75b72..2698c75940 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -249,6 +249,11 @@ EXPERIMENTAL {
 	rte_mtr_meter_policy_delete;
 	rte_mtr_meter_policy_update;
 	rte_mtr_meter_policy_validate;
+
+	# added in 21.11
+	rte_eth_burst_api;
+	rte_eth_get_rx_burst;
+	rte_eth_set_rx_burst;
 };
 
 INTERNAL {
-- 
2.26.3



* [dpdk-dev] [RFC 3/7] eth: make drivers to use new API for Tx
  2021-08-20 16:28 [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
  2021-08-20 16:28 ` [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure Konstantin Ananyev
  2021-08-20 16:28 ` [dpdk-dev] [RFC 2/7] eth: make drivers to use new API for Rx Konstantin Ananyev
@ 2021-08-20 16:28 ` Konstantin Ananyev
  2021-08-20 16:28 ` [dpdk-dev] [RFC 4/7] eth: make drivers to use new API for Tx prepare Konstantin Ananyev
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
	beilei.xing, techboard, Konstantin Ananyev

ethdev:
 - make changes so drivers can start using the new API for tx_pkt_burst().
 - provide helper functions/macros.
 - remove tx_pkt_burst() from 'struct rte_eth_dev'.
drivers/net:
 - adjust to the new tx_burst API (see the illustrative sketch below).
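
For illustration, a minimal sketch of the conversion pattern, using
hypothetical mypmd_* names that are not part of this series; real drivers
follow the same steps as the i40e/ice changes below:

#include <ethdev_driver.h>

/* Existing queue-pointer based datapath stays unchanged (trivial body here,
 * assumed for the sketch only: pretends every packet was sent).
 */
static inline uint16_t
mypmd_xmit_pkts(__rte_unused void *tx_queue,
		__rte_unused struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	return nb_pkts;
}

/* Generates the port_id/queue_id based wrapper _rte_eth_mypmd_xmit_pkts(),
 * which runs _rte_eth_tx_prolog() and then calls mypmd_xmit_pkts() with the
 * resolved queue pointer.
 */
static _RTE_ETH_TX_DEF(mypmd_xmit_pkts)

void
mypmd_set_tx_function(struct rte_eth_dev *dev)
{
	/* was: dev->tx_pkt_burst = mypmd_xmit_pkts; */
	rte_eth_set_tx_burst(dev->data->port_id,
			_RTE_ETH_FUNC(mypmd_xmit_pkts));
}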

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test/virtual_pmd.c                   | 12 ++-
 drivers/net/i40e/i40e_ethdev.c           |  2 +-
 drivers/net/i40e/i40e_ethdev_vf.c        |  3 +-
 drivers/net/i40e/i40e_rxtx.c             | 56 ++++++++------
 drivers/net/i40e/i40e_rxtx.h             | 16 ++--
 drivers/net/i40e/i40e_rxtx_vec_avx2.c    |  4 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c  |  4 +-
 drivers/net/i40e/i40e_vf_representor.c   |  5 +-
 drivers/net/ice/ice_dcf_ethdev.c         |  5 +-
 drivers/net/ice/ice_dcf_vf_representor.c |  5 +-
 drivers/net/ice/ice_ethdev.c             |  2 +-
 drivers/net/ice/ice_rxtx.c               | 47 +++++++-----
 drivers/net/ice/ice_rxtx.h               | 20 ++---
 drivers/net/ice/ice_rxtx_vec_avx2.c      |  8 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c    |  8 +-
 drivers/net/ice/ice_rxtx_vec_common.h    |  7 +-
 drivers/net/ice/ice_rxtx_vec_sse.c       |  4 +-
 lib/ethdev/ethdev_driver.h               | 94 ++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.c                  | 23 +++++-
 lib/ethdev/rte_ethdev.h                  | 37 +---------
 lib/ethdev/rte_ethdev_core.h             |  1 -
 lib/ethdev/version.map                   |  2 +
 22 files changed, 247 insertions(+), 118 deletions(-)

diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 734ef32c97..940b2af1ab 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -390,6 +390,8 @@ virtual_ethdev_tx_burst_success(void *queue, struct rte_mbuf **bufs,
 	return nb_pkts;
 }
 
+static _RTE_ETH_TX_DEF(virtual_ethdev_tx_burst_success)
+
 static uint16_t
 virtual_ethdev_tx_burst_fail(void *queue, struct rte_mbuf **bufs,
 		uint16_t nb_pkts)
@@ -425,6 +427,7 @@ virtual_ethdev_tx_burst_fail(void *queue, struct rte_mbuf **bufs,
 	return 0;
 }
 
+static _RTE_ETH_TX_DEF(virtual_ethdev_tx_burst_fail)
 
 void
 virtual_ethdev_rx_burst_fn_set_success(uint16_t port_id, uint8_t success)
@@ -447,9 +450,11 @@ virtual_ethdev_tx_burst_fn_set_success(uint16_t port_id, uint8_t success)
 	dev_private = vrtl_eth_dev->data->dev_private;
 
 	if (success)
-		vrtl_eth_dev->tx_pkt_burst = virtual_ethdev_tx_burst_success;
+		rte_eth_set_tx_burst(port_id,
+			_RTE_ETH_FUNC(virtual_ethdev_tx_burst_success));
 	else
-		vrtl_eth_dev->tx_pkt_burst = virtual_ethdev_tx_burst_fail;
+		rte_eth_set_tx_burst(port_id,
+			_RTE_ETH_FUNC(virtual_ethdev_tx_burst_fail));
 
 	dev_private->tx_burst_fail_count = 0;
 }
@@ -605,7 +610,8 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
 
 	rte_eth_set_rx_burst(eth_dev->data->port_id,
 			_RTE_ETH_FUNC(virtual_ethdev_rx_burst_success));
-	eth_dev->tx_pkt_burst = virtual_ethdev_tx_burst_success;
+	rte_eth_set_tx_burst(eth_dev->data->port_id,
+			_RTE_ETH_FUNC(virtual_ethdev_tx_burst_success));
 
 	rte_eth_dev_probing_finish(eth_dev);
 
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 4753af126d..9eb9129ae9 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1438,7 +1438,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 	dev->rx_descriptor_status = i40e_dev_rx_descriptor_status;
 	dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
 	rte_eth_set_rx_burst(dev->data->port_id, _RTE_ETH_FUNC(i40e_recv_pkts));
-	dev->tx_pkt_burst = i40e_xmit_pkts;
+	rte_eth_set_tx_burst(dev->data->port_id, _RTE_ETH_FUNC(i40e_xmit_pkts));
 	dev->tx_pkt_prepare = i40e_prep_pkts;
 
 	/* for secondary processes, we don't initialise any further as primary
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index e08e97276a..3755bdb66a 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1578,7 +1578,8 @@ i40evf_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
 	rte_eth_set_rx_burst(eth_dev->data->port_id,
 		_RTE_ETH_FUNC(i40e_recv_pkts));
-	eth_dev->tx_pkt_burst = &i40e_xmit_pkts;
+	rte_eth_set_tx_burst(eth_dev->data->port_id,
+		_RTE_ETH_FUNC(i40e_xmit_pkts));
 
 	/*
 	 * For secondary processes, we don't initialise any further as primary
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index f2d0d35538..5a400435dd 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1067,7 +1067,7 @@ i40e_calc_pkt_desc(struct rte_mbuf *tx_pkt)
 	return count;
 }
 
-uint16_t
+static inline uint16_t
 i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct i40e_tx_queue *txq;
@@ -1315,6 +1315,8 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	return nb_tx;
 }
 
+_RTE_ETH_TX_DEF(i40e_xmit_pkts)
+
 static __rte_always_inline int
 i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 {
@@ -1509,6 +1511,8 @@ i40e_xmit_pkts_simple(void *tx_queue,
 	return nb_tx;
 }
 
+static _RTE_ETH_TX_DEF(i40e_xmit_pkts_simple)
+
 static uint16_t
 i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 		   uint16_t nb_pkts)
@@ -1531,6 +1535,8 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
+static _RTE_ETH_TX_DEF(i40e_xmit_pkts_vec)
+
 /*********************************************************************
  *
  *  TX simple prep functions
@@ -2608,7 +2614,7 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
 void
 i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 {
-	struct rte_eth_dev *dev;
+	rte_eth_tx_burst_t tx_pkt_burst;
 	uint16_t i;
 
 	if (!txq || !txq->sw_ring) {
@@ -2616,14 +2622,14 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 		return;
 	}
 
-	dev = &rte_eth_devices[txq->port_id];
+	tx_pkt_burst = rte_eth_get_tx_burst(txq->port_id);
 
 	/**
 	 *  vPMD tx will not set sw_ring's mbuf to NULL after free,
 	 *  so need to free remains more carefully.
 	 */
 #ifdef CC_AVX512_SUPPORT
-	if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) {
+	if (tx_pkt_burst == _RTE_ETH_FUNC(i40e_xmit_pkts_vec_avx512)) {
 		struct i40e_vec_tx_entry *swr = (void *)txq->sw_ring;
 
 		i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
@@ -2641,8 +2647,8 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
 		return;
 	}
 #endif
-	if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx2 ||
-			dev->tx_pkt_burst == i40e_xmit_pkts_vec) {
+	if (tx_pkt_burst == _RTE_ETH_FUNC(i40e_xmit_pkts_vec_avx2) ||
+			tx_pkt_burst == _RTE_ETH_FUNC(i40e_xmit_pkts_vec)) {
 		i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
 		if (txq->tx_tail < i) {
 			for (; i < txq->nb_tx_desc; i++) {
@@ -3564,49 +3570,55 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
 #ifdef CC_AVX512_SUPPORT
 				PMD_DRV_LOG(NOTICE, "Using AVX512 Vector Tx (port %d).",
 					    dev->data->port_id);
-				dev->tx_pkt_burst = i40e_xmit_pkts_vec_avx512;
+				rte_eth_set_tx_burst(dev->data->port_id,
+					_RTE_ETH_FUNC(
+						i40e_xmit_pkts_vec_avx512));
 #endif
 			} else {
 				PMD_INIT_LOG(DEBUG, "Using %sVector Tx (port %d).",
 					     ad->tx_use_avx2 ? "avx2 " : "",
 					     dev->data->port_id);
-				dev->tx_pkt_burst = ad->tx_use_avx2 ?
-						    i40e_xmit_pkts_vec_avx2 :
-						    i40e_xmit_pkts_vec;
+				rte_eth_set_tx_burst(dev->data->port_id,
+					ad->tx_use_avx2 ?
+					_RTE_ETH_FUNC(i40e_xmit_pkts_vec_avx2) :
+					_RTE_ETH_FUNC(i40e_xmit_pkts_vec));
 			}
 #else /* RTE_ARCH_X86 */
 			PMD_INIT_LOG(DEBUG, "Using Vector Tx (port %d).",
 				     dev->data->port_id);
-			dev->tx_pkt_burst = i40e_xmit_pkts_vec;
+			rte_eth_set_tx_burst(dev->data->port_id,
+				_RTE_ETH_FUNC(i40e_xmit_pkts_vec));
 #endif /* RTE_ARCH_X86 */
 		} else {
 			PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
-			dev->tx_pkt_burst = i40e_xmit_pkts_simple;
+			rte_eth_set_tx_burst(dev->data->port_id,
+				_RTE_ETH_FUNC(i40e_xmit_pkts_simple));
 		}
 		dev->tx_pkt_prepare = i40e_simple_prep_pkts;
 	} else {
 		PMD_INIT_LOG(DEBUG, "Xmit tx finally be used.");
-		dev->tx_pkt_burst = i40e_xmit_pkts;
+		rte_eth_set_tx_burst(dev->data->port_id,
+			_RTE_ETH_FUNC(i40e_xmit_pkts));
 		dev->tx_pkt_prepare = i40e_prep_pkts;
 	}
 }
 
 static const struct {
-	eth_tx_burst_t pkt_burst;
+	rte_eth_tx_burst_t pkt_burst;
 	const char *info;
 } i40e_tx_burst_infos[] = {
-	{ i40e_xmit_pkts_simple,   "Scalar Simple" },
-	{ i40e_xmit_pkts,          "Scalar" },
+	{ _RTE_ETH_FUNC(i40e_xmit_pkts_simple),   "Scalar Simple" },
+	{ _RTE_ETH_FUNC(i40e_xmit_pkts),          "Scalar" },
 #ifdef RTE_ARCH_X86
 #ifdef CC_AVX512_SUPPORT
-	{ i40e_xmit_pkts_vec_avx512, "Vector AVX512" },
+	{ _RTE_ETH_FUNC(i40e_xmit_pkts_vec_avx512), "Vector AVX512" },
 #endif
-	{ i40e_xmit_pkts_vec_avx2, "Vector AVX2" },
-	{ i40e_xmit_pkts_vec,      "Vector SSE" },
+	{ _RTE_ETH_FUNC(i40e_xmit_pkts_vec_avx2), "Vector AVX2" },
+	{ _RTE_ETH_FUNC(i40e_xmit_pkts_vec),      "Vector SSE" },
 #elif defined(RTE_ARCH_ARM64)
-	{ i40e_xmit_pkts_vec,      "Vector Neon" },
+	{ _RTE_ETH_FUNC(i40e_xmit_pkts_vec),      "Vector Neon" },
 #elif defined(RTE_ARCH_PPC_64)
-	{ i40e_xmit_pkts_vec,      "Vector AltiVec" },
+	{ _RTE_ETH_FUNC(i40e_xmit_pkts_vec),      "Vector AltiVec" },
 #endif
 };
 
@@ -3614,7 +3626,7 @@ int
 i40e_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
 		       struct rte_eth_burst_mode *mode)
 {
-	eth_tx_burst_t pkt_burst = dev->tx_pkt_burst;
+	rte_eth_tx_burst_t pkt_burst = rte_eth_get_tx_burst(dev->data->port_id);
 	int ret = -EINVAL;
 	unsigned int i;
 
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index beeeaae78d..c51d5db2f7 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -203,9 +203,7 @@ void i40e_dev_tx_queue_release(void *txq);
 _RTE_ETH_RX_PROTO(i40e_recv_pkts);
 _RTE_ETH_RX_PROTO(i40e_recv_scattered_pkts);
 
-uint16_t i40e_xmit_pkts(void *tx_queue,
-			struct rte_mbuf **tx_pkts,
-			uint16_t nb_pkts);
+_RTE_ETH_TX_PROTO(i40e_xmit_pkts);
 uint16_t i40e_simple_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			       uint16_t nb_pkts);
 uint16_t i40e_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -236,8 +234,10 @@ int i40e_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int i40e_rxq_vec_setup(struct i40e_rx_queue *rxq);
 int i40e_txq_vec_setup(struct i40e_tx_queue *txq);
 void i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq);
+
 uint16_t i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
-				   uint16_t nb_pkts);
+					uint16_t nb_pkts);
+
 void i40e_set_rx_function(struct rte_eth_dev *dev);
 void i40e_set_tx_function_flag(struct rte_eth_dev *dev,
 			       struct i40e_tx_queue *txq);
@@ -248,16 +248,14 @@ void i40e_set_default_pctype_table(struct rte_eth_dev *dev);
 _RTE_ETH_RX_PROTO(i40e_recv_pkts_vec_avx2);
 _RTE_ETH_RX_PROTO(i40e_recv_scattered_pkts_vec_avx2);
 
-uint16_t i40e_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
-	uint16_t nb_pkts);
+_RTE_ETH_TX_PROTO(i40e_xmit_pkts_vec_avx2);
+
 int i40e_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
 
 _RTE_ETH_RX_PROTO(i40e_recv_pkts_vec_avx512);
 _RTE_ETH_RX_PROTO(i40e_recv_scattered_pkts_vec_avx512);
 
-uint16_t i40e_xmit_pkts_vec_avx512(void *tx_queue,
-				   struct rte_mbuf **tx_pkts,
-				   uint16_t nb_pkts);
+_RTE_ETH_TX_PROTO(i40e_xmit_pkts_vec_avx512);
 
 /* For each value it means, datasheet of hardware can tell more details
  *
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 5c03d16644..f011088ad7 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -824,7 +824,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_pkts;
 }
 
-uint16_t
+static inline uint16_t
 i40e_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 		   uint16_t nb_pkts)
 {
@@ -845,3 +845,5 @@ i40e_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	return nb_tx;
 }
+
+_RTE_ETH_TX_DEF(i40e_xmit_pkts_vec_avx2)
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index 96ff3d60c3..e37dc5a401 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -1120,7 +1120,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_pkts;
 }
 
-uint16_t
+static inline uint16_t
 i40e_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 			  uint16_t nb_pkts)
 {
@@ -1141,3 +1141,5 @@ i40e_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	return nb_tx;
 }
+
+_RTE_ETH_TX_DEF(i40e_xmit_pkts_vec_avx512)
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 9d32a5c85d..f488ef51cd 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -475,6 +475,8 @@ i40e_vf_representor_tx_burst(__rte_unused void *tx_queue,
 	return 0;
 }
 
+static _RTE_ETH_TX_DEF(i40e_vf_representor_tx_burst)
+
 int
 i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 {
@@ -505,7 +507,8 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 	 */
 	rte_eth_set_rx_burst(ethdev->data->port_id,
 			_RTE_ETH_FUNC(i40e_vf_representor_rx_burst));
-	ethdev->tx_pkt_burst = i40e_vf_representor_tx_burst;
+	rte_eth_set_tx_burst(ethdev->data->port_id,
+			_RTE_ETH_FUNC(i40e_vf_representor_tx_burst));
 
 	vf = &pf->vfs[representor->vf_id];
 
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 58a4204621..f9a917a13f 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -50,6 +50,8 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 	return 0;
 }
 
+static _RTE_ETH_TX_DEF(ice_dcf_xmit_pkts)
+
 static int
 ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 {
@@ -1043,7 +1045,8 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->dev_ops = &ice_dcf_eth_dev_ops;
 	rte_eth_set_rx_burst(eth_dev->data->port_id,
 			_RTE_ETH_FUNC(ice_dcf_recv_pkts));
-	eth_dev->tx_pkt_burst = ice_dcf_xmit_pkts;
+	rte_eth_set_tx_burst(eth_dev->data->port_id,
+			_RTE_ETH_FUNC(ice_dcf_xmit_pkts));
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 8136169ebd..8b46c9614a 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -28,6 +28,8 @@ ice_dcf_vf_repr_tx_burst(__rte_unused void *txq,
 	return 0;
 }
 
+static _RTE_ETH_TX_DEF(ice_dcf_vf_repr_tx_burst)
+
 static int
 ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
 {
@@ -417,7 +419,8 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
 
 	rte_eth_set_rx_burst(vf_rep_eth_dev->data->port_id,
 			_RTE_ETH_FUNC(ice_dcf_vf_repr_rx_burst));
-	vf_rep_eth_dev->tx_pkt_burst = ice_dcf_vf_repr_tx_burst;
+	rte_eth_set_tx_burst(vf_rep_eth_dev->data->port_id,
+			_RTE_ETH_FUNC(ice_dcf_vf_repr_tx_burst));
 
 	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	vf_rep_eth_dev->data->representor_id = repr->vf_id;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 4d67a2dddf..9558455f7f 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1997,7 +1997,7 @@ ice_dev_init(struct rte_eth_dev *dev)
 	dev->rx_descriptor_status = ice_rx_descriptor_status;
 	dev->tx_descriptor_status = ice_tx_descriptor_status;
 	rte_eth_set_rx_burst(dev->data->port_id, _RTE_ETH_FUNC(ice_recv_pkts));
-	dev->tx_pkt_burst = ice_xmit_pkts;
+	rte_eth_set_tx_burst(dev->data->port_id, _RTE_ETH_FUNC(ice_xmit_pkts));
 	dev->tx_pkt_prepare = ice_prep_pkts;
 
 	/* for secondary processes, we don't initialise any further as primary
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 2cc411d315..e97564fdd6 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2558,7 +2558,7 @@ ice_calc_pkt_desc(struct rte_mbuf *tx_pkt)
 	return count;
 }
 
-uint16_t
+static inline uint16_t
 ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct ice_tx_queue *txq;
@@ -2775,6 +2775,8 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	return nb_tx;
 }
 
+_RTE_ETH_TX_DEF(ice_xmit_pkts)
+
 static __rte_always_inline int
 ice_tx_free_bufs(struct ice_tx_queue *txq)
 {
@@ -3064,6 +3066,8 @@ ice_xmit_pkts_simple(void *tx_queue,
 	return nb_tx;
 }
 
+static _RTE_ETH_TX_DEF(ice_xmit_pkts_simple)
+
 void __rte_cold
 ice_set_rx_function(struct rte_eth_dev *dev)
 {
@@ -3433,14 +3437,15 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 				PMD_DRV_LOG(NOTICE,
 					    "Using AVX512 OFFLOAD Vector Tx (port %d).",
 					    dev->data->port_id);
-				dev->tx_pkt_burst =
-					ice_xmit_pkts_vec_avx512_offload;
+				rte_eth_set_tx_burst(dev->data->port_id,
+					_RTE_ETH_FUNC(ice_xmit_pkts_vec_avx512_offload));
 				dev->tx_pkt_prepare = ice_prep_pkts;
 			} else {
 				PMD_DRV_LOG(NOTICE,
 					    "Using AVX512 Vector Tx (port %d).",
 					    dev->data->port_id);
-				dev->tx_pkt_burst = ice_xmit_pkts_vec_avx512;
+				rte_eth_set_tx_burst(dev->data->port_id,
+					_RTE_ETH_FUNC(ice_xmit_pkts_vec_avx512));
 			}
 #endif
 		} else {
@@ -3448,16 +3453,17 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 				PMD_DRV_LOG(NOTICE,
 					    "Using AVX2 OFFLOAD Vector Tx (port %d).",
 					    dev->data->port_id);
-				dev->tx_pkt_burst =
-					ice_xmit_pkts_vec_avx2_offload;
+				rte_eth_set_tx_burst(dev->data->port_id,
+					_RTE_ETH_FUNC(ice_xmit_pkts_vec_avx2_offload));
 				dev->tx_pkt_prepare = ice_prep_pkts;
 			} else {
 				PMD_DRV_LOG(DEBUG, "Using %sVector Tx (port %d).",
 					    ad->tx_use_avx2 ? "avx2 " : "",
 					    dev->data->port_id);
-				dev->tx_pkt_burst = ad->tx_use_avx2 ?
-						    ice_xmit_pkts_vec_avx2 :
-						    ice_xmit_pkts_vec;
+				rte_eth_set_tx_burst(dev->data->port_id,
+					ad->tx_use_avx2 ?
+					_RTE_ETH_FUNC(ice_xmit_pkts_vec_avx2) :
+					_RTE_ETH_FUNC(ice_xmit_pkts_vec));
 			}
 		}
 
@@ -3467,28 +3473,31 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 
 	if (ad->tx_simple_allowed) {
 		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
-		dev->tx_pkt_burst = ice_xmit_pkts_simple;
+		rte_eth_set_tx_burst(dev->data->port_id,
+			_RTE_ETH_FUNC(ice_xmit_pkts_simple));
 		dev->tx_pkt_prepare = NULL;
 	} else {
 		PMD_INIT_LOG(DEBUG, "Normal tx finally be used.");
-		dev->tx_pkt_burst = ice_xmit_pkts;
+		rte_eth_set_tx_burst(dev->data->port_id,
+			_RTE_ETH_FUNC(ice_xmit_pkts));
 		dev->tx_pkt_prepare = ice_prep_pkts;
 	}
 }
 
 static const struct {
-	eth_tx_burst_t pkt_burst;
+	rte_eth_tx_burst_t pkt_burst;
 	const char *info;
 } ice_tx_burst_infos[] = {
-	{ ice_xmit_pkts_simple,   "Scalar Simple" },
-	{ ice_xmit_pkts,          "Scalar" },
+	{ _RTE_ETH_FUNC(ice_xmit_pkts_simple),   "Scalar Simple" },
+	{ _RTE_ETH_FUNC(ice_xmit_pkts),          "Scalar" },
 #ifdef RTE_ARCH_X86
 #ifdef CC_AVX512_SUPPORT
-	{ ice_xmit_pkts_vec_avx512, "Vector AVX512" },
-	{ ice_xmit_pkts_vec_avx512_offload, "Offload Vector AVX512" },
+	{ _RTE_ETH_FUNC(ice_xmit_pkts_vec_avx512), "Vector AVX512" },
+	{ _RTE_ETH_FUNC(ice_xmit_pkts_vec_avx512_offload),
+		"Offload Vector AVX512" },
 #endif
-	{ ice_xmit_pkts_vec_avx2, "Vector AVX2" },
-	{ ice_xmit_pkts_vec,      "Vector SSE" },
+	{ _RTE_ETH_FUNC(ice_xmit_pkts_vec_avx2), "Vector AVX2" },
+	{ _RTE_ETH_FUNC(ice_xmit_pkts_vec),      "Vector SSE" },
 #endif
 };
 
@@ -3496,7 +3505,7 @@ int
 ice_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
 		      struct rte_eth_burst_mode *mode)
 {
-	eth_tx_burst_t pkt_burst = dev->tx_pkt_burst;
+	rte_eth_tx_burst_t pkt_burst = rte_eth_get_tx_burst(dev->data->port_id);
 	int ret = -EINVAL;
 	unsigned int i;
 
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index be8d43a591..3c06406204 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -213,8 +213,7 @@ void ice_free_queues(struct rte_eth_dev *dev);
 int ice_fdir_setup_tx_resources(struct ice_pf *pf);
 int ice_fdir_setup_rx_resources(struct ice_pf *pf);
 _RTE_ETH_RX_PROTO(ice_recv_pkts);
-uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		       uint16_t nb_pkts);
+_RTE_ETH_TX_PROTO(ice_xmit_pkts);
 void ice_set_rx_function(struct rte_eth_dev *dev);
 uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts);
@@ -245,29 +244,24 @@ int ice_txq_vec_setup(struct ice_tx_queue *txq);
 _RTE_ETH_RX_PROTO(ice_recv_pkts_vec);
 _RTE_ETH_RX_PROTO(ice_recv_scattered_pkts_vec);
 
-uint16_t ice_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
-			   uint16_t nb_pkts);
+_RTE_ETH_TX_PROTO(ice_xmit_pkts_vec);
 
 _RTE_ETH_RX_PROTO(ice_recv_pkts_vec_avx2);
 _RTE_ETH_RX_PROTO(ice_recv_pkts_vec_avx2_offload);
 _RTE_ETH_RX_PROTO(ice_recv_scattered_pkts_vec_avx2);
 _RTE_ETH_RX_PROTO(ice_recv_scattered_pkts_vec_avx2_offload);
 
-uint16_t ice_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
-				uint16_t nb_pkts);
-uint16_t ice_xmit_pkts_vec_avx2_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
-					uint16_t nb_pkts);
+_RTE_ETH_TX_PROTO(ice_xmit_pkts_vec_avx2);
+_RTE_ETH_TX_PROTO(ice_xmit_pkts_vec_avx2_offload);
 
 _RTE_ETH_RX_PROTO(ice_recv_pkts_vec_avx512);
 _RTE_ETH_RX_PROTO(ice_recv_pkts_vec_avx512_offload);
 _RTE_ETH_RX_PROTO(ice_recv_scattered_pkts_vec_avx512);
 _RTE_ETH_RX_PROTO(ice_recv_scattered_pkts_vec_avx512_offload);
 
-uint16_t ice_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
-				  uint16_t nb_pkts);
-uint16_t ice_xmit_pkts_vec_avx512_offload(void *tx_queue,
-					  struct rte_mbuf **tx_pkts,
-					  uint16_t nb_pkts);
+_RTE_ETH_TX_PROTO(ice_xmit_pkts_vec_avx512);
+_RTE_ETH_TX_PROTO(ice_xmit_pkts_vec_avx512_offload);
+
 int ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc);
 int ice_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int ice_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 29b9b57f9f..a15a673767 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -985,16 +985,20 @@ ice_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
-uint16_t
+static inline uint16_t
 ice_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts)
 {
 	return ice_xmit_pkts_vec_avx2_common(tx_queue, tx_pkts, nb_pkts, false);
 }
 
-uint16_t
+_RTE_ETH_TX_DEF(ice_xmit_pkts_vec_avx2)
+
+static inline uint16_t
 ice_xmit_pkts_vec_avx2_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
 			       uint16_t nb_pkts)
 {
 	return ice_xmit_pkts_vec_avx2_common(tx_queue, tx_pkts, nb_pkts, true);
 }
+
+_RTE_ETH_TX_DEF(ice_xmit_pkts_vec_avx2_offload)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 30c44c8918..d2fdd64cf8 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -1235,7 +1235,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_pkts;
 }
 
-uint16_t
+static inline uint16_t
 ice_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 uint16_t nb_pkts)
 {
@@ -1257,7 +1257,9 @@ ice_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
-uint16_t
+_RTE_ETH_TX_DEF(ice_xmit_pkts_vec_avx512)
+
+static inline uint16_t
 ice_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
 				 uint16_t nb_pkts)
 {
@@ -1279,3 +1281,5 @@ ice_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	return nb_tx;
 }
+
+_RTE_ETH_TX_DEF(ice_xmit_pkts_vec_avx512_offload)
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 2d8ef7dc8a..f7604f960b 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -195,10 +195,11 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq)
 	i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
 
 #ifdef CC_AVX512_SUPPORT
-	struct rte_eth_dev *dev = &rte_eth_devices[txq->vsi->adapter->pf.dev_data->port_id];
+	rte_eth_tx_burst_t tx_pkt_burst =
+		rte_eth_get_tx_burst(txq->vsi->adapter->pf.dev_data->port_id);
 
-	if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 ||
-	    dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) {
+	if (tx_pkt_burst == _RTE_ETH_FUNC(ice_xmit_pkts_vec_avx512) ||
+	    tx_pkt_burst == _RTE_ETH_FUNC(ice_xmit_pkts_vec_avx512_offload)) {
 		struct ice_vec_tx_entry *swr = (void *)txq->sw_ring;
 
 		if (txq->tx_tail < i) {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 2caf1c6941..344bd11508 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -758,7 +758,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_pkts;
 }
 
-uint16_t
+static inline uint16_t
 ice_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 		  uint16_t nb_pkts)
 {
@@ -779,6 +779,8 @@ ice_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
+_RTE_ETH_TX_DEF(ice_xmit_pkts_vec)
+
 int __rte_cold
 ice_rxq_vec_setup(struct ice_rx_queue *rxq)
 {
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 8b7d1e8840..45d1160465 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1633,6 +1633,100 @@ rte_eth_rx_burst_t rte_eth_get_rx_burst(uint16_t port_id);
 __rte_experimental
 int rte_eth_set_rx_burst(uint16_t port_id, rte_eth_rx_burst_t rxf);
 
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called as the first thing in the PMD's rte_eth_tx_burst
+ * implementation.
+ * Does the necessary checks and pre-processing: invokes TX callbacks (if any),
+ * tracing, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the transmit queue.
+ * @param tx_pkts
+ *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *   which contain the output packets.
+ * @param nb_pkts
+ *   Pointer to the number of packets to transmit; TX callbacks may update it.
+ *
+ * @return
+ *  Pointer to device TX queue structure on success or NULL otherwise.
+ */
+__rte_internal
+static inline void *
+_rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **tx_pkts, uint16_t *nb_pkts)
+{
+	uint16_t n;
+	struct rte_eth_dev *dev;
+
+	n = *nb_pkts;
+	dev = &rte_eth_devices[port_id];
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
+
+	if (queue_id >= dev->data->nb_tx_queues) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+		return NULL;
+	}
+#endif
+
+#ifdef RTE_ETHDEV_RXTX_CALLBACKS
+	struct rte_eth_rxtx_callback *cb;
+
+	/* __ATOMIC_RELEASE memory order was used when the
+	 * call back was inserted into the list.
+	 * Since there is a clear dependency between loading
+	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * not required.
+	 */
+	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
+				__ATOMIC_RELAXED);
+
+	if (unlikely(cb != NULL)) {
+		do {
+			n = cb->fn.tx(port_id, queue_id, tx_pkts, n, cb->param);
+			cb = cb->next;
+		} while (cb != NULL);
+	}
+
+	*nb_pkts = n;
+#endif
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, n);
+	return dev->data->tx_queues[queue_id];
+}
+
+/**
+ * @internal
+ * Helper macro to declare a new API wrapper for an existing PMD tx_burst function.
+ */
+#define _RTE_ETH_TX_PROTO(fn) \
+	uint16_t _RTE_ETH_FUNC(fn)(uint16_t port_id, uint16_t queue_id, \
+			struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+
+/**
+ * @internal
+ * Helper macro to define a new API wrapper for an existing PMD tx_burst function.
+ */
+#define _RTE_ETH_TX_DEF(fn) \
+_RTE_ETH_TX_PROTO(fn) \
+{ \
+	void *txq = _rte_eth_tx_prolog(port_id, queue_id, tx_pkts, &nb_pkts); \
+	if (txq == NULL) \
+		return 0; \
+	return fn(txq, tx_pkts, nb_pkts); \
+}
+
+__rte_experimental
+rte_eth_tx_burst_t rte_eth_get_tx_burst(uint16_t port_id);
+
+__rte_experimental
+int rte_eth_set_tx_burst(uint16_t port_id, rte_eth_tx_burst_t txf);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index c126626281..1165e0bb32 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -588,7 +588,6 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	eth_dev->device = NULL;
 	eth_dev->process_private = NULL;
 	eth_dev->intr_handle = NULL;
-	eth_dev->tx_pkt_burst = NULL;
 	eth_dev->tx_pkt_prepare = NULL;
 	eth_dev->rx_queue_count = NULL;
 	eth_dev->rx_descriptor_done = NULL;
@@ -6358,3 +6357,25 @@ rte_eth_set_rx_burst(uint16_t port_id, rte_eth_rx_burst_t rxf)
 	rte_eth_burst_api[port_id].rx_pkt_burst = rxf;
 	return 0;
 }
+
+__rte_experimental
+rte_eth_tx_burst_t
+rte_eth_get_tx_burst(uint16_t port_id)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	return rte_eth_burst_api[port_id].tx_pkt_burst;
+}
+
+__rte_experimental
+int
+rte_eth_set_tx_burst(uint16_t port_id, rte_eth_tx_burst_t txf)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api))
+		return -EINVAL;
+
+	rte_eth_burst_api[port_id].tx_pkt_burst = txf;
+	return 0;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index a155f255ad..3eac61a289 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5226,42 +5226,11 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-
-#ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
-
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (port_id >= RTE_MAX_ETHPORTS)
 		return 0;
-	}
-#endif
-
-#ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
-	/* __ATOMIC_RELEASE memory order was used when the
-	 * call back was inserted into the list.
-	 * Since there is a clear dependency between loading
-	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-	 * not required.
-	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
-#endif
-
-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	return rte_eth_burst_api[port_id].tx_pkt_burst(port_id, queue_id,
+			tx_pkts, nb_pkts);
 }
 
 /**
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 94ffa071e3..ace77db1b6 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -115,7 +115,6 @@ struct rte_eth_rxtx_callback {
  * process, while the actual configuration data for the device is shared.
  */
 struct rte_eth_dev {
-	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
 	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
 
 	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 2698c75940..8f8a6b4a5a 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -253,7 +253,9 @@ EXPERIMENTAL {
 	# added in 21.11
 	rte_eth_burst_api;
 	rte_eth_get_rx_burst;
+	rte_eth_get_tx_burst;
 	rte_eth_set_rx_burst;
+	rte_eth_set_tx_burst;
 };
 
 INTERNAL {
-- 
2.26.3



* [dpdk-dev] [RFC 4/7] eth: make drivers to use new API for Tx prepare
  2021-08-20 16:28 [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
                   ` (2 preceding siblings ...)
  2021-08-20 16:28 ` [dpdk-dev] [RFC 3/7] eth: make drivers to use new API for Tx Konstantin Ananyev
@ 2021-08-20 16:28 ` Konstantin Ananyev
  2021-08-20 16:28 ` [dpdk-dev] [RFC 5/7] eth: make drivers to use new API to obtain descriptor status Konstantin Ananyev
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
	beilei.xing, techboard, Konstantin Ananyev

ethdev:
 - make changes so drivers can start using the new API for tx_pkt_prepare().
 - provide helper functions/macros.
 - remove tx_pkt_prepare() from 'struct rte_eth_dev'.
drivers/net:
 - adjust to the new tx_prepare API (see the illustrative sketch below).
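
For illustration, a minimal sketch of the same conversion for tx_prepare,
using hypothetical mypmd_* names that are not part of this series; real
drivers follow the i40e/ice changes below:

#include <ethdev_driver.h>

/* Existing queue-pointer based prepare routine (trivial body, assumed for
 * the sketch only: reports every packet as ready to be transmitted).
 */
static uint16_t
mypmd_prep_pkts(__rte_unused void *tx_queue,
		__rte_unused struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	return nb_pkts;
}

/* Generates the port_id/queue_id based wrapper, which resolves the queue via
 * _rte_eth_tx_prep_prolog() and then calls mypmd_prep_pkts().
 */
_RTE_ETH_TX_PREP_DEF(mypmd_prep_pkts)

void
mypmd_set_tx_function(struct rte_eth_dev *dev)
{
	/* was: dev->tx_pkt_prepare = mypmd_prep_pkts; */
	rte_eth_set_tx_prep(dev->data->port_id,
			_RTE_ETH_FUNC(mypmd_prep_pkts));
}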

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c |  2 +-
 drivers/net/i40e/i40e_rxtx.c   | 14 +++++---
 drivers/net/i40e/i40e_rxtx.h   |  7 ++--
 drivers/net/ice/ice_ethdev.c   |  2 +-
 drivers/net/ice/ice_rxtx.c     | 17 ++++++----
 drivers/net/ice/ice_rxtx.h     |  3 +-
 lib/ethdev/ethdev_driver.h     | 62 ++++++++++++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.c        | 23 ++++++++++++-
 lib/ethdev/rte_ethdev.h        | 23 ++-----------
 lib/ethdev/rte_ethdev_core.h   |  2 --
 lib/ethdev/version.map         |  2 ++
 11 files changed, 116 insertions(+), 41 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9eb9129ae9..bd6408da90 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1439,7 +1439,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 	dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
 	rte_eth_set_rx_burst(dev->data->port_id, _RTE_ETH_FUNC(i40e_recv_pkts));
 	rte_eth_set_tx_burst(dev->data->port_id, _RTE_ETH_FUNC(i40e_xmit_pkts));
-	dev->tx_pkt_prepare = i40e_prep_pkts;
+	rte_eth_set_tx_prep(dev->data->port_id, _RTE_ETH_FUNC(i40e_prep_pkts));
 
 	/* for secondary processes, we don't initialise any further as primary
 	 * has already done this work. Only check we don't need a different
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 5a400435dd..44c4d33879 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1542,7 +1542,7 @@ static _RTE_ETH_TX_DEF(i40e_xmit_pkts_vec)
  *  TX simple prep functions
  *
  **********************************************************************/
-uint16_t
+static uint16_t
 i40e_simple_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		      uint16_t nb_pkts)
 {
@@ -1574,12 +1574,14 @@ i40e_simple_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	return i;
 }
 
+_RTE_ETH_TX_PREP_DEF(i40e_simple_prep_pkts)
+
 /*********************************************************************
  *
  *  TX prep functions
  *
  **********************************************************************/
-uint16_t
+static uint16_t
 i40e_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts)
 {
@@ -1636,6 +1638,8 @@ i40e_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	return i;
 }
 
+_RTE_ETH_TX_PREP_DEF(i40e_prep_pkts)
+
 /*
  * Find the VSI the queue belongs to. 'queue_idx' is the queue index
  * application used, which assume having sequential ones. But from driver's
@@ -3594,12 +3598,14 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
 			rte_eth_set_tx_burst(dev->data->port_id,
 				_RTE_ETH_FUNC(i40e_xmit_pkts_simple));
 		}
-		dev->tx_pkt_prepare = i40e_simple_prep_pkts;
+		rte_eth_set_tx_prep(dev->data->port_id,
+			_RTE_ETH_FUNC(i40e_simple_prep_pkts));
 	} else {
 		PMD_INIT_LOG(DEBUG, "Xmit tx finally be used.");
 		rte_eth_set_tx_burst(dev->data->port_id,
 			_RTE_ETH_FUNC(i40e_xmit_pkts));
-		dev->tx_pkt_prepare = i40e_prep_pkts;
+		rte_eth_set_tx_prep(dev->data->port_id,
+			_RTE_ETH_FUNC(i40e_prep_pkts));
 	}
 }
 
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index c51d5db2f7..85bc29b23a 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -204,10 +204,9 @@ _RTE_ETH_RX_PROTO(i40e_recv_pkts);
 _RTE_ETH_RX_PROTO(i40e_recv_scattered_pkts);
 
 _RTE_ETH_TX_PROTO(i40e_xmit_pkts);
-uint16_t i40e_simple_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			       uint16_t nb_pkts);
-uint16_t i40e_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		uint16_t nb_pkts);
+_RTE_ETH_TX_PROTO(i40e_simple_prep_pkts);
+_RTE_ETH_TX_PROTO(i40e_prep_pkts);
+
 int i40e_tx_queue_init(struct i40e_tx_queue *txq);
 int i40e_rx_queue_init(struct i40e_rx_queue *rxq);
 void i40e_free_tx_resources(struct i40e_tx_queue *txq);
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 9558455f7f..42b6f5928d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1998,7 +1998,7 @@ ice_dev_init(struct rte_eth_dev *dev)
 	dev->tx_descriptor_status = ice_tx_descriptor_status;
 	rte_eth_set_rx_burst(dev->data->port_id, _RTE_ETH_FUNC(ice_recv_pkts));
 	rte_eth_set_tx_burst(dev->data->port_id, _RTE_ETH_FUNC(ice_xmit_pkts));
-	dev->tx_pkt_prepare = ice_prep_pkts;
+	rte_eth_set_tx_prep(dev->data->port_id, _RTE_ETH_FUNC(ice_prep_pkts));
 
 	/* for secondary processes, we don't initialise any further as primary
 	 * has already done this work.
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index e97564fdd6..2ddcbbb721 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -3339,7 +3339,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
 #define ICE_MIN_TSO_MSS            64
 #define ICE_MAX_TSO_MSS            9728
 #define ICE_MAX_TSO_FRAME_SIZE     262144
-uint16_t
+static uint16_t
 ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	      uint16_t nb_pkts)
 {
@@ -3378,6 +3378,8 @@ ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	return i;
 }
 
+_RTE_ETH_TX_PREP_DEF(ice_prep_pkts)
+
 void __rte_cold
 ice_set_tx_function(struct rte_eth_dev *dev)
 {
@@ -3430,7 +3432,7 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 	}
 
 	if (ad->tx_vec_allowed) {
-		dev->tx_pkt_prepare = NULL;
+		rte_eth_set_tx_prep(dev->data->port_id, NULL);
 		if (ad->tx_use_avx512) {
 #ifdef CC_AVX512_SUPPORT
 			if (tx_check_ret == ICE_VECTOR_OFFLOAD_PATH) {
@@ -3439,7 +3441,8 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 					    dev->data->port_id);
 				rte_eth_set_tx_burst(dev->data->port_id,
 					_RTE_ETH_FUNC(ice_xmit_pkts_vec_avx512_offload));
-				dev->tx_pkt_prepare = ice_prep_pkts;
+				rte_eth_set_tx_prep(dev->data->port_id,
+					_RTE_ETH_FUNC(ice_prep_pkts));
 			} else {
 				PMD_DRV_LOG(NOTICE,
 					    "Using AVX512 Vector Tx (port %d).",
@@ -3455,7 +3458,8 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 					    dev->data->port_id);
 				rte_eth_set_tx_burst(dev->data->port_id,
 					_RTE_ETH_FUNC(ice_xmit_pkts_vec_avx2_offload));
-				dev->tx_pkt_prepare = ice_prep_pkts;
+				rte_eth_set_tx_prep(dev->data->port_id,
+					_RTE_ETH_FUNC(ice_prep_pkts));
 			} else {
 				PMD_DRV_LOG(DEBUG, "Using %sVector Tx (port %d).",
 					    ad->tx_use_avx2 ? "avx2 " : "",
@@ -3475,12 +3479,13 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
 		rte_eth_set_tx_burst(dev->data->port_id,
 			_RTE_ETH_FUNC(ice_xmit_pkts_simple));
-		dev->tx_pkt_prepare = NULL;
+		rte_eth_set_tx_prep(dev->data->port_id, NULL);
 	} else {
 		PMD_INIT_LOG(DEBUG, "Normal tx finally be used.");
 		rte_eth_set_tx_burst(dev->data->port_id,
 			_RTE_ETH_FUNC(ice_xmit_pkts));
-		dev->tx_pkt_prepare = ice_prep_pkts;
+		rte_eth_set_tx_prep(dev->data->port_id,
+			_RTE_ETH_FUNC(ice_prep_pkts));
 	}
 }
 
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 3c06406204..53f6080cc9 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -215,8 +215,7 @@ int ice_fdir_setup_rx_resources(struct ice_pf *pf);
 _RTE_ETH_RX_PROTO(ice_recv_pkts);
 _RTE_ETH_TX_PROTO(ice_xmit_pkts);
 void ice_set_rx_function(struct rte_eth_dev *dev);
-uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
-		       uint16_t nb_pkts);
+_RTE_ETH_TX_PROTO(ice_prep_pkts);
 void ice_set_tx_function_flag(struct rte_eth_dev *dev,
 			      struct ice_tx_queue *txq);
 void ice_set_tx_function(struct rte_eth_dev *dev);
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 45d1160465..fe1b4fc349 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1727,6 +1727,68 @@ rte_eth_tx_burst_t rte_eth_get_tx_burst(uint16_t port_id);
 __rte_experimental
 int rte_eth_set_tx_burst(uint16_t port_id, rte_eth_tx_burst_t txf);
 
+/**
+ * @internal
+ * Helper routine for eth driver tx_prepare API.
+ * Should be called as the first thing in the PMD's rte_eth_tx_prepare
+ * implementation.
+ * Does the necessary checks and returns a pointer to the TX queue structure.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the transmit queue.
+ *
+ * @return
+ *  Pointer to device TX queue structure on success or NULL otherwise.
+ */
+__rte_internal
+static inline void *
+_rte_eth_tx_prep_prolog(uint16_t port_id, uint16_t queue_id)
+{
+	struct rte_eth_dev *dev;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return NULL;
+	}
+#endif
+
+	dev = &rte_eth_devices[port_id];
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (queue_id >= dev->data->nb_tx_queues) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+#endif
+
+	return dev->data->tx_queues[queue_id];
+}
+
+/**
+ * @internal
+ * Helper macro to define a new API wrapper for an existing PMD tx_prepare
+ * function.
+ */
+#define _RTE_ETH_TX_PREP_DEF(fn) \
+_RTE_ETH_TX_PROTO(fn) \
+{ \
+	void *txq = _rte_eth_tx_prep_prolog(port_id, queue_id); \
+	if (txq == NULL) \
+		return 0; \
+	return fn(txq, tx_pkts, nb_pkts); \
+}
+
+__rte_experimental
+rte_eth_tx_prep_t rte_eth_get_tx_prep(uint16_t port_id);
+
+__rte_experimental
+int rte_eth_set_tx_prep(uint16_t port_id, rte_eth_tx_prep_t txf);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1165e0bb32..6b1d9c5f83 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -588,7 +588,6 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	eth_dev->device = NULL;
 	eth_dev->process_private = NULL;
 	eth_dev->intr_handle = NULL;
-	eth_dev->tx_pkt_prepare = NULL;
 	eth_dev->rx_queue_count = NULL;
 	eth_dev->rx_descriptor_done = NULL;
 	eth_dev->rx_descriptor_status = NULL;
@@ -6379,3 +6378,25 @@ rte_eth_set_tx_burst(uint16_t port_id, rte_eth_tx_burst_t txf)
 	rte_eth_burst_api[port_id].tx_pkt_burst = txf;
 	return 0;
 }
+
+__rte_experimental
+rte_eth_tx_prep_t
+rte_eth_get_tx_prep(uint16_t port_id)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	return rte_eth_burst_api[port_id].tx_pkt_prepare;
+}
+
+__rte_experimental
+int
+rte_eth_set_tx_prep(uint16_t port_id, rte_eth_tx_prep_t tpf)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api))
+		return -EINVAL;
+
+	rte_eth_burst_api[port_id].tx_pkt_prepare = tpf;
+	return 0;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 3eac61a289..01fd1c99c3 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5293,30 +5293,13 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
-
-#ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
-		rte_errno = ENODEV;
-		return 0;
-	}
-#endif
-
-	dev = &rte_eth_devices[port_id];
-
-#ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
-		rte_errno = EINVAL;
+	if (port_id >= RTE_MAX_ETHPORTS)
 		return 0;
-	}
-#endif
 
-	if (!dev->tx_pkt_prepare)
+	if (rte_eth_burst_api[port_id].tx_pkt_prepare == NULL)
 		return nb_pkts;
 
-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
+	return rte_eth_burst_api[port_id].tx_pkt_prepare(port_id, queue_id,
 			tx_pkts, nb_pkts);
 }
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index ace77db1b6..2d4600af4d 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -115,8 +115,6 @@ struct rte_eth_rxtx_callback {
  * process, while the actual configuration data for the device is shared.
  */
 struct rte_eth_dev {
-	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
 	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
 	eth_rx_descriptor_done_t   rx_descriptor_done;   /**< Check rxd DD bit. */
 	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 8f8a6b4a5a..b26fd478aa 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -254,8 +254,10 @@ EXPERIMENTAL {
 	rte_eth_burst_api;
 	rte_eth_get_rx_burst;
 	rte_eth_get_tx_burst;
+	rte_eth_get_tx_prep;
 	rte_eth_set_rx_burst;
 	rte_eth_set_tx_burst;
+	rte_eth_set_tx_prep;
 };
 
 INTERNAL {
-- 
2.26.3



* [dpdk-dev] [RFC 5/7] eth: make drivers to use new API to obtain descriptor status
  2021-08-20 16:28 [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
                   ` (3 preceding siblings ...)
  2021-08-20 16:28 ` [dpdk-dev] [RFC 4/7] eth: make drivers to use new API for Tx prepare Konstantin Ananyev
@ 2021-08-20 16:28 ` Konstantin Ananyev
  2021-08-20 16:28 ` [dpdk-dev] [RFC 6/7] eth: make drivers to use new API for Rx queue count Konstantin Ananyev
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
	beilei.xing, techboard, Konstantin Ananyev

ethdev:
 - make changes so drivers can start using the new API for
   rx_descriptor_status() and tx_descriptor_status().
 - remove the related function pointers from 'struct rte_eth_dev'.
drivers/net:
 - adjust to the new descriptor status API (see the illustrative sketch below).
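
For illustration, a minimal sketch of the descriptor status conversion,
using hypothetical mypmd_* names that are not part of this series; real
drivers follow the i40e/ice changes below:

#include <ethdev_driver.h>

/* Existing queue-pointer based routine (trivial body, assumed for the
 * sketch only: always reports the descriptor as available).
 */
static int
mypmd_rx_descriptor_status(__rte_unused void *rx_queue,
			   __rte_unused uint16_t offset)
{
	return RTE_ETH_RX_DESC_AVAIL;
}

/* Wraps the queue-pointer based routine into the new port_id/queue_id
 * based API (wrapper name: _rte_eth_mypmd_rx_descriptor_status).
 */
_RTE_ETH_RX_DESC_DEF(mypmd_rx_descriptor_status)

void
mypmd_dev_init_burst_ops(struct rte_eth_dev *dev)
{
	/* was: dev->rx_descriptor_status = mypmd_rx_descriptor_status; */
	rte_eth_set_rx_desc_st(dev->data->port_id,
		_RTE_ETH_FUNC(mypmd_rx_descriptor_status));
}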

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    |   6 +-
 drivers/net/i40e/i40e_ethdev_vf.c |   6 +-
 drivers/net/i40e/i40e_rxtx.c      |   8 +-
 drivers/net/i40e/i40e_rxtx.h      |   4 +-
 drivers/net/ice/ice_ethdev.c      |   6 +-
 drivers/net/ice/ice_rxtx.c        |   8 +-
 drivers/net/ice/ice_rxtx.h        |   4 +-
 lib/ethdev/ethdev_driver.h        | 132 ++++++++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.c           |  45 +++++++++-
 lib/ethdev/rte_ethdev.h           |  40 ++++-----
 lib/ethdev/rte_ethdev_core.h      |   3 -
 lib/ethdev/version.map            |   4 +
 12 files changed, 221 insertions(+), 45 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index bd6408da90..da5a7ec168 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1435,8 +1435,10 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 	dev->dev_ops = &i40e_eth_dev_ops;
 	dev->rx_queue_count = i40e_dev_rx_queue_count;
 	dev->rx_descriptor_done = i40e_dev_rx_descriptor_done;
-	dev->rx_descriptor_status = i40e_dev_rx_descriptor_status;
-	dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
+	rte_eth_set_rx_desc_st(dev->data->port_id,
+		_RTE_ETH_FUNC(i40e_dev_rx_descriptor_status));
+	rte_eth_set_tx_desc_st(dev->data->port_id,
+		_RTE_ETH_FUNC(i40e_dev_tx_descriptor_status));
 	rte_eth_set_rx_burst(dev->data->port_id, _RTE_ETH_FUNC(i40e_recv_pkts));
 	rte_eth_set_tx_burst(dev->data->port_id, _RTE_ETH_FUNC(i40e_xmit_pkts));
 	rte_eth_set_tx_prep(dev->data->port_id, _RTE_ETH_FUNC(i40e_prep_pkts));
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 3755bdb66a..f1bd6d4e1b 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1574,8 +1574,10 @@ i40evf_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->dev_ops = &i40evf_eth_dev_ops;
 	eth_dev->rx_queue_count       = i40e_dev_rx_queue_count;
 	eth_dev->rx_descriptor_done   = i40e_dev_rx_descriptor_done;
-	eth_dev->rx_descriptor_status = i40e_dev_rx_descriptor_status;
-	eth_dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
+	rte_eth_set_rx_desc_st(eth_dev->data->port_id,
+		_RTE_ETH_FUNC(i40e_dev_rx_descriptor_status));
+	rte_eth_set_tx_desc_st(eth_dev->data->port_id,
+		_RTE_ETH_FUNC(i40e_dev_tx_descriptor_status));
 	rte_eth_set_rx_burst(eth_dev->data->port_id,
 		_RTE_ETH_FUNC(i40e_recv_pkts));
 	rte_eth_set_tx_burst(eth_dev->data->port_id,
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 44c4d33879..310bb3f496 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2189,7 +2189,7 @@ i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
 	return ret;
 }
 
-int
+static int
 i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
 	struct i40e_rx_queue *rxq = rx_queue;
@@ -2216,7 +2216,9 @@ i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 	return RTE_ETH_RX_DESC_AVAIL;
 }
 
-int
+_RTE_ETH_RX_DESC_DEF(i40e_dev_rx_descriptor_status)
+
+static int
 i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 {
 	struct i40e_tx_queue *txq = tx_queue;
@@ -2247,6 +2249,8 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 	return RTE_ETH_TX_DESC_FULL;
 }
 
+_RTE_ETH_TX_DESC_DEF(i40e_dev_tx_descriptor_status)
+
 static int
 i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
 				struct i40e_tx_queue *txq)
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 85bc29b23a..42b3407fe2 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -223,8 +223,8 @@ void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
 				 uint16_t rx_queue_id);
 int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
-int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
-int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
+_RTE_ETH_RX_DESC_PROTO(i40e_dev_rx_descriptor_status);
+_RTE_ETH_TX_DESC_PROTO(i40e_dev_tx_descriptor_status);
 
 _RTE_ETH_RX_PROTO(i40e_recv_pkts_vec);
 _RTE_ETH_RX_PROTO(i40e_recv_scattered_pkts_vec);
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 42b6f5928d..8907737ba3 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1994,8 +1994,10 @@ ice_dev_init(struct rte_eth_dev *dev)
 
 	dev->dev_ops = &ice_eth_dev_ops;
 	dev->rx_queue_count = ice_rx_queue_count;
-	dev->rx_descriptor_status = ice_rx_descriptor_status;
-	dev->tx_descriptor_status = ice_tx_descriptor_status;
+	rte_eth_set_rx_desc_st(dev->data->port_id,
+		 _RTE_ETH_FUNC(ice_rx_descriptor_status));
+	rte_eth_set_tx_desc_st(dev->data->port_id,
+		 _RTE_ETH_FUNC(ice_tx_descriptor_status));
 	rte_eth_set_rx_burst(dev->data->port_id, _RTE_ETH_FUNC(ice_recv_pkts));
 	rte_eth_set_tx_burst(dev->data->port_id, _RTE_ETH_FUNC(ice_xmit_pkts));
 	rte_eth_set_tx_prep(dev->data->port_id, _RTE_ETH_FUNC(ice_prep_pkts));
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 2ddcbbb721..461135b4b4 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2021,7 +2021,7 @@ ice_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 	return NULL;
 }
 
-int
+static int
 ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
 	volatile union ice_rx_flex_desc *rxdp;
@@ -2046,7 +2046,9 @@ ice_rx_descriptor_status(void *rx_queue, uint16_t offset)
 	return RTE_ETH_RX_DESC_AVAIL;
 }
 
-int
+_RTE_ETH_RX_DESC_DEF(ice_rx_descriptor_status)
+
+static int
 ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
 {
 	struct ice_tx_queue *txq = tx_queue;
@@ -2077,6 +2079,8 @@ ice_tx_descriptor_status(void *tx_queue, uint16_t offset)
 	return RTE_ETH_TX_DESC_FULL;
 }
 
+_RTE_ETH_TX_DESC_DEF(ice_tx_descriptor_status)
+
 void
 ice_free_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 53f6080cc9..49418442eb 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -228,8 +228,8 @@ int ice_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_burst_mode *mode);
 int ice_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_burst_mode *mode);
-int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
-int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
+_RTE_ETH_RX_DESC_PROTO(ice_rx_descriptor_status);
+_RTE_ETH_TX_DESC_PROTO(ice_tx_descriptor_status);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
 const uint32_t *ice_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void ice_select_rxd_to_pkt_fields_handler(struct ice_rx_queue *rxq,
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index fe1b4fc349..eec56189a0 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1789,6 +1789,138 @@ rte_eth_tx_prep_t rte_eth_get_tx_prep(uint16_t port_id);
 __rte_experimental
 int rte_eth_set_tx_prep(uint16_t port_id, rte_eth_tx_prep_t txf);
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_descriptor_status API.
+ * Should be called as first thing on entrance to the PMD's
+ * rx_descriptor_status implementation.
+ * Does necessary checks and retrieves pointer to device RX queue.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue.
+ * @param rxq
+ *  The output pointer to the RX queue structure.
+ *
+ * @return
+ *  Zero on success or negative error code otherwise.
+ */
+__rte_internal
+static inline int
+_rte_eth_rx_desc_prolog(uint16_t port_id, uint16_t queue_id, void **rxq)
+{
+	struct rte_eth_dev *dev;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+#endif
+	dev = &rte_eth_devices[port_id];
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (queue_id >= dev->data->nb_rx_queues)
+		return -ENODEV;
+#endif
+	*rxq = dev->data->rx_queues[queue_id];
+        return 0;
+}
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD rx_descriptor_status
+ * functions.
+ */
+#define _RTE_ETH_RX_DESC_PROTO(fn) \
+	int _RTE_ETH_FUNC(fn)(uint16_t port_id, uint16_t queue_id, \
+			uint16_t offset)
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD rx_descriptor_status
+ * functions.
+ */
+#define _RTE_ETH_RX_DESC_DEF(fn) \
+_RTE_ETH_RX_DESC_PROTO(fn) \
+{ \
+	int rc; \
+	void *rxq; \
+	rc = _rte_eth_rx_desc_prolog(port_id, queue_id, &rxq); \
+	if (rc != 0) \
+		return rc; \
+	return fn(rxq, offset); \
+}
+
+__rte_experimental
+rte_eth_rx_descriptor_status_t rte_eth_get_rx_desc_st(uint16_t port_id);
+
+__rte_experimental
+int rte_eth_set_rx_desc_st(uint16_t port_id, rte_eth_rx_descriptor_status_t rf);
+
+/**
+ * @internal
+ * Helper routine for eth driver tx_descriptor_status API.
+ * Should be called as first thing on entrance to the PMD's
+ * tx_descriptor_status implementation.
+ * Does necessary checks and retrieves pointer to device TX queue.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the transmit queue.
+ * @param txq
+ *  The output pointer to the TX queue structure.
+ *
+ * @return
+ *  Zero on success or negative error code otherwise.
+ */
+__rte_internal
+static inline int
+_rte_eth_tx_desc_prolog(uint16_t port_id, uint16_t queue_id, void **txq)
+{
+	struct rte_eth_dev *dev;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+#endif
+	dev = &rte_eth_devices[port_id];
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (queue_id >= dev->data->nb_tx_queues)
+		return -ENODEV;
+#endif
+	*txq = dev->data->tx_queues[queue_id];
+        return 0;
+}
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD tx_descriptor_status
+ * functions.
+ */
+#define _RTE_ETH_TX_DESC_PROTO(fn) \
+	int _RTE_ETH_FUNC(fn)(uint16_t port_id, uint16_t queue_id, \
+			uint16_t offset)
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD tx_descriptor_status
+ * functions.
+ */
+#define _RTE_ETH_TX_DESC_DEF(fn) \
+_RTE_ETH_TX_DESC_PROTO(fn) \
+{ \
+	int rc; \
+	void *txq; \
+	rc = _rte_eth_tx_desc_prolog(port_id, queue_id, &txq); \
+	if (rc != 0) \
+		return rc; \
+	return fn(txq, offset); \
+}
+
+__rte_experimental
+rte_eth_tx_descriptor_status_t rte_eth_get_tx_desc_st(uint16_t port_id);
+
+__rte_experimental
+int rte_eth_set_tx_desc_st(uint16_t port_id, rte_eth_tx_descriptor_status_t rf);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 6b1d9c5f83..e48d1ec281 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -590,8 +590,6 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	eth_dev->intr_handle = NULL;
 	eth_dev->rx_queue_count = NULL;
 	eth_dev->rx_descriptor_done = NULL;
-	eth_dev->rx_descriptor_status = NULL;
-	eth_dev->tx_descriptor_status = NULL;
 	eth_dev->dev_ops = NULL;
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
@@ -6400,3 +6398,46 @@ rte_eth_set_tx_prep(uint16_t port_id, rte_eth_tx_prep_t tpf)
 	rte_eth_burst_api[port_id].tx_pkt_prepare = tpf;
 	return 0;
 }
+
+__rte_experimental
+rte_eth_rx_descriptor_status_t
+rte_eth_get_rx_desc_st(uint16_t port_id)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	return rte_eth_burst_api[port_id].rx_descriptor_status;
+}
+
+__rte_experimental
+int
+rte_eth_set_rx_desc_st(uint16_t port_id, rte_eth_rx_descriptor_status_t rf)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api))
+		return -EINVAL;
+
+	rte_eth_burst_api[port_id].rx_descriptor_status = rf;
+	return 0;
+}
+
+rte_eth_tx_descriptor_status_t
+rte_eth_get_tx_desc_st(uint16_t port_id)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	return rte_eth_burst_api[port_id].tx_descriptor_status;
+}
+
+__rte_experimental
+int
+rte_eth_set_tx_desc_st(uint16_t port_id, rte_eth_tx_descriptor_status_t tf)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api))
+		return -EINVAL;
+
+	rte_eth_burst_api[port_id].tx_descriptor_status = tf;
+	return 0;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 01fd1c99c3..073b532b7b 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5082,21 +5082,15 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	rte_eth_rx_descriptor_status_t rds;
 
-#ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-#endif
-	dev = &rte_eth_devices[port_id];
-#ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
-		return -ENODEV;
-#endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
+	if (port_id >= RTE_MAX_ETHPORTS)
+		return -EINVAL;
+
+	rds = rte_eth_burst_api[port_id].rx_descriptor_status;
+	RTE_FUNC_PTR_OR_ERR_RET(rds, -ENOTSUP);
 
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	return (rds)(port_id, queue_id, offset);
 }
 
 #define RTE_ETH_TX_DESC_FULL    0 /**< Desc filled for hw, waiting xmit. */
@@ -5139,21 +5133,15 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
 	uint16_t queue_id, uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	rte_eth_tx_descriptor_status_t tds;
 
-#ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-#endif
-	dev = &rte_eth_devices[port_id];
-#ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
-		return -ENODEV;
-#endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
+	if (port_id >= RTE_MAX_ETHPORTS)
+		return -EINVAL;
+
+	tds = rte_eth_burst_api[port_id].tx_descriptor_status;
+	RTE_FUNC_PTR_OR_ERR_RET(tds, -ENOTSUP);
 
-	return (*dev->tx_descriptor_status)(txq, offset);
+	return (tds)(port_id, queue_id, offset);
 }
 
 /**
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 2d4600af4d..1e42bacfce 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -117,9 +117,6 @@ struct rte_eth_rxtx_callback {
 struct rte_eth_dev {
 	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
 	eth_rx_descriptor_done_t   rx_descriptor_done;   /**< Check rxd DD bit. */
-	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
-	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
 	/**
 	 * Next two fields are per-device data but *data is shared between
 	 * primary and secondary processes and *process_private is per-process
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index b26fd478aa..802d9c3c11 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -253,9 +253,13 @@ EXPERIMENTAL {
 	# added in 21.11
 	rte_eth_burst_api;
 	rte_eth_get_rx_burst;
+	rte_eth_get_rx_desc_st;
+	rte_eth_get_tx_desc_st;
 	rte_eth_get_tx_burst;
 	rte_eth_get_tx_prep;
 	rte_eth_set_rx_burst;
+	rte_eth_set_rx_desc_st;
+	rte_eth_set_tx_desc_st;
 	rte_eth_set_tx_burst;
 	rte_eth_set_tx_prep;
 };
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [RFC 6/7] eth: make drivers to use new API for Rx queue count
  2021-08-20 16:28 [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
                   ` (4 preceding siblings ...)
  2021-08-20 16:28 ` [dpdk-dev] [RFC 5/7] eth: make drivers to use new API to obtain descriptor status Konstantin Ananyev
@ 2021-08-20 16:28 ` Konstantin Ananyev
  2021-08-20 16:28 ` [dpdk-dev] [RFC 7/7] eth: hide eth dev related structures Konstantin Ananyev
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
	beilei.xing, techboard, Konstantin Ananyev

ethdev:
 - make changes so drivers can start using new API for rx_queue_count().
 - provide helper functions/macros.
 - remove rx_queue_count() from 'struct rte_eth_dev'.
drivers/net:
 - adjust to new rx_queue_count() API.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    |  3 +-
 drivers/net/i40e/i40e_ethdev_vf.c |  3 +-
 drivers/net/i40e/i40e_rxtx.c      |  4 ++-
 drivers/net/i40e/i40e_rxtx.h      |  3 +-
 drivers/net/ice/ice_ethdev.c      |  3 +-
 drivers/net/ice/ice_rxtx.c        |  4 ++-
 drivers/net/ice/ice_rxtx.h        |  2 +-
 lib/ethdev/ethdev_driver.h        | 58 +++++++++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.c           | 24 ++++++++++++-
 lib/ethdev/rte_ethdev.h           | 13 ++++---
 lib/ethdev/rte_ethdev_core.h      |  1 -
 lib/ethdev/version.map            |  6 ++--
 12 files changed, 105 insertions(+), 19 deletions(-)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index da5a7ec168..a99363659a 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1433,7 +1433,8 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 	PMD_INIT_FUNC_TRACE();
 
 	dev->dev_ops = &i40e_eth_dev_ops;
-	dev->rx_queue_count = i40e_dev_rx_queue_count;
+	rte_eth_set_rx_qcnt(dev->data->port_id,
+		_RTE_ETH_FUNC(i40e_dev_rx_queue_count));
 	dev->rx_descriptor_done = i40e_dev_rx_descriptor_done;
 	rte_eth_set_rx_desc_st(dev->data->port_id,
 		_RTE_ETH_FUNC(i40e_dev_rx_descriptor_status));
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index f1bd6d4e1b..0da30f6784 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1572,7 +1572,8 @@ i40evf_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &i40evf_eth_dev_ops;
-	eth_dev->rx_queue_count       = i40e_dev_rx_queue_count;
+	rte_eth_set_rx_qcnt(eth_dev->data->port_id,
+		_RTE_ETH_FUNC(i40e_dev_rx_queue_count));
 	eth_dev->rx_descriptor_done   = i40e_dev_rx_descriptor_done;
 	rte_eth_set_rx_desc_st(eth_dev->data->port_id,
 		_RTE_ETH_FUNC(i40e_dev_rx_descriptor_status));
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 310bb3f496..f0f42c41b2 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2134,7 +2134,7 @@ i40e_dev_rx_queue_release(void *rxq)
 	rte_free(q);
 }
 
-uint32_t
+static uint32_t
 i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 #define I40E_RXQ_SCAN_INTERVAL 4
@@ -2163,6 +2163,8 @@ i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	return desc;
 }
 
+_RTE_ETH_RX_QCNT_DEF(i40e_dev_rx_queue_count)
+
 int
 i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 42b3407fe2..3d98b1f9fb 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -220,8 +220,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
-				 uint16_t rx_queue_id);
+_RTE_ETH_RX_QCNT_PROTO(i40e_dev_rx_queue_count);
 int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 _RTE_ETH_RX_DESC_PROTO(i40e_dev_rx_descriptor_status);
 _RTE_ETH_TX_DESC_PROTO(i40e_dev_tx_descriptor_status);
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 8907737ba3..cb27f2f501 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1993,7 +1993,8 @@ ice_dev_init(struct rte_eth_dev *dev)
 #endif
 
 	dev->dev_ops = &ice_eth_dev_ops;
-	dev->rx_queue_count = ice_rx_queue_count;
+	rte_eth_set_rx_qcnt(dev->data->port_id,
+		 _RTE_ETH_FUNC(ice_rx_queue_count));
 	rte_eth_set_rx_desc_st(dev->data->port_id,
 		 _RTE_ETH_FUNC(ice_rx_descriptor_status));
 	rte_eth_set_tx_desc_st(dev->data->port_id,
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 461135b4b4..e7af0a649b 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1426,7 +1426,7 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
 }
 
-uint32_t
+static uint32_t
 ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 #define ICE_RXQ_SCAN_INTERVAL 4
@@ -1454,6 +1454,8 @@ ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	return desc;
 }
 
+_RTE_ETH_RX_QCNT_DEF(ice_rx_queue_count)
+
 #define ICE_RX_FLEX_ERR0_BITS	\
 	((1 << ICE_RX_FLEX_DESC_STATUS0_HBO_S) |	\
 	 (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |	\
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 49418442eb..d1e0a8b011 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -219,7 +219,7 @@ _RTE_ETH_TX_PROTO(ice_prep_pkts);
 void ice_set_tx_function_flag(struct rte_eth_dev *dev,
 			      struct ice_tx_queue *txq);
 void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+_RTE_ETH_RX_QCNT_PROTO(ice_rx_queue_count);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index eec56189a0..accaf1aab2 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1921,6 +1921,64 @@ rte_eth_tx_descriptor_status_t rte_eth_get_tx_desc_st(uint16_t port_id);
 __rte_experimental
 int rte_eth_set_tx_desc_st(uint16_t port_id, rte_eth_tx_descriptor_status_t rf);
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_queue_count API.
+ * Should be called as first thing on entrance to the PMD's
+ * rx_queue_count implementation.
+ * Does necessary checks for input parameters.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue.
+ *
+ * @return
+ *  Zero on success or negative error code otherwise.
+ */
+__rte_internal
+static inline int
+_rte_eth_rx_qcnt_prolog(uint16_t port_id, uint16_t queue_id)
+{
+	struct rte_eth_dev *dev;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+	if (queue_id >= dev->data->nb_rx_queues ||
+			dev->data->rx_queues[queue_id] == NULL)
+		return -EINVAL;
+	return 0;
+}
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD rx_queue_count
+ * functions.
+ */
+#define _RTE_ETH_RX_QCNT_PROTO(fn) \
+	int _RTE_ETH_FUNC(fn)(uint16_t port_id, uint16_t queue_id)
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD rx_queue_count
+ * functions.
+ */
+#define _RTE_ETH_RX_QCNT_DEF(fn) \
+_RTE_ETH_RX_QCNT_PROTO(fn) \
+{ \
+	int rc; \
+       	rc = _rte_eth_rx_qcnt_prolog(port_id, queue_id); \
+	if (rc != 0) \
+		return rc; \
+	return fn(&rte_eth_devices[port_id], queue_id); \
+}
+
+__rte_experimental
+rte_eth_rx_queue_count_t rte_eth_get_rx_qcnt(uint16_t port_id);
+
+__rte_experimental
+int rte_eth_set_rx_qcnt(uint16_t port_id, rte_eth_rx_queue_count_t rf);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index e48d1ec281..0cc9f40e95 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -588,7 +588,6 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	eth_dev->device = NULL;
 	eth_dev->process_private = NULL;
 	eth_dev->intr_handle = NULL;
-	eth_dev->rx_queue_count = NULL;
 	eth_dev->rx_descriptor_done = NULL;
 	eth_dev->dev_ops = NULL;
 
@@ -6421,6 +6420,7 @@ rte_eth_set_rx_desc_st(uint16_t port_id, rte_eth_rx_descriptor_status_t rf)
 	return 0;
 }
 
+__rte_experimental
 rte_eth_tx_descriptor_status_t
 rte_eth_get_tx_desc_st(uint16_t port_id)
 {
@@ -6441,3 +6441,25 @@ rte_eth_set_tx_desc_st(uint16_t port_id, rte_eth_tx_descriptor_status_t tf)
 	rte_eth_burst_api[port_id].tx_descriptor_status = tf;
 	return 0;
 }
+
+__rte_experimental
+rte_eth_rx_queue_count_t
+rte_eth_get_rx_qcnt(uint16_t port_id)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	return rte_eth_burst_api[port_id].rx_queue_count;
+}
+
+__rte_experimental
+int
+rte_eth_set_rx_qcnt(uint16_t port_id, rte_eth_rx_queue_count_t rf)
+{
+	if (port_id >= RTE_DIM(rte_eth_burst_api))
+		return -EINVAL;
+
+	rte_eth_burst_api[port_id].rx_queue_count = rf;
+	return 0;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 073b532b7b..73aeef8c36 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5004,16 +5004,15 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	rte_eth_rx_queue_count_t rqc;
 
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-	    dev->data->rx_queues[queue_id] == NULL)
+	if (port_id >= RTE_MAX_ETHPORTS)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev, queue_id);
+	rqc = rte_eth_burst_api[port_id].rx_queue_count;
+	RTE_FUNC_PTR_OR_ERR_RET(rqc, -ENOTSUP);
+
+	return (rqc)(port_id, queue_id);
 }
 
 /**
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 1e42bacfce..53dd5c2114 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -115,7 +115,6 @@ struct rte_eth_rxtx_callback {
  * process, while the actual configuration data for the device is shared.
  */
 struct rte_eth_dev {
-	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
 	eth_rx_descriptor_done_t   rx_descriptor_done;   /**< Check rxd DD bit. */
 	/**
 	 * Next two fields are per-device data but *data is shared between
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 802d9c3c11..ff838fef53 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -254,13 +254,15 @@ EXPERIMENTAL {
 	rte_eth_burst_api;
 	rte_eth_get_rx_burst;
 	rte_eth_get_rx_desc_st;
-	rte_eth_get_tx_desc_st;
+	rte_eth_get_rx_qcnt;
 	rte_eth_get_tx_burst;
+	rte_eth_get_tx_desc_st;
 	rte_eth_get_tx_prep;
 	rte_eth_set_rx_burst;
 	rte_eth_set_rx_desc_st;
-	rte_eth_set_tx_desc_st;
+	rte_eth_set_rx_qcnt;
 	rte_eth_set_tx_burst;
+	rte_eth_set_tx_desc_st;
 	rte_eth_set_tx_prep;
 };
 
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [RFC 7/7] eth: hide eth dev related structures
  2021-08-20 16:28 [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
                   ` (5 preceding siblings ...)
  2021-08-20 16:28 ` [dpdk-dev] [RFC 6/7] eth: make drivers to use new API for Rx queue count Konstantin Ananyev
@ 2021-08-20 16:28 ` Konstantin Ananyev
  2021-08-26 12:37 ` [dpdk-dev] [RFC 0/7] " Jerin Jacob
  2021-09-22 14:09 ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
	beilei.xing, techboard, Konstantin Ananyev

Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into ethdev_driver.h.
Make changes to keep DPDK building after that.
Remove references to 'rte_eth_devices[]' from test-pmd.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test-pmd/config.c                         |  23 ++-
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 lib/ethdev/ethdev_driver.h                    | 135 ++++++++++++++++++
 lib/ethdev/rte_ethdev.c                       |  26 ++++
 lib/ethdev/rte_ethdev.h                       |  44 +++---
 lib/ethdev/rte_ethdev_core.h                  | 135 ------------------
 lib/ethdev/version.map                        |   1 +
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 11 files changed, 197 insertions(+), 177 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 31d8ba1b91..5b6a8a1680 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5197,20 +5197,20 @@ show_macs(portid_t port_id)
 {
 	char buf[RTE_ETHER_ADDR_FMT_SIZE];
 	struct rte_eth_dev_info dev_info;
-	struct rte_ether_addr *addr;
-	uint32_t i, num_macs = 0;
-	struct rte_eth_dev *dev;
-
-	dev = &rte_eth_devices[port_id];
+	int32_t i, rc, num_macs = 0;
 
 	if (eth_dev_info_get_print_err(port_id, &dev_info))
 		return;
 
-	for (i = 0; i < dev_info.max_mac_addrs; i++) {
-		addr = &dev->data->mac_addrs[i];
+	struct rte_ether_addr addr[dev_info.max_mac_addrs];
+	rc = rte_eth_macaddrs_get(port_id, addr, dev_info.max_mac_addrs);
+	if (rc < 0)
+		return;
+
+	for (i = 0; i < rc; i++) {
 
 		/* skip zero address */
-		if (rte_is_zero_ether_addr(addr))
+		if (rte_is_zero_ether_addr(&addr[i]))
 			continue;
 
 		num_macs++;
@@ -5218,14 +5218,13 @@ show_macs(portid_t port_id)
 
 	printf("Number of MAC address added: %d\n", num_macs);
 
-	for (i = 0; i < dev_info.max_mac_addrs; i++) {
-		addr = &dev->data->mac_addrs[i];
+	for (i = 0; i < rc; i++) {
 
 		/* skip zero address */
-		if (rte_is_zero_ether_addr(addr))
+		if (rte_is_zero_ether_addr(&addr[i]))
 			continue;
 
-		rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, addr);
+		rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, &addr[i]);
 		printf("  %s\n", buf);
 	}
 }
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
 
 #include <rte_atomic.h>
 #include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_spinlock.h>
 
 #include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 42100154cd..c71be61158 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
 
 #include <rte_cryptodev_pmd.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_event_crypto_adapter.h>
 
 #include "otx2_cryptodev.h"
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index accaf1aab2..f931fd10e1 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -21,6 +21,141 @@
 extern "C" {
 #endif
 
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+	struct rte_eth_rxtx_callback *next;
+	union{
+		rte_rx_callback_fn rx;
+		rte_tx_callback_fn tx;
+	} fn;
+	void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+	eth_rx_descriptor_done_t   rx_descriptor_done;   /**< Check rxd DD bit. */
+	/**
+	 * Next two fields are per-device data but *data is shared between
+	 * primary and secondary processes and *process_private is per-process
+	 * private. The second one is managed by PMDs if necessary.
+	 */
+	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
+	void *process_private; /**< Pointer to per-process device data. */
+	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	struct rte_device *device; /**< Backing device */
+	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+	/** User application callbacks for NIC interrupts */
+	struct rte_eth_dev_cb_list link_intr_cbs;
+	/**
+	 * User-supplied functions called from rx_burst to post-process
+	 * received packets before passing them to the user
+	 */
+	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	/**
+	 * User-supplied functions called from tx_burst to pre-process
+	 * received packets before passing them to the driver for transmission.
+	 */
+	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	enum rte_eth_dev_state state; /**< Flag indicating the port state */
+	void *security_ctx; /**< Context for security ops */
+
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet device.
+ *
+ * This structure is safe to place in shared memory to be common among different
+ * processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+	void **rx_queues; /**< Array of pointers to RX queues. */
+	void **tx_queues; /**< Array of pointers to TX queues. */
+	uint16_t nb_rx_queues; /**< Number of RX queues. */
+	uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
+
+	void *dev_private;
+			/**< PMD-specific private data.
+			 *   @see rte_eth_dev_release_port()
+			 */
+
+	struct rte_eth_link dev_link;   /**< Link-level information & status. */
+	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
+	uint16_t mtu;                   /**< Maximum Transmission Unit. */
+	uint32_t min_rx_buf_size;
+			/**< Common RX buffer size handled by all queues. */
+
+	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+	struct rte_ether_addr *mac_addrs;
+			/**< Device Ethernet link address.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+			/**< Bitmap associating MAC addresses to pools. */
+	struct rte_ether_addr *hash_mac_addrs;
+			/**< Device Ethernet MAC addresses of hash filtering.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint16_t port_id;           /**< Device [external] port identifier. */
+
+	__extension__
+	uint8_t promiscuous   : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
+		scattered_rx : 1,  /**< RX of scattered packets is ON(1) / OFF(0) */
+		all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
+		dev_started : 1,   /**< Device state: STARTED(1) / STOPPED(0). */
+		lro         : 1,   /**< RX LRO is ON(1) / OFF(0) */
+		dev_configured : 1;
+		/**< Indicates whether the device is configured.
+		 *   CONFIGURED(1) / NOT CONFIGURED(0).
+		 */
+	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint32_t dev_flags;             /**< Capabilities. */
+	int numa_node;                  /**< NUMA node connection. */
+	struct rte_vlan_filter_conf vlan_filter_conf;
+			/**< VLAN filter configuration. */
+	struct rte_eth_dev_owner owner; /**< The port owner. */
+	uint16_t representor_id;
+			/**< Switch-specific identifier.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
+
+	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
 /**< @internal Declaration of the hairpin peer queue information structure. */
 struct rte_hairpin_peer_info;
 
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0cc9f40e95..a41f3b2d57 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3588,6 +3588,32 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	return ret;
 }
 
+__rte_experimental
+int
+rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[], uint32_t num)
+{
+	int32_t ret;
+	struct rte_eth_dev *dev;
+	struct rte_eth_dev_info dev_info;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
+	if (ma == NULL) {
+		RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__);
+		return -EINVAL;
+	}
+
+	num = RTE_MIN(dev_info.max_mac_addrs, num);
+	memcpy(ma, dev->data->mac_addrs, num * sizeof(ma[0]));
+
+	return num;
+}
+
 int
 rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
 {
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 73aeef8c36..0f425cf042 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3013,6 +3013,25 @@ int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
  */
 int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
 
+/**
+ * Retrieve the Ethernet addresses of an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param ma
+ *   A pointer to an array of structures of type *ether_addr* to be filled with
+ *   the Ethernet addresses of the Ethernet device.
+ * @param num
+ *   Number of elements in the *ma* array.
+ * @return
+ *   - number of retrieved MAC addresses if successful.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-EINVAL) if bad parameter.
+ */
+__rte_experimental
+int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[],
+		uint32_t num);
+
 /**
  * Retrieve the contextual information of an Ethernet device.
  *
@@ -5015,31 +5034,6 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	return (rqc)(port_id, queue_id);
 }
 
-/**
- * Check if the DD bit of the specific RX descriptor in the queue has been set
- *
- * @param port_id
- *  The port identifier of the Ethernet device.
- * @param queue_id
- *  The queue id on the specific port.
- * @param offset
- *  The offset of the descriptor ID from tail.
- * @return
- *  - (1) if the specific DD bit is set.
- *  - (0) if the specific DD bit is not set.
- *  - (-ENODEV) if *port_id* invalid.
- *  - (-ENOTSUP) if the device does not support this function
- */
-__rte_deprecated
-static inline int
-rte_eth_rx_descriptor_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_done, -ENOTSUP);
-	return (*dev->rx_descriptor_done)(dev->data->rx_queues[queue_id], offset);
-}
-
 #define RTE_ETH_RX_DESC_AVAIL    0 /**< Desc available for hw. */
 #define RTE_ETH_RX_DESC_DONE     1 /**< Desc done, filled by hw. */
 #define RTE_ETH_RX_DESC_UNAVAIL  2 /**< Desc used by driver or hw. */
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 53dd5c2114..06f42ce899 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -90,139 +90,4 @@ struct rte_eth_burst_api {
 
 extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
 
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
-	struct rte_eth_rxtx_callback *next;
-	union{
-		rte_rx_callback_fn rx;
-		rte_tx_callback_fn tx;
-	} fn;
-	void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
-	eth_rx_descriptor_done_t   rx_descriptor_done;   /**< Check rxd DD bit. */
-	/**
-	 * Next two fields are per-device data but *data is shared between
-	 * primary and secondary processes and *process_private is per-process
-	 * private. The second one is managed by PMDs if necessary.
-	 */
-	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
-	void *process_private; /**< Pointer to per-process device data. */
-	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
-	struct rte_device *device; /**< Backing device */
-	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
-	/** User application callbacks for NIC interrupts */
-	struct rte_eth_dev_cb_list link_intr_cbs;
-	/**
-	 * User-supplied functions called from rx_burst to post-process
-	 * received packets before passing them to the user
-	 */
-	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	/**
-	 * User-supplied functions called from tx_burst to pre-process
-	 * received packets before passing them to the driver for transmission.
-	 */
-	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	enum rte_eth_dev_state state; /**< Flag indicating the port state */
-	void *security_ctx; /**< Context for security ops */
-
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
-	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
-	void **rx_queues; /**< Array of pointers to RX queues. */
-	void **tx_queues; /**< Array of pointers to TX queues. */
-	uint16_t nb_rx_queues; /**< Number of RX queues. */
-	uint16_t nb_tx_queues; /**< Number of TX queues. */
-
-	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
-
-	void *dev_private;
-			/**< PMD-specific private data.
-			 *   @see rte_eth_dev_release_port()
-			 */
-
-	struct rte_eth_link dev_link;   /**< Link-level information & status. */
-	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
-	uint16_t mtu;                   /**< Maximum Transmission Unit. */
-	uint32_t min_rx_buf_size;
-			/**< Common RX buffer size handled by all queues. */
-
-	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
-	struct rte_ether_addr *mac_addrs;
-			/**< Device Ethernet link address.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
-			/**< Bitmap associating MAC addresses to pools. */
-	struct rte_ether_addr *hash_mac_addrs;
-			/**< Device Ethernet MAC addresses of hash filtering.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint16_t port_id;           /**< Device [external] port identifier. */
-
-	__extension__
-	uint8_t promiscuous   : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
-		scattered_rx : 1,  /**< RX of scattered packets is ON(1) / OFF(0) */
-		all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
-		dev_started : 1,   /**< Device state: STARTED(1) / STOPPED(0). */
-		lro         : 1,   /**< RX LRO is ON(1) / OFF(0) */
-		dev_configured : 1;
-		/**< Indicates whether the device is configured.
-		 *   CONFIGURED(1) / NOT CONFIGURED(0).
-		 */
-	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint32_t dev_flags;             /**< Capabilities. */
-	int numa_node;                  /**< NUMA node connection. */
-	struct rte_vlan_filter_conf vlan_filter_conf;
-			/**< VLAN filter configuration. */
-	struct rte_eth_dev_owner owner; /**< The port owner. */
-	uint16_t representor_id;
-			/**< Switch-specific identifier.
-			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
-			 */
-
-	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
 #endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index ff838fef53..24d90c05ea 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -258,6 +258,7 @@ EXPERIMENTAL {
 	rte_eth_get_tx_burst;
 	rte_eth_get_tx_desc_st;
 	rte_eth_get_tx_prep;
+	rte_eth_macaddrs_get;
 	rte_eth_set_rx_burst;
 	rte_eth_set_rx_desc_st;
 	rte_eth_set_rx_qcnt;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
 #include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
  */
 #include <rte_spinlock.h>
 #include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "eventdev_pmd.h"
 #include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 594dd5e759..ca1552475e 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_cryptodev.h>
 #include <rte_cryptodev_pmd.h>
 #include <rte_telemetry.h>
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  2021-08-20 16:28 [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
                   ` (6 preceding siblings ...)
  2021-08-20 16:28 ` [dpdk-dev] [RFC 7/7] eth: hide eth dev related structures Konstantin Ananyev
@ 2021-08-26 12:37 ` Jerin Jacob
  2021-09-06 18:09   ` Ferruh Yigit
  2021-09-14 13:33   ` Ananyev, Konstantin
  2021-09-22 14:09 ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
  8 siblings, 2 replies; 112+ messages in thread
From: Jerin Jacob @ 2021-08-26 12:37 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dpdk-dev, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
	Qiming Yang, Qi Zhang, Beilei Xing, techboard

On Fri, Aug 20, 2021 at 9:59 PM Konstantin Ananyev
<konstantin.ananyev@intel.com> wrote:
>
> NOTE: This is just an RFC to start further discussion and collect the feedback.
> Due to significant amount of work, changes required are applied only to two
> PMDs so far: net/i40e and net/ice.
> So to build it you'll need to add:
> -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> to your config options.

>
> That approach was selected to avoid(/minimize) possible performance losses.
>
> So far I done only limited amount functional and performance testing.
> Didn't spot any functional problems, and performance numbers
> remains the same before and after the patch on my box (testpmd, macswap fwd).


Based on testing on octeontx2, we see some regression in testpmd and
a bit on l3fwd too.

Without patch: 73.5mpps/core in testpmd iofwd
With patch: 72.5mpps/core in testpmd iofwd

Based on my understanding it is due to additional indirection.

My suggestion to fix the problem:
remove the additional `data` indirection and pull the callback function
pointers back,
and keep the rest as opaque as done in the existing patch, like [1].

I don't believe this has any real implication for future ABI stability,
as we will not be adding
any new items to rte_eth_fp; new features can be added in the slow-path
rte_eth_dev, as mentioned in the patch.

[2] is a patch doing the same, and I don't see any performance
regression with [2].


[1]
- struct rte_eth_burst_api {
- struct rte_eth_fp {
+ void *data;
  rte_eth_rx_burst_t rx_pkt_burst;
  /**< PMD receive function. */
  rte_eth_tx_burst_t tx_pkt_burst;
@@ -85,8 +100,19 @@ struct rte_eth_burst_api {
  /**< Check the status of a Rx descriptor. */
  rte_eth_tx_descriptor_status_t tx_descriptor_status;
  /**< Check the status of a Tx descriptor. */
+ /**
+ * User-supplied functions called from rx_burst to post-process
+ * received packets before passing them to the user
+ */
+ struct rte_eth_rxtx_callback
+ *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+ /**
+ * User-supplied functions called from tx_burst to pre-process
+ * received packets before passing them to the driver for transmission.
+ */
+ struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
  uintptr_t reserved[2];
-} __rte_cache_min_aligned;
+} __rte_cache_aligned;

[2]
https://pastebin.com/CuqkrCW4

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  2021-08-26 12:37 ` [dpdk-dev] [RFC 0/7] " Jerin Jacob
@ 2021-09-06 18:09   ` Ferruh Yigit
  2021-09-14 13:33   ` Ananyev, Konstantin
  1 sibling, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-09-06 18:09 UTC (permalink / raw)
  To: Jerin Jacob, Konstantin Ananyev
  Cc: dpdk-dev, Thomas Monjalon, Andrew Rybchenko, Qiming Yang,
	Qi Zhang, Beilei Xing, techboard

On 8/26/2021 1:37 PM, Jerin Jacob wrote:
> On Fri, Aug 20, 2021 at 9:59 PM Konstantin Ananyev
> <konstantin.ananyev@intel.com> wrote:
>>
>> NOTE: This is just an RFC to start further discussion and collect the feedback.
>> Due to significant amount of work, changes required are applied only to two
>> PMDs so far: net/i40e and net/ice.
>> So to build it you'll need to add:
>> -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
>> to your config options.
> 
>>
>> That approach was selected to avoid(/minimize) possible performance losses.
>>
>> So far I done only limited amount functional and performance testing.
>> Didn't spot any functional problems, and performance numbers
>> remains the same before and after the patch on my box (testpmd, macswap fwd).
> 
> 
> Based on testing on octeontx2, we see some regression in testpmd and
> a bit on l3fwd too.
> 
> Without patch: 73.5mpps/core in testpmd iofwd
> With patch: 72.5mpps/core in testpmd iofwd
> 

So roughly 1.3% drop.

> Based on my understanding it is due to additional indirection.
> 

What is the additional indirection? As far as I can see, all the dereferences are the same.

> My suggestion to fix the problem:
> remove the additional `data` indirection and pull the callback function
> pointers back,
> and keep the rest as opaque as done in the existing patch, like [1].
> 
> I don't believe this has any real implication for future ABI stability,
> as we will not be adding
> any new items to rte_eth_fp; new features can be added in the slow-path
> rte_eth_dev, as mentioned in the patch.
> 
> [2] is a patch doing the same, and I don't see any performance
> regression with [2].
> 

There is only an additional check after Konstantin's patch (in both
'rte_eth_rx_burst()' & 'rte_eth_tx_burst()'):
"
if (port_id >= RTE_MAX_ETHPORTS)
        return -EINVAL;
"

I can see that check removed again in your patch [2], which fixes the regression.
I wonder if this check could be the cause of the performance drop; can you please
test the original patch with just the 'RTE_MAX_ETHPORTS' check removed?

Thanks,
ferruh

> 
> [1]
> - struct rte_eth_burst_api {
> - struct rte_eth_fp {
> + void *data;
>   rte_eth_rx_burst_t rx_pkt_burst;
>   /**< PMD receive function. */
>   rte_eth_tx_burst_t tx_pkt_burst;
> @@ -85,8 +100,19 @@ struct rte_eth_burst_api {
>   /**< Check the status of a Rx descriptor. */
>   rte_eth_tx_descriptor_status_t tx_descriptor_status;
>   /**< Check the status of a Tx descriptor. */
> + /**
> + * User-supplied functions called from rx_burst to post-process
> + * received packets before passing them to the user
> + */
> + struct rte_eth_rxtx_callback
> + *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> + /**
> + * User-supplied functions called from tx_burst to pre-process
> + * received packets before passing them to the driver for transmission.
> + */
> + struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
>   uintptr_t reserved[2];
> -} __rte_cache_min_aligned;
> +} __rte_cache_aligned;
> 
> [2]
> https://pastebin.com/CuqkrCW4
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC 2/7] eth: make drivers to use new API for Rx
  2021-08-20 16:28 ` [dpdk-dev] [RFC 2/7] eth: make drivers to use new API for Rx Konstantin Ananyev
@ 2021-09-06 18:41   ` Ferruh Yigit
  2021-09-14 14:28     ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-09-06 18:41 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: thomas, andrew.rybchenko, qiming.yang, qi.z.zhang, beilei.xing,
	techboard

On 8/20/2021 5:28 PM, Konstantin Ananyev wrote:
> ethdev:
>  - make changes so drivers can start using new API for rx_pkt_burst().
>  - provide helper functions/macros.
>  - remove rx_pkt_burst() from 'struct rte_eth_dev'.
> drivers/net:
>  - adjust to new rx_burst API.
> 

Overall this enables us to hide the ethdev internals, which is good. But it
duplicates most of the datapath function (Rx burst for this patch) for each PMD op.

I wonder if we can have the callbacks ('_rte_eth_rx_epilog()') in a separate
(non-inline) function; this still lets us hide the structs. Of course an
additional function call will bring some overhead, but if callbacks are enabled
and called per packet, do we really care about one more function call?
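
Just to illustrate the idea, a rough (untested) sketch; the exact signature of
the non-inline epilog is my assumption, not something taken from this set:

static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	uint16_t nb_rx;

	nb_rx = rte_eth_burst_api[port_id].rx_pkt_burst(port_id, queue_id,
			rx_pkts, nb_pkts);

	/* real (not inline) function inside ethdev, so the callback structs
	 * can stay hidden and PMDs don't have to duplicate this code */
	return _rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx, nb_pkts);
}

That way the per-PMD wrappers could stay almost empty.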


> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

<...>

> @@ -3229,7 +3289,7 @@ int
>  ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
>  		      struct rte_eth_burst_mode *mode)
>  {
> -	eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
> +	rte_eth_rx_burst_t pkt_burst = rte_eth_get_rx_burst(dev->data->port_id);

Would it make it easier to organise the patchset to have a separate patch that
first switches to 'rte_eth_get_rx_burst()' / 'rte_eth_set_rx_burst()' with the
old implementation ('dev->rx_pkt_burst' get/set), and later just changes the
'rte_eth_get_rx_burst()' / 'rte_eth_set_rx_burst()' implementation when the
structure is updated?

<...>

> --- a/drivers/net/ice/ice_rxtx_vec_sse.c
> +++ b/drivers/net/ice/ice_rxtx_vec_sse.c
> @@ -587,13 +587,15 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>   * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST
>   *   numbers of DD bits
>   */
> -uint16_t
> +static inline uint16_t

These functions will eventually be called via a function pointer, so is there a
benefit to marking them 'inline'? Why not just 'static'?

<...>

> +_RTE_ETH_RX_DEF(ice_recv_scattered_pkts_vec)
> +

This will duplicate most of the Rx burst function for each PMD Rx op.

<...>

> +
> +#define _RTE_ETH_FUNC(fn)	_rte_eth_##fn
> +

Do we need this macro? The functions are still 'static', so they won't be
visible to applications and there won't be a namespace problem.

Dropping it and just using the original function names may reduce the changes
in the drivers.

<...>

> +__rte_experimental
> +rte_eth_rx_burst_t rte_eth_get_rx_burst(uint16_t port_id);
> +
> +__rte_experimental
> +int rte_eth_set_rx_burst(uint16_t port_id, rte_eth_rx_burst_t rxf);

can s/__rte_experimental/__rte_internal/

<...>

> +
> +__rte_experimental
> +rte_eth_rx_burst_t
> +rte_eth_get_rx_burst(uint16_t port_id)
> +{
> +	if (port_id >= RTE_DIM(rte_eth_burst_api)) {
> +		rte_errno = EINVAL;
> +		return NULL;
> +	}
> +	return rte_eth_burst_api[port_id].rx_pkt_burst;
> +}
> +
> +__rte_experimental
> +int
> +rte_eth_set_rx_burst(uint16_t port_id, rte_eth_rx_burst_t rxf)
> +{
> +	if (port_id >= RTE_DIM(rte_eth_burst_api))
> +		return -EINVAL;
> +
> +	rte_eth_burst_api[port_id].rx_pkt_burst = rxf;
> +	return 0;
> +}

Since these are internal functions for drivers, it can be easier for drivers to
use them directly with a 'struct rte_eth_dev *eth_dev', instead of a 'port_id'.

So instead of the APIs getting 'port_id' as a parameter, they could get a 'struct
rte_eth_dev *eth_dev'? Drivers for sure will have an 'eth_dev' reference for their
device.

Overall, I think it makes sense for all public APIs to have the handle ('port_id') as
a parameter, and all driver APIs to have 'eth_dev' as a parameter.
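
I.e. something like (sketch of the suggestion only):

__rte_internal
rte_eth_rx_burst_t rte_eth_get_rx_burst(struct rte_eth_dev *eth_dev);

__rte_internal
int rte_eth_set_rx_burst(struct rte_eth_dev *eth_dev, rte_eth_rx_burst_t rxf);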

<...>

> @@ -4981,44 +4981,11 @@ static inline uint16_t
>  rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>  		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
>  {
> -	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> -	uint16_t nb_rx;
> -
> -#ifdef RTE_ETHDEV_DEBUG_RX
> -	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
> -
> -	if (queue_id >= dev->data->nb_rx_queues) {
> -		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
> +	if (port_id >= RTE_MAX_ETHPORTS)

As an API, it makes sense to validate the input. But I am not sure about adding these
checks as part of this set, as previous versions don't have them. Perhaps we can
add them with a separate patch and discussion.

<...>

> +++ b/lib/ethdev/version.map
> @@ -249,6 +249,11 @@ EXPERIMENTAL {
>  	rte_mtr_meter_policy_delete;
>  	rte_mtr_meter_policy_update;
>  	rte_mtr_meter_policy_validate;
> +
> +	# added in 21.11
> +	rte_eth_burst_api;
> +	rte_eth_get_rx_burst;
> +	rte_eth_set_rx_burst;

I think these APIs are intended to be used only by drivers, so instead of experimental,
they can be added as 'INTERNAL'.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  2021-08-26 12:37 ` [dpdk-dev] [RFC 0/7] " Jerin Jacob
  2021-09-06 18:09   ` Ferruh Yigit
@ 2021-09-14 13:33   ` Ananyev, Konstantin
  2021-09-15  9:45     ` Jerin Jacob
  1 sibling, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-09-14 13:33 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Yang,
	Qiming, Zhang, Qi Z, Xing, Beilei, techboard


Hi Jerin,

> > NOTE: This is just an RFC to start further discussion and collect the feedback.
> > Due to significant amount of work, changes required are applied only to two
> > PMDs so far: net/i40e and net/ice.
> > So to build it you'll need to add:
> > -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> > to your config options.
> 
> >
> > That approach was selected to avoid(/minimize) possible performance losses.
> >
> > So far I done only limited amount functional and performance testing.
> > Didn't spot any functional problems, and performance numbers
> > remains the same before and after the patch on my box (testpmd, macswap fwd).
> 
> 
> Based on testing on octeonxt2. We see some regression in testpmd and
> bit on l3fwd too.
> 
> Without patch: 73.5mpps/core in testpmd iofwd
> With patch: 72.5mpps/core in testpmd iofwd
> 
> Based on my understanding it is due to additional indirection.

From your patch below, it looks like it is not actually additional indirection,
but an extra memory dereference - func and dev pointers are now stored
at different places. Plus the fact that now we dereference rte_eth_devices[]
data inside the PMD function, which probably prevents the compiler and CPU from loading
rte_eth_devices[port_id].data and rte_eth_devices[port_id].pre_tx_burst_cbs[queue_id]
in advance, before calling the actual RX/TX function.
About your approach: I don't mind adding an extra opaque 'void *data' pointer,
but would prefer not to expose the callback invocation code in an inline function.
The main reason for that: I think it still needs to be reworked to allow adding/removing
callbacks without stopping the device - something similar to what was done for cryptodev
callbacks. To be able to do that in future without another ABI breakage, the callback-related
part needs to be kept internal.
Though what we probably can do: add two dynamic arrays of opaque pointers to rte_eth_burst_api -
one for rx/tx queue data pointers, a second for rx/tx callback pointers.
To be more specific, something like:

typedef uint16_t (*rte_eth_rx_burst_t)(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, void *cbs);
typedef uint16_t (*rte_eth_tx_burst_t)(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *cbs);
....

struct rte_eth_burst_api {
	rte_eth_rx_burst_t rx_pkt_burst;
	/**< PMD receive function. */
	rte_eth_tx_burst_t tx_pkt_burst;
	/**< PMD transmit function. */
	rte_eth_tx_prep_t tx_pkt_prepare;
	/**< PMD transmit prepare function. */
	rte_eth_rx_queue_count_t rx_queue_count;
	/**< Get the number of used RX descriptors. */
	rte_eth_rx_descriptor_status_t rx_descriptor_status;
	/**< Check the status of a Rx descriptor. */
	rte_eth_tx_descriptor_status_t tx_descriptor_status;
	/**< Check the status of a Tx descriptor. */
	struct {
		void **queue_data; /* points to rte_eth_devices[port_id].data->rx_queues */
		void **cbs;        /* points to rte_eth_devices[port_id].post_rx_burst_cbs */
	} rx_data, tx_data;
} __rte_cache_aligned;

static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	struct rte_eth_burst_api *p;

	if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT)
		return 0;

	p = &rte_eth_burst_api[port_id];
	return p->rx_pkt_burst(p->rx_data.queue_data[queue_id], rx_pkts, nb_pkts,
			p->rx_data.cbs[queue_id]);
}

Same for TX.
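
I.e. the TX one mirrored from the above (sketch only, exact names TBD):

static inline uint16_t
rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **tx_pkts, const uint16_t nb_pkts)
{
	struct rte_eth_burst_api *p;

	if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT)
		return 0;

	p = &rte_eth_burst_api[port_id];
	return p->tx_pkt_burst(p->tx_data.queue_data[queue_id], tx_pkts, nb_pkts,
			p->tx_data.cbs[queue_id]);
}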

If that looks ok to everyone, I'll try to prepare the next version based on that.
In theory that should avoid the extra dereference problem and even reduce indirection.
As a drawback, data->rxq/txq would always have to be allocated for RTE_MAX_QUEUES_PER_PORT entries,
but I presume that's not a big deal.

As a side question - is there any reason why rte_ethdev_trace_rx_burst() is invoked at the very last point,
while rte_ethdev_trace_tx_burst() is invoked after the CBs but before the actual tx_pkt_burst()?
It would make things simpler if tracing were always done either on entrance or on exit of rx/tx_burst.

> 
> My suggestion to fix the problem by:
> Removing the additional `data` redirection and pull callback function
> pointers back
> and keep rest as opaque as done in the existing patch like [1]
> 
> I don't believe this has any real implication on future ABI stability
> as we will not be adding
> any new item in rte_eth_fp in any way as new features can be added in slowpath
> rte_eth_dev as mentioned in the patch.
> 
> [2] is the patch of doing the same as I don't see any performance
> regression after [2].
> 
> 
> [1]
> - struct rte_eth_burst_api {
> - struct rte_eth_fp {
> + void *data;
>   rte_eth_rx_burst_t rx_pkt_burst;
>   /**< PMD receive function. */
>   rte_eth_tx_burst_t tx_pkt_burst;
> @@ -85,8 +100,19 @@ struct rte_eth_burst_api {
>   /**< Check the status of a Rx descriptor. */
>   rte_eth_tx_descriptor_status_t tx_descriptor_status;
>   /**< Check the status of a Tx descriptor. */
> + /**
> + * User-supplied functions called from rx_burst to post-process
> + * received packets before passing them to the user
> + */
> + struct rte_eth_rxtx_callback
> + *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> + /**
> + * User-supplied functions called from tx_burst to pre-process
> + * received packets before passing them to the driver for transmission.
> + */
> + struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
>   uintptr_t reserved[2];
> -} __rte_cache_min_aligned;
> +} __rte_cache_aligned;
> 
> [2]
> https://pastebin.com/CuqkrCW4

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC 2/7] eth: make drivers to use new API for Rx
  2021-09-06 18:41   ` Ferruh Yigit
@ 2021-09-14 14:28     ` Ananyev, Konstantin
  0 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-09-14 14:28 UTC (permalink / raw)
  To: Yigit, Ferruh, dev
  Cc: thomas, andrew.rybchenko, Yang, Qiming, Zhang, Qi Z, Xing,
	 Beilei, techboard



Hi Ferruh,

> 
> Overall this enables us hiding the ethdev internals, which is good. But it
> duplicates most of the datapath function (rx burst for this patch) per each PMD ops.

Yes, same as right now, where rte_eth_rx/tx_burst() code can be duplicated in a dozen places
inside user-level code - and as with any other 'static inline' function that we define and use inside DPDK.
Personally I don't see why it is a problem.

> 
> I wonder if we can have the callbacks ('_rte_eth_rx_epilog()') as separate
> function, this still enables us to hide the structs. Of course additional
> function call will bring some overhead, but if we enabled callbacks and calling
> them per packet, do we really care about additional function call?

Callbacks are not per packet, but per burst of packets - same as the actual RX/TX.
A drawback with such an approach - we either have to keep
post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT] visible to the user
(which I'd prefer not to), or call the epilog() unconditionally - which means
a performance drop.
 
> 
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
> <...>
> 
> > @@ -3229,7 +3289,7 @@ int
> >  ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
> >  		      struct rte_eth_burst_mode *mode)
> >  {
> > -	eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
> > +	rte_eth_rx_burst_t pkt_burst = rte_eth_get_rx_burst(dev->data->port_id);
> 
> Does it make it easier to organise the patchset to have a separate patch to
> switch first to 'rte_eth_get_rx_burst()' / 'rte_eth_set_rx_burst()' with old
> implementation ('dev->rx_pkt_burst' get/set), and later just change the
> 'rte_eth_get_rx_burst()' / 'rte_eth_set_rx_burst()' implementation when
> structure is updated.

This is doable; I don't know whether there would be any benefit from that or not.

> 
> <...>
> 
> > --- a/drivers/net/ice/ice_rxtx_vec_sse.c
> > +++ b/drivers/net/ice/ice_rxtx_vec_sse.c
> > @@ -587,13 +587,15 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> >   * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST
> >   *   numbers of DD bits
> >   */
> > -uint16_t
> > +static inline uint16_t
> 
> These functions eventually will be called via a function pointer, so is there a
> benefit to request them to 'inline', why not just 'static' ?

Agree.
 
> <...>
> 
> > +_RTE_ETH_RX_DEF(ice_recv_scattered_pkts_vec)
> > +
> 
> This will duplicate most of the Rx burst function for each PMD Rx ops.
> 
> <...>
> 
> > +
> > +#define _RTE_ETH_FUNC(fn)	_rte_eth_##fn
> > +
> 
> Do we need this macro? The functions are still 'static', so they won't be
> visible to application and there won't be a namespace problem.

Not all RX/TX burst functions are defined as 'static'.
 
> Dropping it and just using the original function names may reduce the changes in the
> drivers.

It allows keeping the existing RX/TX functions intact - no need to change the prototype, add prolog/epilog, etc. manually.
Instead, these macros help to create wrapper functions around the existing ones, and these wrappers will
become the new public entry points.
All that should help to make the changes faster and in a safer manner.
Though these macros are just helpers to simplify the transition:
if someone prefers to make the changes in all their RX/TX functions by hand, that is still possible.
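
Roughly, the Rx one could expand to something like below (simplified sketch -
the actual prolog/epilog helpers in the patch may have different signatures):

#define _RTE_ETH_RX_DEF(fn) \
uint16_t \
_RTE_ETH_FUNC(fn)(uint16_t port_id, uint16_t queue_id, \
		struct rte_mbuf **rx_pkts, uint16_t nb_pkts) \
{ \
	uint16_t nb_rx; \
	void *rxq = _rte_eth_rx_prolog(port_id, queue_id); \
	if (rxq == NULL) \
		return 0; \
	nb_rx = fn(rxq, rx_pkts, nb_pkts); \
	return _rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx, nb_pkts); \
}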
 
> <...>
> 
> > +__rte_experimental
> > +rte_eth_rx_burst_t rte_eth_get_rx_burst(uint16_t port_id);
> > +
> > +__rte_experimental
> > +int rte_eth_set_rx_burst(uint16_t port_id, rte_eth_rx_burst_t rxf);
> 
> can s/__rte_experimental/__rte_internal/

OK.
 
> <...>
> 
> > +
> > +__rte_experimental
> > +rte_eth_rx_burst_t
> > +rte_eth_get_rx_burst(uint16_t port_id)
> > +{
> > +	if (port_id >= RTE_DIM(rte_eth_burst_api)) {
> > +		rte_errno = EINVAL;
> > +		return NULL;
> > +	}
> > +	return rte_eth_burst_api[port_id].rx_pkt_burst;
> > +}
> > +
> > +__rte_experimental
> > +int
> > +rte_eth_set_rx_burst(uint16_t port_id, rte_eth_rx_burst_t rxf)
> > +{
> > +	if (port_id >= RTE_DIM(rte_eth_burst_api))
> > +		return -EINVAL;
> > +
> > +	rte_eth_burst_api[port_id].rx_pkt_burst = rxf;
> > +	return 0;
> > +}
> 
> Since these are internal functions for drivers, it can be easier for drivers to
> use directly with 'struct rte_eth_dev *eth_dev', instead of 'port_id'.
> 
> So instead of APIs getting 'port_id' as parameter, they can get 'struct
> rte_eth_dev *eth_dev'? Drivers for sure will have 'eth_dev' references for their
> device.

I am fine either way - it is a control path internal function.
 
> Overall, I think it makes sense for all public APIs to have the handle ('port_id') as
> a parameter, and all driver APIs to have 'eth_dev' as a parameter.
> 
> <...>
> 
> > @@ -4981,44 +4981,11 @@ static inline uint16_t
> >  rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> >  		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> >  {
> > -	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > -	uint16_t nb_rx;
> > -
> > -#ifdef RTE_ETHDEV_DEBUG_RX
> > -	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> > -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
> > -
> > -	if (queue_id >= dev->data->nb_rx_queues) {
> > -		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
> > +	if (port_id >= RTE_MAX_ETHPORTS)
> 
> As an API, it makes sense to validate the input. But not sure to add these
> checks as part of this set, as previous versions don't have it. Perhaps we can
> add them with separate patch and discussion.

You mean 'if (port_id >= RTE_MAX_ETHPORTS)'?
No, this check is not present in the current version.
Obviously, it can be removed, though I think it would be good to have it:
it will help to keep things less error-prone.
I don't think it would really impact the performance; in some cases
the compiler will even be able to optimise out such a check.

> <...>
> 
> > +++ b/lib/ethdev/version.map
> > @@ -249,6 +249,11 @@ EXPERIMENTAL {
> >  	rte_mtr_meter_policy_delete;
> >  	rte_mtr_meter_policy_update;
> >  	rte_mtr_meter_policy_validate;
> > +
> > +	# added in 21.11
> > +	rte_eth_burst_api;
> > +	rte_eth_get_rx_burst;
> > +	rte_eth_set_rx_burst;
> 
> I think these APIs are intended to be used only by drivers, so instead of experimental,
> they can be added as 'INTERNAL'.

OK.


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  2021-09-14 13:33   ` Ananyev, Konstantin
@ 2021-09-15  9:45     ` Jerin Jacob
  2021-09-22 15:08       ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Jerin Jacob @ 2021-09-15  9:45 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dpdk-dev, Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Yang,
	Qiming, Zhang, Qi Z, Xing, Beilei, techboard

On Tue, Sep 14, 2021 at 7:03 PM Ananyev, Konstantin
<konstantin.ananyev@intel.com> wrote:
>
>
> Hi Jerin,
>
> > > NOTE: This is just an RFC to start further discussion and collect the feedback.
> > > Due to significant amount of work, changes required are applied only to two
> > > PMDs so far: net/i40e and net/ice.
> > > So to build it you'll need to add:
> > > -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> > > to your config options.
> >
> > >
> > > That approach was selected to avoid(/minimize) possible performance losses.
> > >
> > > So far I done only limited amount functional and performance testing.
> > > Didn't spot any functional problems, and performance numbers
> > > remains the same before and after the patch on my box (testpmd, macswap fwd).
> >
> >
> > Based on testing on octeonxt2. We see some regression in testpmd and
> > bit on l3fwd too.
> >
> > Without patch: 73.5mpps/core in testpmd iofwd
> > With patch: 72.5mpps/core in testpmd iofwd
> >
> > Based on my understanding it is due to additional indirection.
>
> From your patch below, it looks like not actually additional indirection,
> but extra memory dereference - func and dev pointers are now stored
> at different places.

Yup. I meant the same. We are on the same page.

> Plus the fact that now we dereference rte_eth_devices[]
> data inside PMD function. Which probably prevents compiler and CPU to load
>  rte_eth_devices[port_id].data and rte_eth_devices[port_id]. pre_tx_burst_cbs[queue_id]
> in advance before calling actual RX/TX function.

Yes.

> About your approach: I don’t mind to add extra opaque 'void *data' pointer,
> but would prefer not to expose callback invocations code into inline function.
> Main reason for that - I think it still need to be reworked to allow adding/removing
> callbacks without stopping the device. Something similar to what was done for cryptodev
> callbacks. To be able to do that in future without another ABI breakage callbacks related part
> needs to be kept internal.
> Though what we probably can do: add two dynamic arrays of opaque pointers to  rte_eth_burst_api.
> One for rx/tx queue data pointers, second for rx/tx callback pointers.
> To be more specific, something like:
>
> typedef uint16_t (*rte_eth_rx_burst_t)( void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, void *cbs);
> typedef uint16_t (*rte_eth_tx_burst_t)(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *cbs);
> ....
>
> struct rte_eth_burst_api {
>         rte_eth_rx_burst_t rx_pkt_burst;
>         /**< PMD receive function. */
>         rte_eth_tx_burst_t tx_pkt_burst;
>         /**< PMD transmit function. */
>         rte_eth_tx_prep_t tx_pkt_prepare;
>         /**< PMD transmit prepare function. */
>         rte_eth_rx_queue_count_t rx_queue_count;
>         /**< Get the number of used RX descriptors. */
>         rte_eth_rx_descriptor_status_t rx_descriptor_status;
>         /**< Check the status of a Rx descriptor. */
>         rte_eth_tx_descriptor_status_t tx_descriptor_status;
>         /**< Check the status of a Tx descriptor. */
>         struct {
>                  void **queue_data;   /* point to rte_eth_devices[port_id].data-> rx_queues */
>                  void **cbs;                  /*  points to rte_eth_devices[port_id].post_rx_burst_cbs */
>        } rx_data, tx_data;
> } __rte_cache_aligned;
>
> static inline uint16_t
> rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>                  struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> {
>        struct rte_eth_burst_api *p;
>
>         if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT)
>                 return 0;
>
>       p =  &rte_eth_burst_api[port_id];
>       return p->rx_pkt_burst(p->rx_data.queue_data[queue_id], rx_pkts, nb_pkts, p->rx_data.cbs[queue_id]);



That works.


> }
>
> Same for TX.
>
> If that looks ok to everyone, I'll try to prepare next version based on that.


Looks good to me.

> In theory that should avoid extra dereference problem and even reduce indirection.
> As a drawback data->rxq/txq should always be allocated for RTE_MAX_QUEUES_PER_PORT entries,
> but I presume that’s not a big deal.
>
> As a side question - is there any reason why rte_ethdev_trace_rx_burst() is invoked at very last point,
> while rte_ethdev_trace_tx_burst()  after CBs but before actual tx_pkt_burst()?
> It would make things simpler if tracng would always be done either on entrance or exit of rx/tx_burst.

exit is fine.

>
> >
> > My suggestion to fix the problem by:
> > Removing the additional `data` redirection and pull callback function
> > pointers back
> > and keep rest as opaque as done in the existing patch like [1]
> >
> > I don't believe this has any real implication on future ABI stability
> > as we will not be adding
> > any new item in rte_eth_fp in any way as new features can be added in slowpath
> > rte_eth_dev as mentioned in the patch.

Ack

I will be happy to test again after the rework and report any performance
issues, if any.

Thanks for the great work :-)


> >
> > [2] is the patch of doing the same as I don't see any performance
> > regression after [2].
> >
> >
> > [1]
> > - struct rte_eth_burst_api {
> > - struct rte_eth_fp {
> > + void *data;
> >   rte_eth_rx_burst_t rx_pkt_burst;
> >   /**< PMD receive function. */
> >   rte_eth_tx_burst_t tx_pkt_burst;
> > @@ -85,8 +100,19 @@ struct rte_eth_burst_api {
> >   /**< Check the status of a Rx descriptor. */
> >   rte_eth_tx_descriptor_status_t tx_descriptor_status;
> >   /**< Check the status of a Tx descriptor. */
> > + /**
> > + * User-supplied functions called from rx_burst to post-process
> > + * received packets before passing them to the user
> > + */
> > + struct rte_eth_rxtx_callback
> > + *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> > + /**
> > + * User-supplied functions called from tx_burst to pre-process
> > + * received packets before passing them to the driver for transmission.
> > + */
> > + struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> >   uintptr_t reserved[2];
> > -} __rte_cache_min_aligned;
> > +} __rte_cache_aligned;
> >
> > [2]
> > https://pastebin.com/CuqkrCW4

^ permalink raw reply	[flat|nested] 112+ messages in thread

* [dpdk-dev] [RFC v2 0/5] hide eth dev related structures
  2021-08-20 16:28 [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
                   ` (7 preceding siblings ...)
  2021-08-26 12:37 ` [dpdk-dev] [RFC 0/7] " Jerin Jacob
@ 2021-09-22 14:09 ` Konstantin Ananyev
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 1/5] ethdev: allocate max space for internal queue array Konstantin Ananyev
                     ` (5 more replies)
  8 siblings, 6 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-09-22 14:09 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

The aim of these patch series is to make rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow future possible changes to core ethdev related structures
to be transparent to the user and help to improve ABI/API stability.
Note that the current ethdev API is preserved, but it is a formal ABI break.

The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy the public 'fast' function pointers (rx_pkt_burst(), etc.) and
   the related data pointers from rte_eth_dev into a separate flat array.
   We keep it public to still be able to use inline functions for these
   'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
   Note that apart from the function pointers themselves, each element of this
   flat array also contains two opaque pointers for each ethdev:
   1) a pointer to an array of internal queue data pointers
   2) a pointer to an array of queue callback data pointers.
   Note that exposing this extra information allows us to avoid extra
   changes at the PMD level, plus it should help to avoid possible
   performance degradation.
2. Change the implementation of the 'fast' inline ethdev functions
   (rte_eth_rx_burst(), etc.) to use the new public flat array.
   While it is an ABI breakage, this change is intended to be transparent
   for both users (no changes in user apps are required) and PMD developers
   (no changes in PMDs are required).
   One extra note - with the new implementation, RX/TX callback invocation
   will cost one extra function call (see the sketch after this list).
   That might cause some slowdown for code paths with RX/TX callbacks
   heavily involved.
   Hope such a tradeoff is acceptable for the community.
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
   things into internal header: <ethdev_driver.h>.
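
Just to illustrate point 2, with this approach rte_eth_rx_burst() ends up
looking roughly like below (simplified sketch; field and helper names follow
the earlier RFC discussion, the callback helper name is a placeholder, and
the actual patch may differ):

static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	uint16_t nb_rx;
	struct rte_eth_burst_api *p = &rte_eth_burst_api[port_id];

	/* PMD Rx function keeps its original (void *rxq, ...) prototype */
	nb_rx = p->rx_pkt_burst(p->rx_data.queue_data[queue_id],
			rx_pkts, nb_pkts);

	/* callbacks, when present, now cost one real (non-inlined) call */
	if (p->rx_data.cbs[queue_id] != NULL)
		nb_rx = __rte_eth_rx_run_callbacks(port_id, queue_id,
				rx_pkts, nb_rx, nb_pkts);

	return nb_rx;
}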

That approach was selected to:
  - Avoid(/minimize) possible performance losses.
  - Minimize required changes inside PMDs.
 
Performance testing results (ICX 2.0GHz):
 - testpmd macswap fwd mode, plus
   a) no RX/TX callbacks:
      performance numbers remain the same before and after the patch
   b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
      ~2% slowdown

Would like to thank Ferruh and Jerin for reviewing and testing the previous
version of this RFC.
All interested parties, please provide your feedback for v2.
If there are no major objections, I plan to submit a proper v3
patch in the next few days.

Konstantin Ananyev (5):
  ethdev: allocate max space for internal queue array
  ethdev: change input parameters for rx_queue_count
  ethdev: copy ethdev 'burst' API into separate structure
  ethdev: make burst functions to use new flat array
  ethdev: hide eth dev related structures

 app/test-pmd/config.c                         |  23 +-
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/ark/ark_ethdev_rx.c               |   4 +-
 drivers/net/ark/ark_ethdev_rx.h               |   3 +-
 drivers/net/atlantic/atl_ethdev.h             |   2 +-
 drivers/net/atlantic/atl_rxtx.c               |   9 +-
 drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                |   9 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   9 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/e1000/e1000_ethdev.h              |   6 +-
 drivers/net/e1000/em_rxtx.c                   |   4 +-
 drivers/net/e1000/igb_rxtx.c                  |   4 +-
 drivers/net/enic/enic_ethdev.c                |  12 +-
 drivers/net/fm10k/fm10k.h                     |   2 +-
 drivers/net/fm10k/fm10k_rxtx.c                |   4 +-
 drivers/net/hns3/hns3_rxtx.c                  |   7 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/i40e/i40e_rxtx.c                  |   4 +-
 drivers/net/i40e/i40e_rxtx.h                  |   3 +-
 drivers/net/iavf/iavf_rxtx.c                  |   4 +-
 drivers/net/iavf/iavf_rxtx.h                  |   2 +-
 drivers/net/ice/ice_rxtx.c                    |   4 +-
 drivers/net/ice/ice_rxtx.h                    |   2 +-
 drivers/net/igc/igc_txrx.c                    |   5 +-
 drivers/net/igc/igc_txrx.h                    |   3 +-
 drivers/net/ixgbe/ixgbe_ethdev.h              |   3 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |   4 +-
 drivers/net/mlx5/mlx5_rx.c                    |  26 +-
 drivers/net/mlx5/mlx5_rx.h                    |   2 +-
 drivers/net/netvsc/hn_rxtx.c                  |   4 +-
 drivers/net/netvsc/hn_var.h                   |   3 +-
 drivers/net/nfp/nfp_rxtx.c                    |   4 +-
 drivers/net/nfp/nfp_rxtx.h                    |   3 +-
 drivers/net/octeontx2/otx2_ethdev.h           |   2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |   8 +-
 drivers/net/sfc/sfc_ethdev.c                  |  12 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   3 +-
 drivers/net/thunderx/nicvf_rxtx.c             |   4 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.h              |   3 +-
 drivers/net/txgbe/txgbe_rxtx.c                |   4 +-
 drivers/net/vhost/rte_eth_vhost.c             |   4 +-
 lib/ethdev/ethdev_driver.h                    | 152 +++++++++
 lib/ethdev/ethdev_private.c                   |  84 +++++
 lib/ethdev/ethdev_private.h                   |   7 +
 lib/ethdev/rte_ethdev.c                       |  78 +++--
 lib/ethdev/rte_ethdev.h                       | 288 ++++++++++++------
 lib/ethdev/rte_ethdev_core.h                  | 168 +++-------
 lib/ethdev/version.map                        |   6 +
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 55 files changed, 643 insertions(+), 380 deletions(-)

-- 
2.26.3


^ permalink raw reply	[flat|nested] 112+ messages in thread

* [dpdk-dev] [RFC v2 1/5] ethdev: allocate max space for internal queue array
  2021-09-22 14:09 ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
@ 2021-09-22 14:09   ` Konstantin Ananyev
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-09-22 14:09 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

At the queue configure stage always allocate space for the maximum possible
number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer to
internal queue data pointers without extra checking of the current number
of configured queues.
That would help in future to hide rte_eth_dev and related structures.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
 1 file changed, 9 insertions(+), 27 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca9242..424bc260fa 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -898,7 +898,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
-				sizeof(dev->data->rx_queues[0]) * nb_queues,
+				sizeof(dev->data->rx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
 				RTE_CACHE_LINE_SIZE);
 		if (dev->data->rx_queues == NULL) {
 			dev->data->nb_rx_queues = 0;
@@ -909,21 +910,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		rxq = dev->data->rx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->rx_queue_release)(rxq[i]);
-		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE);
-		if (rxq == NULL)
-			return -(ENOMEM);
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(rxq + old_nb_queues, 0,
-				sizeof(rxq[0]) * new_qs);
+			rxq[i] = NULL;
 		}
 
-		dev->data->rx_queues = rxq;
-
 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
@@ -1138,8 +1129,9 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
-						   sizeof(dev->data->tx_queues[0]) * nb_queues,
-						   RTE_CACHE_LINE_SIZE);
+				sizeof(dev->data->tx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
+				RTE_CACHE_LINE_SIZE);
 		if (dev->data->tx_queues == NULL) {
 			dev->data->nb_tx_queues = 0;
 			return -(ENOMEM);
@@ -1149,21 +1141,11 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		txq = dev->data->tx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->tx_queue_release)(txq[i]);
-		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
-				  RTE_CACHE_LINE_SIZE);
-		if (txq == NULL)
-			return -ENOMEM;
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(txq + old_nb_queues, 0,
-			       sizeof(txq[0]) * new_qs);
+			txq[i] = NULL;
 		}
 
-		dev->data->tx_queues = txq;
-
 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
 
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count
  2021-09-22 14:09 ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 1/5] ethdev: allocate max space for internal queue array Konstantin Ananyev
@ 2021-09-22 14:09   ` Konstantin Ananyev
  2021-09-23  5:51     ` Wang, Haiyue
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure Konstantin Ananyev
                     ` (3 subsequent siblings)
  5 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-09-22 14:09 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Currently the majority of 'fast' ethdev ops take pointers to internal
queue data structures as an input parameter,
while eth_rx_queue_count() takes a pointer to rte_eth_dev and a queue
index.
For future work to hide rte_eth_devices[] and friends it would be
plausible to unify the parameter list of all 'fast' ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as an input parameter.
This is an API and ABI breakage.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 drivers/net/ark/ark_ethdev_rx.c         |  4 ++--
 drivers/net/ark/ark_ethdev_rx.h         |  3 +--
 drivers/net/atlantic/atl_ethdev.h       |  2 +-
 drivers/net/atlantic/atl_rxtx.c         |  9 ++-------
 drivers/net/bnxt/bnxt_ethdev.c          |  8 +++++---
 drivers/net/dpaa/dpaa_ethdev.c          |  9 ++++-----
 drivers/net/dpaa2/dpaa2_ethdev.c        |  9 ++++-----
 drivers/net/e1000/e1000_ethdev.h        |  6 ++----
 drivers/net/e1000/em_rxtx.c             |  4 ++--
 drivers/net/e1000/igb_rxtx.c            |  4 ++--
 drivers/net/enic/enic_ethdev.c          | 12 ++++++------
 drivers/net/fm10k/fm10k.h               |  2 +-
 drivers/net/fm10k/fm10k_rxtx.c          |  4 ++--
 drivers/net/hns3/hns3_rxtx.c            |  7 +++++--
 drivers/net/hns3/hns3_rxtx.h            |  2 +-
 drivers/net/i40e/i40e_rxtx.c            |  4 ++--
 drivers/net/i40e/i40e_rxtx.h            |  3 +--
 drivers/net/iavf/iavf_rxtx.c            |  4 ++--
 drivers/net/iavf/iavf_rxtx.h            |  2 +-
 drivers/net/ice/ice_rxtx.c              |  4 ++--
 drivers/net/ice/ice_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c              |  5 ++---
 drivers/net/igc/igc_txrx.h              |  3 +--
 drivers/net/ixgbe/ixgbe_ethdev.h        |  3 +--
 drivers/net/ixgbe/ixgbe_rxtx.c          |  4 ++--
 drivers/net/mlx5/mlx5_rx.c              | 26 ++++++++++++-------------
 drivers/net/mlx5/mlx5_rx.h              |  2 +-
 drivers/net/netvsc/hn_rxtx.c            |  4 ++--
 drivers/net/netvsc/hn_var.h             |  2 +-
 drivers/net/nfp/nfp_rxtx.c              |  4 ++--
 drivers/net/nfp/nfp_rxtx.h              |  3 +--
 drivers/net/octeontx2/otx2_ethdev.h     |  2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c |  8 ++++----
 drivers/net/sfc/sfc_ethdev.c            | 12 ++++++------
 drivers/net/thunderx/nicvf_ethdev.c     |  3 +--
 drivers/net/thunderx/nicvf_rxtx.c       |  4 ++--
 drivers/net/thunderx/nicvf_rxtx.h       |  2 +-
 drivers/net/txgbe/txgbe_ethdev.h        |  3 +--
 drivers/net/txgbe/txgbe_rxtx.c          |  4 ++--
 drivers/net/vhost/rte_eth_vhost.c       |  4 ++--
 lib/ethdev/rte_ethdev.h                 |  2 +-
 lib/ethdev/rte_ethdev_core.h            |  3 +--
 42 files changed, 97 insertions(+), 110 deletions(-)

diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
 }
 
 uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
 {
 	struct ark_rx_queue *queue;
 
-	queue = dev->data->rx_queues[queue_id];
+	queue = rx_queue;
 	return (queue->prod_index - queue->cons_index);	/* mod arith */
 }
 
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			       unsigned int socket_id,
 			       const struct rte_eth_rxconf *rx_conf,
 			       struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
-				    uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
 int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c..e808460520 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
 
 int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..35bb13044e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 /* Return Rx queue avail count */
 
 uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
 {
 	struct atl_rx_queue *rxq;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
-		return 0;
-	}
-
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 
 	if (rxq == NULL)
 		return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 097dd10de9..e07242e961 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3130,20 +3130,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
 }
 
 static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
 {
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt *bp;
 	struct bnxt_cp_ring_info *cpr;
 	uint32_t desc = 0, raw_cons, cp_ring_size;
 	struct bnxt_rx_queue *rxq;
 	struct rx_pkt_cmpl *rxcmp;
 	int rc;
 
+	rxq = rx_queue;
+	bp = rxq->bp;
+
 	rc = is_bnxt_in_error(bp);
 	if (rc)
 		return rc;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
 	cpr = rxq->cp_ring;
 	raw_cons = cpr->cp_raw_cons;
 	cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..b5589300c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 }
 
 static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-	struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+	struct qman_fq *rxq = rx_queue;
 	u32 frm_cnt = 0;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
-		DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
-			       rx_queue_id, frm_cnt);
+		DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+			       rx_queue, frm_cnt);
 	}
 	return frm_cnt;
 }
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..b295af2a57 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1011,10 +1011,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused)
 }
 
 static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
 {
 	int32_t ret;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct qbman_swp *swp;
 	struct qbman_fq_query_np_rslt state;
@@ -1031,12 +1030,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = rx_queue;
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
-				rx_queue_id, frame_cnt);
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+				rx_queue, frame_cnt);
 	}
 	return frame_cnt;
 }
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6..460e130a83 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
 
 int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
@@ -476,8 +475,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
 
 int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..40de36cb20 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
 {
 #define EM_RXQ_SCAN_INTERVAL 4
 	volatile struct e1000_rx_desc *rxdp;
 	struct em_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712..3210a0e008 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
 {
 #define IGB_RXQ_SCAN_INTERVAL 4
 	volatile union e1000_adv_rx_desc *rxdp;
 	struct igb_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..5b2d60ad9c 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq)
 	enic_free_rq(rxq);
 }
 
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
-					   uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
 {
-	struct enic *enic = pmd_priv(dev);
+	struct enic *enic;
+	struct vnic_rq *sop_rq;
 	uint32_t queue_count = 0;
 	struct vnic_cq *cq;
 	uint32_t cq_tail;
 	uint16_t cq_idx;
-	int rq_num;
 
-	rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
-	cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+	sop_rq = rx_queue;
+	enic = vnic_dev_priv(sop_rq->vdev);
+	cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
 	cq_idx = cq->to_clean;
 
 	cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc..648d12a1b4 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
 
 int
 fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..eab798e52c 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 }
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
 {
 #define FM10K_RXQ_SCAN_INTERVAL 4
 	volatile union fm10k_rx_desc *rxdp;
 	struct fm10k_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->hw_ring[rxq->next_dd];
 	while ((desc < rxq->nb_desc) &&
 		rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..04791ae7d0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 }
 
 uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
 {
 	/*
 	 * Number of BDs that have been processed by the driver
@@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	 */
 	uint32_t driver_hold_bd_num;
 	struct hns3_rx_queue *rxq;
+	const struct rte_eth_dev *dev;
 	uint32_t fbd_num;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
+	dev = &rte_eth_devices[rxq->port_id];
+
 	fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
 	if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
 	    dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..34a028701f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			struct rte_mempool *mp);
 int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
 int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 3eb82578b0..5493ae6bba 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2117,14 +2117,14 @@ i40e_dev_rx_queue_release(void *rxq)
 }
 
 uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
 {
 #define I40E_RXQ_SCAN_INTERVAL 4
 	volatile union i40e_rx_desc *rxdp;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 	while ((desc < rxq->nb_rx_desc) &&
 		((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e8..a08b80f020 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
-				 uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
 int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6de8ad3fe3..a08c2c6cf4 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2793,14 +2793,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 
 /* Get the number of used descriptors of a rx queue */
 uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
 {
 #define IAVF_RXQ_SCAN_INTERVAL 4
 	volatile union iavf_rx_desc *rxdp;
 	struct iavf_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..2f7bec2b63 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_rxq_info *qinfo);
 void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
 int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047..61936b0ab1 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1427,14 +1427,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
 {
 #define ICE_RXQ_SCAN_INTERVAL 4
 	volatile union ice_rx_flex_desc *rxdp;
 	struct ice_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 	while ((desc < rxq->nb_rx_desc) &&
 	       rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index b10db0874d..b45abec91a 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -222,7 +222,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 void ice_set_tx_function_flag(struct rte_eth_dev *dev,
 			      struct ice_tx_queue *txq);
 void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd2..437992ecdf 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq)
 		igc_rx_queue_release(rxq);
 }
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
 {
 	/**
 	 * Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
 	struct igc_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index f2b2d75bbc..b0c4b3ebd9 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
 
 int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca24..c5027be1dc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -602,8 +602,7 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
 
 int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755..1f802851e3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define IXGBE_RXQ_SCAN_INTERVAL 4
 	volatile union ixgbe_adv_rx_desc *rxdp;
 	struct ixgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 /**
  * DPDK callback to get the number of used descriptors in a RX queue.
  *
- * @param dev
- *   Pointer to the device structure.
- *
- * @param rx_queue_id
- *   The Rx queue.
+ * @param rx_queue
+ *   The Rx queue pointer.
  *
  * @return
  *   The number of used rx descriptor.
  *   -EINVAL if the queue is invalid
  */
 uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_data *rxq = rx_queue;
+	struct rte_eth_dev *dev;
+
+	if (!rxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	dev = &rte_eth_devices[rxq->port_id];
 
 	if (dev->rx_pkt_burst == NULL ||
 	    dev->rx_pkt_burst == removed_rx_burst) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	rxq = (*priv->rxqs)[rx_queue_id];
-	if (!rxq) {
-		rte_errno = EINVAL;
-		return -rte_errno;
-	}
+
 	return rx_queue_count(rxq);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..5e4ac7324d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
 uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
 			  uint16_t pkts_n);
 int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
 void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		       struct rte_eth_rxq_info *qinfo);
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..30aac371c8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg)
  * For this device that means how many packets are pending in the ring.
  */
 uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
 {
-	struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+	struct hn_rx_queue *rxq = rx_queue;
 
 	return rte_ring_count(rxq->rx_ring);
 }
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 43642408bc..2a2bac9338 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int	hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 void	hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_rxq_info *qinfo);
 void	hn_dev_rx_queue_release(void *arg);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
 int	hn_dev_rx_queue_status(void *rxq, uint16_t offset);
 void	hn_dev_free_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..4b2ac4cc43 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 }
 
 uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
 	uint32_t idx;
 	uint32_t count;
 
-	rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index b0a8bf81b0..0fd50a6c22 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
 } __rte_aligned(64);
 
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
-				       uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 void nfp_net_rx_queue_release(void *rxq);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30b..6696db6f6f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d..e6f8e5bfc1 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
 }
 
 uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
 {
-	struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
-	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_eth_rxq *rxq = rx_queue;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
 	uint32_t head, tail;
 
-	nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+	nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
 	return (tail - head) % rxq->qlen;
 }
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..4b5713f3ec 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
 {
-	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
-	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
-	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_dp_rxq *dp_rxq = rx_queue;
+	const struct sfc_dp_rx *dp_rx;
 	struct sfc_rxq_info *rxq_info;
 
-	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+	rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
 
-	return sap->dp_rx->qdesc_npending(rxq_info->dp);
+	return dp_rx->qdesc_npending(dp_rxq);
 }
 
 /*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..0e87620e42 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
 	if (dev->rx_pkt_burst == NULL)
 		return;
 
-	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
-				nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
 		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
 					NICVF_MAX_RX_FREE_THRESH);
 		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
 }
 
 uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
 {
 	struct nicvf_rxq *rxq;
 
-	rxq = dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
 
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
 	*(uint64_t *)(&pkt->rearm_data) = init.value;
 }
 
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
 uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..569cd6a48f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -446,8 +446,7 @@ int  txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
 
 int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..2a7cfdeedb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define TXGBE_RXQ_SCAN_INTERVAL 4
 	volatile struct txgbe_rx_desc *rxdp;
 	struct txgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..f2b3f142d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
 {
 	struct vhost_queue *vq;
 
-	vq = dev->data->rx_queues[rx_queue_id];
+	vq = rx_queue;
 	if (vq == NULL)
 		return 0;
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index bef24173cf..73b89fb2f0 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5028,7 +5028,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	    dev->data->rx_queues[queue_id] == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev, queue_id);
+	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
 }
 
 /**
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..00f27c643a 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
 
 
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
-					 uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */
 
 typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
  2021-09-22 14:09 ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 1/5] ethdev: allocate max space for internal queue array Konstantin Ananyev
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-09-22 14:09   ` Konstantin Ananyev
  2021-09-23  5:58     ` Wang, Haiyue
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 4/5] ethdev: make burst functions to use new flat array Konstantin Ananyev
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-09-22 14:09 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Copy public function pointers (rx_pkt_burst(), etc.) and related
pointers to internal data from the rte_eth_dev structure into a separate
flat array. We can keep it public to still use inline functions for
'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
The intention is to make rte_eth_dev and related structures internal.
That should allow possible future changes to core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
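
As a rough usage sketch (not part of this patch; the helper name is
made up, while the array and field names come from the structure added
below), a fast-path wrapper is expected to fetch all per-port data with
a single flat-array lookup:

	static inline uint16_t
	example_rx(uint16_t port_id, uint16_t queue_id,
		   struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
	{
		/* one cache-aligned entry per port, no rte_eth_devices[] access */
		const struct rte_eth_burst_api *p = &rte_eth_burst_api[port_id];

		return p->rx_pkt_burst(p->rxq.data[queue_id], rx_pkts, nb_pkts);
	}

The actual rework of rte_eth_rx_burst()/rte_eth_tx_burst() along these
lines is done in the next patch of the series.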

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c  | 53 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_private.h  |  7 +++++
 lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
 lib/ethdev/rte_ethdev_core.h | 45 ++++++++++++++++++++++++++++++
 4 files changed, 122 insertions(+)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..a1683da77b 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,56 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
 		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
 	return str == NULL ? -1 : 0;
 }
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+		__rte_unused struct rte_mbuf **rx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_eth_burst_api dummy_api = {
+		.rx_pkt_burst = dummy_eth_rx_burst,
+		.tx_pkt_burst = dummy_eth_tx_burst,
+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
+		.txq = {.data = dummy_data, .clbk = dummy_data,},
+	};
+
+	*rba = dummy_api;
+}
+
+void
+eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
+		const struct rte_eth_dev *dev)
+{
+	rba->rx_pkt_burst = dev->rx_pkt_burst;
+	rba->tx_pkt_burst = dev->tx_pkt_burst;
+	rba->tx_pkt_prepare = dev->tx_pkt_prepare;
+	rba->rx_queue_count = dev->rx_queue_count;
+	rba->rx_descriptor_status = dev->rx_descriptor_status;
+	rba->tx_descriptor_status = dev->tx_descriptor_status;
+
+	rba->rxq.data = dev->data->rx_queues;
+	rba->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+	rba->txq.data = dev->data->tx_queues;
+	rba->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
+
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 9bb0879538..54921f4860 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -30,6 +30,13 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
 /* Parse devargs value for representor parameter. */
 int rte_eth_devargs_parse_representor_ports(char *str, void *data);
 
+/* reset eth 'burst' API to dummy values */
+void eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
+
+/* setup eth 'burst' API to ethdev values */
+void eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
+		const struct rte_eth_dev *dev);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 424bc260fa..5904bb7bae 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
+/* public 'fast/burst' API */
+struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
+
 /* spinlock for eth device callbacks */
 static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 
+	/* expose selection of PMD rx/tx function */
+	eth_dev_burst_api_setup(rte_eth_burst_api + port_id, dev);
+
 	rte_ethdev_trace_start(port_id);
 	return 0;
 }
@@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
 		return 0;
 	}
 
+	/* point rx/tx functions to dummy ones */
+	eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
+
 	dev->data->dev_started = 0;
 	ret = (*dev->dev_ops->dev_stop)(dev);
 	rte_ethdev_trace_stop(port_id, ret);
@@ -4568,6 +4577,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
 	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
 }
 
+RTE_INIT(eth_dev_init_burst_api)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_eth_burst_api); i++)
+		eth_dev_burst_api_reset(rte_eth_burst_api + i);
+}
+
 RTE_INIT(eth_dev_init_cb_lists)
 {
 	uint16_t i;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 00f27c643a..da6de5de43 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 /**< @internal Check the status of a Tx descriptor */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev RX/TX
+ * queue data.
+ * The main purpose of exposing these pointers at all is to allow the
+ * compiler to fetch this data for 'fast' ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+	void **data;
+	/**< points to array of internal queue data pointers */
+	void **clbk;
+	/**< points to array of queue callback data pointers */
+};
+
+/**
+ * @internal
+ * 'fast' ethdev functions and related data are held in a flat array,
+ * one entry per ethdev.
+ */
+struct rte_eth_burst_api {
+
+	/** first 64B line */
+	eth_rx_burst_t rx_pkt_burst;
+	/**< PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst;
+	/**< PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+	uintptr_t reserved[2];
+
+	/** second 64B line */
+	struct rte_ethdev_qdata rxq;
+	struct rte_ethdev_qdata txq;
+	uintptr_t reserved2[4];
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
+
 
 /**
  * @internal
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [RFC v2 4/5] ethdev: make burst functions to use new flat array
  2021-09-22 14:09 ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
                     ` (2 preceding siblings ...)
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure Konstantin Ananyev
@ 2021-09-22 14:09   ` Konstantin Ananyev
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 5/5] ethdev: hide eth dev related structures Konstantin Ananyev
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
  5 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-09-22 14:09 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Rework 'fast' burst functions to use rte_eth_burst_api[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user apps are required) and
PMD developers (no changes in PMDs are required).
One extra thing to note: RX/TX callback invocation will cause an extra
function call with this change. That might cause some insignificant
slowdown for code paths where RX/TX callbacks are heavily involved.
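
To illustrate the affected code path, here is a usage sketch only (the
callback and counter below are hypothetical; rte_eth_add_rx_callback()
and the rte_rx_callback_fn signature are the existing ethdev API): an
RX callback registered like this is now invoked through the
out-of-line __rte_eth_rx_epilog() helper instead of the loop that was
previously expanded inline in rte_eth_rx_burst():

	/* hypothetical callback counting received packets */
	static uint16_t
	count_rx_cb(uint16_t port_id __rte_unused, uint16_t queue_id __rte_unused,
		    struct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,
		    uint16_t max_pkts __rte_unused, void *user_param)
	{
		*(uint64_t *)user_param += nb_rx;
		return nb_rx;
	}

	static uint64_t total_rx;
	...
	rte_eth_add_rx_callback(port_id, queue_id, count_rx_cb, &total_rx);

Neither the callback nor its registration changes; only the invocation
point moves out of line.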

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c |  31 +++++
 lib/ethdev/rte_ethdev.h     | 244 ++++++++++++++++++++++++++----------
 lib/ethdev/version.map      |   5 +
 3 files changed, 212 insertions(+), 68 deletions(-)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index a1683da77b..b46e077139 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -227,3 +227,34 @@ eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
 	rba->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
 }
 
+uint16_t
+__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+	void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+		cb = cb->next;
+	}
+
+	return nb_rx;
+}
+
+uint16_t
+__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+				cb->param);
+		cb = cb->next;
+	}
+
+	return nb_pkts;
+}
+
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 73b89fb2f0..58ee983b03 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4872,6 +4872,34 @@ int rte_eth_representor_info_get(uint16_t port_id,
 
 #include <rte_ethdev_core.h>
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called at exit from the rte_eth_rx_burst() implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *   The address of an array of pointers to *rte_mbuf* structures that
+ *   have been retrieved from the device.
+ * @param nb_rx
+ *   The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *   The number of elements in *rx_pkts* array.
+ * @param opaque
+ *   Opaque pointer of RX queue callback related data.
+ *
+ * @return
+ *  The number of packets effectively supplied to the *rx_pkts* array.
+ */
+__rte_experimental
+uint16_t __rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+		void *opaque);
+
 /**
  *
  * Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4963,23 +4991,37 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	uint16_t nb_rx;
+	struct rte_eth_burst_api *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->rxq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-				     rx_pkts, nb_pkts);
+
+	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -4987,16 +5029,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-						nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_rx = __rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb);
 #endif
 
 	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
@@ -5019,16 +5055,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_burst_api *p;
+	void *qd;
+
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->rxq.data[queue_id];
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-	    dev->data->rx_queues[queue_id] == NULL)
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+	if (qd == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+	return (int)(*p->rx_queue_count)(qd);
 }
 
 /**
@@ -5097,21 +5144,30 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	struct rte_eth_burst_api *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->rxq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
-
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+	return (*p->rx_descriptor_status)(qd, offset);
 }
 
 #define RTE_ETH_TX_DESC_FULL    0 /**< Desc filled for hw, waiting xmit. */
@@ -5154,23 +5210,55 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
 	uint16_t queue_id, uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	struct rte_eth_burst_api *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->txq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
-
-	return (*dev->tx_descriptor_status)(txq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+	return (*p->tx_descriptor_status)(qd, offset);
 }
 
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called before invoking the PMD's transmit function.
+ * Does necessary pre-processing - invokes TX callbacks if any, etc.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue through which output packets must be
+ *   sent.
+ * @param tx_pkts
+ *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *   which contain the output packets.
+ * @param nb_pkts
+ *   The maximum number of packets to transmit.
+ * @return
+ *   The number of output packets to transmit.
+ */
+__rte_experimental
+uint16_t __rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
 /**
  * Send a burst of output packets on a transmit queue of an Ethernet device.
  *
@@ -5241,20 +5329,34 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_burst_api *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5262,21 +5364,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_pkts = __rte_eth_tx_prolog(port_id, queue_id, tx_pkts,
+				nb_pkts, cb);
 #endif
 
-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+	return nb_pkts;
 }
 
 /**
@@ -5339,31 +5436,42 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_burst_api *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
 		rte_errno = ENODEV;
 		return 0;
 	}
 #endif
 
-	dev = &rte_eth_devices[port_id];
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 #endif
 
-	if (!dev->tx_pkt_prepare)
+	if (!p->tx_pkt_prepare)
 		return nb_pkts;
 
-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
-			tx_pkts, nb_pkts);
+	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
 }
 
 #else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..65444f9a99 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -247,11 +247,16 @@ EXPERIMENTAL {
 	rte_mtr_meter_policy_delete;
 	rte_mtr_meter_policy_update;
 	rte_mtr_meter_policy_validate;
+
+	# added in 21.05
+	__rte_eth_rx_epilog;
+	__rte_eth_tx_prolog;
 };
 
 INTERNAL {
 	global:
 
+	rte_eth_burst_api;
 	rte_eth_dev_allocate;
 	rte_eth_dev_allocated;
 	rte_eth_dev_attach_secondary;
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [RFC v2 5/5] ethdev: hide eth dev related structures
  2021-09-22 14:09 ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
                     ` (3 preceding siblings ...)
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 4/5] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-09-22 14:09   ` Konstantin Ananyev
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
  5 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-09-22 14:09 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into the private header (ethdev_driver.h).
Make the changes needed to keep DPDK building after that.
Remove references to 'rte_eth_devices[]' from test-pmd.
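
For reference, a short usage sketch (not taken from this patch) of the
new rte_eth_macaddrs_get(), which replaces direct access to
dev->data->mac_addrs in applications; the buffer sizing follows what
the test-pmd change below does:

	struct rte_eth_dev_info dev_info;
	int i, n;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return;

	struct rte_ether_addr macs[dev_info.max_mac_addrs];

	n = rte_eth_macaddrs_get(port_id, macs, dev_info.max_mac_addrs);
	for (i = 0; i < n; i++) {
		if (rte_is_zero_ether_addr(&macs[i]))
			continue;
		/* macs[i] holds a valid device MAC address */
	}

Driver-internal code that still needs struct rte_eth_dev now includes
<ethdev_driver.h> instead of <rte_ethdev.h>, as the driver changes
below show.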

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test-pmd/config.c                         |  23 ++-
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/netvsc/hn_var.h                   |   1 +
 lib/ethdev/ethdev_driver.h                    | 152 ++++++++++++++++++
 lib/ethdev/rte_ethdev.c                       |  25 +++
 lib/ethdev/rte_ethdev.h                       |  44 +++--
 lib/ethdev/rte_ethdev_core.h                  | 144 -----------------
 lib/ethdev/version.map                        |   1 +
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 14 files changed, 216 insertions(+), 188 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f5765b34f7..11060bad12 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5213,20 +5213,20 @@ show_macs(portid_t port_id)
 {
 	char buf[RTE_ETHER_ADDR_FMT_SIZE];
 	struct rte_eth_dev_info dev_info;
-	struct rte_ether_addr *addr;
-	uint32_t i, num_macs = 0;
-	struct rte_eth_dev *dev;
-
-	dev = &rte_eth_devices[port_id];
+	int32_t i, rc, num_macs = 0;
 
 	if (eth_dev_info_get_print_err(port_id, &dev_info))
 		return;
 
-	for (i = 0; i < dev_info.max_mac_addrs; i++) {
-		addr = &dev->data->mac_addrs[i];
+	struct rte_ether_addr addr[dev_info.max_mac_addrs];
+	rc = rte_eth_macaddrs_get(port_id, addr, dev_info.max_mac_addrs);
+	if (rc < 0)
+		return;
+
+	for (i = 0; i < rc; i++) {
 
 		/* skip zero address */
-		if (rte_is_zero_ether_addr(addr))
+		if (rte_is_zero_ether_addr(&addr[i]))
 			continue;
 
 		num_macs++;
@@ -5234,14 +5234,13 @@ show_macs(portid_t port_id)
 
 	printf("Number of MAC address added: %d\n", num_macs);
 
-	for (i = 0; i < dev_info.max_mac_addrs; i++) {
-		addr = &dev->data->mac_addrs[i];
+	for (i = 0; i < rc; i++) {
 
 		/* skip zero address */
-		if (rte_is_zero_ether_addr(addr))
+		if (rte_is_zero_ether_addr(&addr[i]))
 			continue;
 
-		rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, addr);
+		rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, &addr[i]);
 		printf("  %s\n", buf);
 	}
 }
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
 
 #include <rte_atomic.h>
 #include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_spinlock.h>
 
 #include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 09ddbb5f34..723804347f 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
 
 #include <cryptodev_pmd.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_event_crypto_adapter.h>
 
 #include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
 #include <rte_mbuf.h>
 #include <rte_io.h>
 #include <rte_rwlock.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "../cxgbe_compat.h"
 #include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
 #include <unistd.h>
 #include <stdarg.h>
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_eth_ctrl.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2a2bac9338..74e6e6010d 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
  */
 
 #include <rte_eal_paging.h>
+#include <ethdev_driver.h>
 
 /*
  * Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 40e474aa7e..f841368b58 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -21,6 +21,158 @@
 extern "C" {
 #endif
 
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+	struct rte_eth_rxtx_callback *next;
+	union{
+		rte_rx_callback_fn rx;
+		rte_tx_callback_fn tx;
+	} fn;
+	void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< Pointer to PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_done_t rx_descriptor_done;
+	/**< Check rxd DD bit. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+
+	/**
+	 * Next two fields are per-device data but *data is shared between
+	 * primary and secondary processes and *process_private is per-process
+	 * private. The second one is managed by PMDs if necessary.
+	 */
+	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
+	void *process_private; /**< Pointer to per-process device data. */
+	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	struct rte_device *device; /**< Backing device */
+	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+	/** User application callbacks for NIC interrupts */
+	struct rte_eth_dev_cb_list link_intr_cbs;
+	/**
+	 * User-supplied functions called from rx_burst to post-process
+	 * received packets before passing them to the user
+	 */
+	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	/**
+	 * User-supplied functions called from tx_burst to pre-process
+	 * received packets before passing them to the driver for transmission.
+	 */
+	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	enum rte_eth_dev_state state; /**< Flag indicating the port state */
+	void *security_ctx; /**< Context for security ops */
+
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+	void **rx_queues; /**< Array of pointers to RX queues. */
+	void **tx_queues; /**< Array of pointers to TX queues. */
+	uint16_t nb_rx_queues; /**< Number of RX queues. */
+	uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
+
+	void *dev_private;
+			/**< PMD-specific private data.
+			 *   @see rte_eth_dev_release_port()
+			 */
+
+	struct rte_eth_link dev_link;   /**< Link-level information & status. */
+	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
+	uint16_t mtu;                   /**< Maximum Transmission Unit. */
+	uint32_t min_rx_buf_size;
+			/**< Common RX buffer size handled by all queues. */
+
+	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+	struct rte_ether_addr *mac_addrs;
+			/**< Device Ethernet link address.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+			/**< Bitmap associating MAC addresses to pools. */
+	struct rte_ether_addr *hash_mac_addrs;
+			/**< Device Ethernet MAC addresses of hash filtering.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint16_t port_id;           /**< Device [external] port identifier. */
+
+	__extension__
+	uint8_t promiscuous   : 1,
+		/**< RX promiscuous mode ON(1) / OFF(0). */
+		scattered_rx : 1,
+		/**< RX of scattered packets is ON(1) / OFF(0) */
+		all_multicast : 1,
+		/**< RX all multicast mode ON(1) / OFF(0). */
+		dev_started : 1,
+		/**< Device state: STARTED(1) / STOPPED(0). */
+		lro         : 1,
+		/**< RX LRO is ON(1) / OFF(0) */
+		dev_configured : 1;
+		/**< Indicates whether the device is configured.
+		 *   CONFIGURED(1) / NOT CONFIGURED(0).
+		 */
+	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint32_t dev_flags;             /**< Capabilities. */
+	int numa_node;                  /**< NUMA node connection. */
+	struct rte_vlan_filter_conf vlan_filter_conf;
+			/**< VLAN filter configuration. */
+	struct rte_eth_dev_owner owner; /**< The port owner. */
+	uint16_t representor_id;
+			/**< Switch-specific identifier.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
+
+	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
+
 /**< @internal Declaration of the hairpin peer queue information structure. */
 struct rte_hairpin_peer_info;
 
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 5904bb7bae..f3ed554aa7 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3572,6 +3572,31 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	return ret;
 }
 
+int
+rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[], uint32_t num)
+{
+	int32_t ret;
+	struct rte_eth_dev *dev;
+	struct rte_eth_dev_info dev_info;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
+	if (ma == NULL) {
+		RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__);
+		return -EINVAL;
+	}
+
+	num = RTE_MIN(dev_info.max_mac_addrs, num);
+	memcpy(ma, dev->data->mac_addrs, num * sizeof(ma[0]));
+
+	return num;
+}
+
 int
 rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
 {
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 58ee983b03..47e830e5bd 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3005,6 +3005,25 @@ int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
  */
 int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
 
+/**
+ * Retrieve the Ethernet addresses of an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param ma
+ *   A pointer to an array of structures of type *ether_addr* to be filled with
+ *   the Ethernet addresses of the Ethernet device.
+ * @param num
+ *   Number of elements in the *ma* array.
+ * @return
+ *   - number of retrieved MAC addresses if successful
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-EINVAL) if bad parameter.
+ */
+__rte_experimental
+int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[],
+	uint32_t num);
+
 /**
  * Retrieve the contextual information of an Ethernet device.
  *
@@ -5078,31 +5097,6 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	return (int)(*p->rx_queue_count)(qd);
 }
 
-/**
- * Check if the DD bit of the specific RX descriptor in the queue has been set
- *
- * @param port_id
- *  The port identifier of the Ethernet device.
- * @param queue_id
- *  The queue id on the specific port.
- * @param offset
- *  The offset of the descriptor ID from tail.
- * @return
- *  - (1) if the specific DD bit is set.
- *  - (0) if the specific DD bit is not set.
- *  - (-ENODEV) if *port_id* invalid.
- *  - (-ENOTSUP) if the device does not support this function
- */
-__rte_deprecated
-static inline int
-rte_eth_rx_descriptor_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_done, -ENOTSUP);
-	return (*dev->rx_descriptor_done)(dev->data->rx_queues[queue_id], offset);
-}
-
 #define RTE_ETH_RX_DESC_AVAIL    0 /**< Desc available for hw. */
 #define RTE_ETH_RX_DESC_DONE     1 /**< Desc done, filled by hw. */
 #define RTE_ETH_RX_DESC_UNAVAIL  2 /**< Desc used by driver or hw. */
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index da6de5de43..20fe789550 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -98,148 +98,4 @@ struct rte_eth_burst_api {
 
 extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
 
-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
-	struct rte_eth_rxtx_callback *next;
-	union{
-		rte_rx_callback_fn rx;
-		rte_tx_callback_fn tx;
-	} fn;
-	void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
-	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
-	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
-	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
-	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_done_t   rx_descriptor_done;   /**< Check rxd DD bit. */
-	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
-	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
-	/**
-	 * Next two fields are per-device data but *data is shared between
-	 * primary and secondary processes and *process_private is per-process
-	 * private. The second one is managed by PMDs if necessary.
-	 */
-	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
-	void *process_private; /**< Pointer to per-process device data. */
-	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
-	struct rte_device *device; /**< Backing device */
-	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
-	/** User application callbacks for NIC interrupts */
-	struct rte_eth_dev_cb_list link_intr_cbs;
-	/**
-	 * User-supplied functions called from rx_burst to post-process
-	 * received packets before passing them to the user
-	 */
-	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	/**
-	 * User-supplied functions called from tx_burst to pre-process
-	 * received packets before passing them to the driver for transmission.
-	 */
-	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	enum rte_eth_dev_state state; /**< Flag indicating the port state */
-	void *security_ctx; /**< Context for security ops */
-
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
-	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
-	void **rx_queues; /**< Array of pointers to RX queues. */
-	void **tx_queues; /**< Array of pointers to TX queues. */
-	uint16_t nb_rx_queues; /**< Number of RX queues. */
-	uint16_t nb_tx_queues; /**< Number of TX queues. */
-
-	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
-
-	void *dev_private;
-			/**< PMD-specific private data.
-			 *   @see rte_eth_dev_release_port()
-			 */
-
-	struct rte_eth_link dev_link;   /**< Link-level information & status. */
-	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
-	uint16_t mtu;                   /**< Maximum Transmission Unit. */
-	uint32_t min_rx_buf_size;
-			/**< Common RX buffer size handled by all queues. */
-
-	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
-	struct rte_ether_addr *mac_addrs;
-			/**< Device Ethernet link address.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
-			/**< Bitmap associating MAC addresses to pools. */
-	struct rte_ether_addr *hash_mac_addrs;
-			/**< Device Ethernet MAC addresses of hash filtering.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint16_t port_id;           /**< Device [external] port identifier. */
-
-	__extension__
-	uint8_t promiscuous   : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
-		scattered_rx : 1,  /**< RX of scattered packets is ON(1) / OFF(0) */
-		all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
-		dev_started : 1,   /**< Device state: STARTED(1) / STOPPED(0). */
-		lro         : 1,   /**< RX LRO is ON(1) / OFF(0) */
-		dev_configured : 1;
-		/**< Indicates whether the device is configured.
-		 *   CONFIGURED(1) / NOT CONFIGURED(0).
-		 */
-	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint32_t dev_flags;             /**< Capabilities. */
-	int numa_node;                  /**< NUMA node connection. */
-	struct rte_vlan_filter_conf vlan_filter_conf;
-			/**< VLAN filter configuration. */
-	struct rte_eth_dev_owner owner; /**< The port owner. */
-	uint16_t representor_id;
-			/**< Switch-specific identifier.
-			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
-			 */
-
-	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
 #endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 65444f9a99..d82f2bd1ce 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -251,6 +251,7 @@ EXPERIMENTAL {
 	# added in 21.05
 	__rte_eth_rx_epilog;
 	__rte_eth_tx_prolog;
+	rte_eth_macaddrs_get;
 };
 
 INTERNAL {
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
 #include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
  */
 #include <rte_spinlock.h>
 #include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "eventdev_pmd.h"
 #include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
 #include <rte_telemetry.h>
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  2021-09-15  9:45     ` Jerin Jacob
@ 2021-09-22 15:08       ` Ananyev, Konstantin
  2021-09-27 16:14         ` Jerin Jacob
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-09-22 15:08 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Yang,
	Qiming, Zhang, Qi Z, Xing, Beilei, techboard


> > Hi Jerin,
> >
> > > > NOTE: This is just an RFC to start further discussion and collect the feedback.
> > > > Due to significant amount of work, changes required are applied only to two
> > > > PMDs so far: net/i40e and net/ice.
> > > > So to build it you'll need to add:
> > > > -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> > > > to your config options.
> > >
> > > >
> > > > That approach was selected to avoid(/minimize) possible performance losses.
> > > >
> > > > So far I done only limited amount functional and performance testing.
> > > > Didn't spot any functional problems, and performance numbers
> > > > remains the same before and after the patch on my box (testpmd, macswap fwd).
> > >
> > >
> > > Based on testing on octeontx2, we see some regression in testpmd and
> > > a bit in l3fwd too.
> > >
> > > Without patch: 73.5mpps/core in testpmd iofwd
> > > With patch: 72.5mpps/core in testpmd iofwd
> > >
> > > Based on my understanding it is due to additional indirection.
> >
> > From your patch below, it looks like it is not actually additional
> > indirection, but an extra memory dereference - func and dev pointers
> > are now stored in different places.
> 
> Yup. I meant the same. We are on the same page.
> 
> > Plus the fact that now we dereference rte_eth_devices[]
> > data inside the PMD function, which probably prevents the compiler and
> > CPU from loading rte_eth_devices[port_id].data and
> > rte_eth_devices[port_id].pre_tx_burst_cbs[queue_id]
> > in advance, before calling the actual RX/TX function.
> 
> Yes.
> 
> > About your approach: I don't mind adding an extra opaque 'void *data' pointer,
> > but would prefer not to expose the callback invocation code in an inline function.
> > The main reason for that: I think it still needs to be reworked to allow adding/removing
> > callbacks without stopping the device - something similar to what was done for cryptodev
> > callbacks. To be able to do that in the future without another ABI breakage, the
> > callback-related part needs to be kept internal.
> > Though what we probably can do: add two dynamic arrays of opaque pointers to rte_eth_burst_api.
> > One for rx/tx queue data pointers, a second for rx/tx callback pointers.
> > To be more specific, something like:
> >
> > typedef uint16_t (*rte_eth_rx_burst_t)( void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, void *cbs);
> > typedef uint16_t (*rte_eth_tx_burst_t)(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *cbs);
> > ....
> >
> > struct rte_eth_burst_api {
> >         rte_eth_rx_burst_t rx_pkt_burst;
> >         /**< PMD receive function. */
> >         rte_eth_tx_burst_t tx_pkt_burst;
> >         /**< PMD transmit function. */
> >         rte_eth_tx_prep_t tx_pkt_prepare;
> >         /**< PMD transmit prepare function. */
> >         rte_eth_rx_queue_count_t rx_queue_count;
> >         /**< Get the number of used RX descriptors. */
> >         rte_eth_rx_descriptor_status_t rx_descriptor_status;
> >         /**< Check the status of a Rx descriptor. */
> >         rte_eth_tx_descriptor_status_t tx_descriptor_status;
> >         /**< Check the status of a Tx descriptor. */
> >         struct {
> >                  void **queue_data;   /* point to rte_eth_devices[port_id].data-> rx_queues */
> >                  void **cbs;                  /*  points to rte_eth_devices[port_id].post_rx_burst_cbs */
> >        } rx_data, tx_data;
> > } __rte_cache_aligned;
> >
> > static inline uint16_t
> > rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> >                  struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> > {
> >        struct rte_eth_burst_api *p;
> >
> >         if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT)
> >                 return 0;
> >
> >       p =  &rte_eth_burst_api[port_id];
> >       return p->rx_pkt_burst(p->rx_data.queue_data[queue_id], rx_pkts, nb_pkts, p->rx_data.cbs[queue_id]);
> 
> 
> 
> That works.
> 
> 
> > }
> >
> > Same for TX.
> >
> > If that looks ok to everyone, I'll try to prepare next version based on that.
> 
> 
> Looks good to me.
> 
> > In theory that should avoid extra dereference problem and even reduce indirection.
> > As a drawback data->rxq/txq should always be allocated for RTE_MAX_QUEUES_PER_PORT entries,
> > but I presume that’s not a big deal.
> >
> > As a side question - is there any reason why rte_ethdev_trace_rx_burst() is invoked at the very last point,
> > while rte_ethdev_trace_tx_burst() is invoked after CBs but before the actual tx_pkt_burst()?
> > It would make things simpler if tracing were always done either on entrance to or exit from rx/tx_burst.
> 
> exit is fine.
> 
> >
> > >
> > > My suggestion to fix the problem by:
> > > Removing the additional `data` redirection and pull callback function
> > > pointers back
> > > and keep rest as opaque as done in the existing patch like [1]
> > >
> > > I don't believe this has any real implication on future ABI stability
> > > as we will not be adding
> > > any new item in rte_eth_fp in any way as new features can be added in slowpath
> > > rte_eth_dev as mentioned in the patch.
> 
> Ack
> 
> I will happy to test again after the rework and report any performance
> issues if any.

Thanks Jerin, v2 is out:
https://patches.dpdk.org/project/dpdk/list/?series=19084
Please have a look when you get a chance.
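
(For completeness, the TX-side counterpart of the inline sketched in the quoted proposal
would look roughly like the code below - illustrative only, reusing the struct and field
names from that sketch:)

static inline uint16_t
rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
                 struct rte_mbuf **tx_pkts, const uint16_t nb_pkts)
{
        struct rte_eth_burst_api *p;

        if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT)
                return 0;

        p = &rte_eth_burst_api[port_id];
        /* pass queue data and per-queue callback data straight to the PMD, as for RX */
        return p->tx_pkt_burst(p->tx_data.queue_data[queue_id], tx_pkts, nb_pkts,
                        p->tx_data.cbs[queue_id]);
}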


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-09-23  5:51     ` Wang, Haiyue
  0 siblings, 0 replies; 112+ messages in thread
From: Wang, Haiyue @ 2021-09-23  5:51 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
	yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	Xia, Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Wednesday, September 22, 2021 22:10
> To: dev@dpdk.org
> Cc: Li, Xiaoyun <xiaoyun.li@intel.com>; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com; shepard.siegel@atomicrules.com;
> ed.czeck@atomicrules.com; john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; rahul.lakkireddy@chelsio.com;
> hemant.agrawal@nxp.com; sachin.saxena@oss.nxp.com; Wang, Haiyue <haiyue.wang@intel.com>; Daley, John
> <johndale@cisco.com>; hyonkim@cisco.com; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang, Xiao W
> <xiao.w.wang@intel.com>; humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com; Xing, Beilei
> <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>;
> matan@nvidia.com; viacheslavo@nvidia.com; sthemmin@microsoft.com; longli@microsoft.com;
> heinrich.kuhn@corigine.com; kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com; jianwang@trustnetic.com; maxime.coquelin@redhat.com; Xia,
> Chenbo <chenbo.xia@intel.com>; thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>;
> mdr@ashroe.eu; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Subject: [RFC v2 2/5] ethdev: change input parameters for rx_queue_count
> 
> Currently the majority of 'fast' ethdev ops take pointers to internal
> queue data structures as an input parameter.
> While eth_rx_queue_count() takes a pointer to rte_eth_dev and queue
> index.
> For future work to hide rte_eth_devices[] and friends it would be
> plausible to unify the parameter list of all 'fast' ethdev ops.
> This patch changes eth_rx_queue_count() to accept a pointer to internal
> queue data as its input parameter.
> This is an API and ABI breakage.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  drivers/net/ark/ark_ethdev_rx.c         |  4 ++--
>  drivers/net/ark/ark_ethdev_rx.h         |  3 +--
>  drivers/net/atlantic/atl_ethdev.h       |  2 +-
>  drivers/net/atlantic/atl_rxtx.c         |  9 ++-------
>  drivers/net/bnxt/bnxt_ethdev.c          |  8 +++++---
>  drivers/net/dpaa/dpaa_ethdev.c          |  9 ++++-----
>  drivers/net/dpaa2/dpaa2_ethdev.c        |  9 ++++-----
>  drivers/net/e1000/e1000_ethdev.h        |  6 ++----
>  drivers/net/e1000/em_rxtx.c             |  4 ++--
>  drivers/net/e1000/igb_rxtx.c            |  4 ++--
>  drivers/net/enic/enic_ethdev.c          | 12 ++++++------
>  drivers/net/fm10k/fm10k.h               |  2 +-
>  drivers/net/fm10k/fm10k_rxtx.c          |  4 ++--
>  drivers/net/hns3/hns3_rxtx.c            |  7 +++++--
>  drivers/net/hns3/hns3_rxtx.h            |  2 +-
>  drivers/net/i40e/i40e_rxtx.c            |  4 ++--
>  drivers/net/i40e/i40e_rxtx.h            |  3 +--
>  drivers/net/iavf/iavf_rxtx.c            |  4 ++--
>  drivers/net/iavf/iavf_rxtx.h            |  2 +-
>  drivers/net/ice/ice_rxtx.c              |  4 ++--
>  drivers/net/ice/ice_rxtx.h              |  2 +-
>  drivers/net/igc/igc_txrx.c              |  5 ++---
>  drivers/net/igc/igc_txrx.h              |  3 +--
>  drivers/net/ixgbe/ixgbe_ethdev.h        |  3 +--
>  drivers/net/ixgbe/ixgbe_rxtx.c          |  4 ++--
>  drivers/net/mlx5/mlx5_rx.c              | 26 ++++++++++++-------------
>  drivers/net/mlx5/mlx5_rx.h              |  2 +-
>  drivers/net/netvsc/hn_rxtx.c            |  4 ++--
>  drivers/net/netvsc/hn_var.h             |  2 +-
>  drivers/net/nfp/nfp_rxtx.c              |  4 ++--
>  drivers/net/nfp/nfp_rxtx.h              |  3 +--
>  drivers/net/octeontx2/otx2_ethdev.h     |  2 +-
>  drivers/net/octeontx2/otx2_ethdev_ops.c |  8 ++++----
>  drivers/net/sfc/sfc_ethdev.c            | 12 ++++++------
>  drivers/net/thunderx/nicvf_ethdev.c     |  3 +--
>  drivers/net/thunderx/nicvf_rxtx.c       |  4 ++--
>  drivers/net/thunderx/nicvf_rxtx.h       |  2 +-
>  drivers/net/txgbe/txgbe_ethdev.h        |  3 +--
>  drivers/net/txgbe/txgbe_rxtx.c          |  4 ++--
>  drivers/net/vhost/rte_eth_vhost.c       |  4 ++--
>  lib/ethdev/rte_ethdev.h                 |  2 +-
>  lib/ethdev/rte_ethdev_core.h            |  3 +--
>  42 files changed, 97 insertions(+), 110 deletions(-)
> 
> diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
> index d255f0177b..98658ce621 100644
> --- a/drivers/net/ark/ark_ethdev_rx.c
> +++ b/drivers/net/ark/ark_ethdev_rx.c
> @@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
>  }
> 
>  uint32_t
> -eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
> +eth_ark_dev_rx_queue_count(void *rx_queue)
>  {
>  	struct ark_rx_queue *queue;

Just change it to be one line: "struct ark_rx_queue *queue = rx_queue;"?
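
i.e. the whole function would then be just (a sketch of the suggested variant, applied to
the quoted hunk):

uint32_t
eth_ark_dev_rx_queue_count(void *rx_queue)
{
	struct ark_rx_queue *queue = rx_queue;

	return (queue->prod_index - queue->cons_index);	/* mod arith */
}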

> 
> -	queue = dev->data->rx_queues[queue_id];
> +	queue = rx_queue;
>  	return (queue->prod_index - queue->cons_index);	/* mod arith */
>  }


> --
> 2.26.3


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure Konstantin Ananyev
@ 2021-09-23  5:58     ` Wang, Haiyue
  2021-09-27 18:01       ` Jerin Jacob
  0 siblings, 1 reply; 112+ messages in thread
From: Wang, Haiyue @ 2021-09-23  5:58 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
	yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	Xia, Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Wednesday, September 22, 2021 22:10
> To: dev@dpdk.org
> Cc: Li, Xiaoyun <xiaoyun.li@intel.com>; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com; shepard.siegel@atomicrules.com;
> ed.czeck@atomicrules.com; john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; rahul.lakkireddy@chelsio.com;
> hemant.agrawal@nxp.com; sachin.saxena@oss.nxp.com; Wang, Haiyue <haiyue.wang@intel.com>; Daley, John
> <johndale@cisco.com>; hyonkim@cisco.com; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang, Xiao W
> <xiao.w.wang@intel.com>; humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com; Xing, Beilei
> <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>;
> matan@nvidia.com; viacheslavo@nvidia.com; sthemmin@microsoft.com; longli@microsoft.com;
> heinrich.kuhn@corigine.com; kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com; jianwang@trustnetic.com; maxime.coquelin@redhat.com; Xia,
> Chenbo <chenbo.xia@intel.com>; thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>;
> mdr@ashroe.eu; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Subject: [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
> 
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a separate flat
> array. We can keep it public to still use inline functions for 'fast' calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> The intention is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/ethdev/ethdev_private.c  | 53 ++++++++++++++++++++++++++++++++++++
>  lib/ethdev/ethdev_private.h  |  7 +++++
>  lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
>  lib/ethdev/rte_ethdev_core.h | 45 ++++++++++++++++++++++++++++++
>  4 files changed, 122 insertions(+)
> 
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 012cf73ca2..a1683da77b 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -174,3 +174,56 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>  	return str == NULL ? -1 : 0;
>  }
> +
> +static uint16_t
> +dummy_eth_rx_burst(__rte_unused void *rxq,
> +		__rte_unused struct rte_mbuf **rx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +static uint16_t
> +dummy_eth_tx_burst(__rte_unused void *txq,
> +		__rte_unused struct rte_mbuf **tx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +void
> +eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
> +{
> +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> +	static const struct rte_eth_burst_api dummy_api = {
> +		.rx_pkt_burst = dummy_eth_rx_burst,
> +		.tx_pkt_burst = dummy_eth_tx_burst,
> +		.rxq = {.data = dummy_data, .clbk = dummy_data,},
> +		.txq = {.data = dummy_data, .clbk = dummy_data,},
> +	};
> +
> +	*rba = dummy_api;
> +}
> +
> +void
> +eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> +		const struct rte_eth_dev *dev)
> +{
> +	rba->rx_pkt_burst = dev->rx_pkt_burst;
> +	rba->tx_pkt_burst = dev->tx_pkt_burst;
> +	rba->tx_pkt_prepare = dev->tx_pkt_prepare;
> +	rba->rx_queue_count = dev->rx_queue_count;
> +	rba->rx_descriptor_status = dev->rx_descriptor_status;
> +	rba->tx_descriptor_status = dev->tx_descriptor_status;
> +
> +	rba->rxq.data = dev->data->rx_queues;
> +	rba->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> +
> +	rba->txq.data = dev->data->tx_queues;
> +	rba->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> +}
> +
> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> index 9bb0879538..54921f4860 100644
> --- a/lib/ethdev/ethdev_private.h
> +++ b/lib/ethdev/ethdev_private.h
> @@ -30,6 +30,13 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>  /* Parse devargs value for representor parameter. */
>  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> 
> +/* reset eth 'burst' API to dummy values */
> +void eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
> +
> +/* setup eth 'burst' API to ethdev values */
> +void eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> +		const struct rte_eth_dev *dev);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 424bc260fa..5904bb7bae 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -44,6 +44,9 @@
>  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> 
> +/* public 'fast/burst' API */
> +struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> +
>  /* spinlock for eth device callbacks */
>  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> 
> @@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
>  		(*dev->dev_ops->link_update)(dev, 0);
>  	}
> 
> +	/* expose selection of PMD rx/tx function */
> +	eth_dev_burst_api_setup(rte_eth_burst_api + port_id, dev);
> +
>  	rte_ethdev_trace_start(port_id);
>  	return 0;
>  }
> @@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
>  		return 0;
>  	}
> 
> +	/* point rx/tx functions to dummy ones */
> +	eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
> +
>  	dev->data->dev_started = 0;
>  	ret = (*dev->dev_ops->dev_stop)(dev);
>  	rte_ethdev_trace_stop(port_id, ret);
> @@ -4568,6 +4577,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
>  	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
>  }
> 
> +RTE_INIT(eth_dev_init_burst_api)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i != RTE_DIM(rte_eth_burst_api); i++)
> +		eth_dev_burst_api_reset(rte_eth_burst_api + i);
> +}
> +
>  RTE_INIT(eth_dev_init_cb_lists)
>  {
>  	uint16_t i;
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 00f27c643a..da6de5de43 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>  /**< @internal Check the status of a Tx descriptor */
> 
> +/**
> + * @internal
> > + * Structure used to hold opaque pointers to internal ethdev RX/TX
> > + * queues data.
> > + * The main purpose of exposing these pointers at all is to allow the
> > + * compiler to fetch this data for 'fast' ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> +	void **data;
> +	/**< points to array of internal queue data pointers */
> +	void **clbk;
> +	/**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> > + * 'fast' ethdev functions and related data are held in a flat array,
> > + * one entry per ethdev.
> + */
> +struct rte_eth_burst_api {

'ops' is better ? Like "struct rte_eth_burst_ops". ;-)

> +
> +	/** first 64B line */
> +	eth_rx_burst_t rx_pkt_burst;
> +	/**< PMD receive function. */
> +	eth_tx_burst_t tx_pkt_burst;
> +	/**< PMD transmit function. */
> +	eth_tx_prep_t tx_pkt_prepare;
> +	/**< PMD transmit prepare function. */
> +	eth_rx_queue_count_t rx_queue_count;
> +	/**< Get the number of used RX descriptors. */
> +	eth_rx_descriptor_status_t rx_descriptor_status;
> +	/**< Check the status of a Rx descriptor. */
> +	eth_tx_descriptor_status_t tx_descriptor_status;
> +	/**< Check the status of a Tx descriptor. */
> +	uintptr_t reserved[2];
> +

How about 32 bit system ? Does it need something like :

__rte_cache_aligned for rxq to make sure 64B line?

> +	/** second 64B line */
> +	struct rte_ethdev_qdata rxq;
> +	struct rte_ethdev_qdata txq;
> +	uintptr_t reserved2[4];
> +
> +} __rte_cache_aligned;
> +
> +extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> +
> 
>  /**
>   * @internal
> --
> 2.26.3


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  2021-09-22 15:08       ` Ananyev, Konstantin
@ 2021-09-27 16:14         ` Jerin Jacob
  2021-09-28  9:37           ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Jerin Jacob @ 2021-09-27 16:14 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dpdk-dev, Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Yang,
	Qiming, Zhang, Qi Z, Xing, Beilei, techboard

On Wed, Sep 22, 2021 at 8:38 PM Ananyev, Konstantin
<konstantin.ananyev@intel.com> wrote:
>
>
> > > Hi Jerin,
> > >
> > > > > NOTE: This is just an RFC to start further discussion and collect the feedback.
> > > > > Due to significant amount of work, changes required are applied only to two
> > > > > PMDs so far: net/i40e and net/ice.
> > > > > So to build it you'll need to add:
> > > > > -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> > > > > to your config options.
> > > >
> > > > >
> > > > > That approach was selected to avoid(/minimize) possible performance losses.
> > > > >
> > > > > So far I have done only a limited amount of functional and performance testing.
> > > > > I didn't spot any functional problems, and the performance numbers
> > > > > remain the same before and after the patch on my box (testpmd, macswap fwd).
> > > >
> > > >
> > > > Based on testing on octeonxt2. We see some regression in testpmd and
> > > > bit on l3fwd too.
> > > >
> > > > Without patch: 73.5mpps/core in testpmd iofwd
> > > > With the patch: 72.5mpps/core in testpmd iofwd
> > > >
> > > > Based on my understanding it is due to additional indirection.
> > >
> > > From your patch below, it looks like not actually additional indirection,
> > > but extra memory dereference - func and dev pointers are now stored
> > > at different places.
> >
> > Yup. I meant the same. We are on the same page.
> >
> > > Plus the fact that now we dereference rte_eth_devices[]
> > > data inside PMD function. Which probably prevents compiler and CPU to load
> > >  rte_eth_devices[port_id].data and rte_eth_devices[port_id]. pre_tx_burst_cbs[queue_id]
> > > in advance before calling actual RX/TX function.
> >
> > Yes.
> >
> > > About your approach: I don't mind adding an extra opaque 'void *data' pointer,
> > > but would prefer not to expose the callback invocation code in an inline function.
> > > Main reason for that - I think it still needs to be reworked to allow adding/removing
> > > callbacks without stopping the device. Something similar to what was done for cryptodev
> > > callbacks. To be able to do that in the future without another ABI breakage, the callback-related part
> > > needs to be kept internal.
> > > Though what we probably can do: add two dynamic arrays of opaque pointers to  rte_eth_burst_api.
> > > One for rx/tx queue data pointers, second for rx/tx callback pointers.
> > > To be more specific, something like:
> > >
> > > typedef uint16_t (*rte_eth_rx_burst_t)( void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, void *cbs);
> > > typedef uint16_t (*rte_eth_tx_burst_t)(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *cbs);
> > > ....
> > >
> > > struct rte_eth_burst_api {
> > >         rte_eth_rx_burst_t rx_pkt_burst;
> > >         /**< PMD receive function. */
> > >         rte_eth_tx_burst_t tx_pkt_burst;
> > >         /**< PMD transmit function. */
> > >         rte_eth_tx_prep_t tx_pkt_prepare;
> > >         /**< PMD transmit prepare function. */
> > >         rte_eth_rx_queue_count_t rx_queue_count;
> > >         /**< Get the number of used RX descriptors. */
> > >         rte_eth_rx_descriptor_status_t rx_descriptor_status;
> > >         /**< Check the status of a Rx descriptor. */
> > >         rte_eth_tx_descriptor_status_t tx_descriptor_status;
> > >         /**< Check the status of a Tx descriptor. */
> > >         struct {
> > >                  void **queue_data;   /* point to rte_eth_devices[port_id].data-> rx_queues */
> > >                  void **cbs;                  /*  points to rte_eth_devices[port_id].post_rx_burst_cbs */
> > >        } rx_data, tx_data;
> > > } __rte_cache_aligned;
> > >
> > > static inline uint16_t
> > > rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> > >                  struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> > > {
> > >        struct rte_eth_burst_api *p;
> > >
> > >         if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT)
> > >                 return 0;
> > >
> > >       p =  &rte_eth_burst_api[port_id];
> > >       return p->rx_pkt_burst(p->rx_data.queue_data[queue_id], rx_pkts, nb_pkts, p->rx_data.cbs[queue_id]);
> >
> >
> >
> > That works.
> >
> >
> > > }
> > >
> > > Same for TX.
> > >
> > > If that looks ok to everyone, I'll try to prepare next version based on that.
> >
> >
> > Looks good to me.
> >
> > > In theory that should avoid extra dereference problem and even reduce indirection.
> > > As a drawback data->rxq/txq should always be allocated for RTE_MAX_QUEUES_PER_PORT entries,
> > > but I presume that’s not a big deal.
> > >
> > > As a side question - is there any reason why rte_ethdev_trace_rx_burst() is invoked at the very last point,
> > > while rte_ethdev_trace_tx_burst() is invoked after CBs but before the actual tx_pkt_burst()?
> > > It would make things simpler if tracing were always done either on entrance to or exit from rx/tx_burst.
> >
> > exit is fine.
> >
> > >
> > > >
> > > > My suggestion to fix the problem by:
> > > > Removing the additional `data` redirection and pull callback function
> > > > pointers back
> > > > and keep rest as opaque as done in the existing patch like [1]
> > > >
> > > > I don't believe this has any real implication on future ABI stability
> > > > as we will not be adding
> > > > any new item in rte_eth_fp in any way as new features can be added in slowpath
> > > > rte_eth_dev as mentioned in the patch.
> >
> > Ack
> >
> > I will happy to test again after the rework and report any performance
> > issues if any.
>
> Thanks Jerin, v2 is out:
> https://patches.dpdk.org/project/dpdk/list/?series=19084
> Please have a look when you get a chance.

Tested the series. Looks good, no performance issues.

>

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
  2021-09-23  5:58     ` Wang, Haiyue
@ 2021-09-27 18:01       ` Jerin Jacob
  2021-09-28  9:42         ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Jerin Jacob @ 2021-09-27 18:01 UTC (permalink / raw)
  To: Wang, Haiyue
  Cc: Ananyev, Konstantin, dev, Li, Xiaoyun, anoobj, jerinj,
	ndabilpuram, adwivedi, shepard.siegel, ed.czeck, john.miller,
	irusskikh, ajit.khaparde, somnath.kotur, rahul.lakkireddy,
	hemant.agrawal, sachin.saxena, Daley, John, hyonkim, Zhang, Qi Z,
	Wang, Xiao W, humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu,
	Jingjing, Yang, Qiming, matan, viacheslavo, sthemmin, longli,
	heinrich.kuhn, kirankumark, andrew.rybchenko, mczekaj, jiawenwu,
	jianwang, maxime.coquelin, Xia, Chenbo, thomas, Yigit, Ferruh,
	mdr, Jayatheerthan, Jay

On Thu, Sep 23, 2021 at 11:28 AM Wang, Haiyue <haiyue.wang@intel.com> wrote:
>
> > -----Original Message-----
> > From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Sent: Wednesday, September 22, 2021 22:10
> > To: dev@dpdk.org
> > Cc: Li, Xiaoyun <xiaoyun.li@intel.com>; anoobj@marvell.com; jerinj@marvell.com;
> > ndabilpuram@marvell.com; adwivedi@marvell.com; shepard.siegel@atomicrules.com;
> > ed.czeck@atomicrules.com; john.miller@atomicrules.com; irusskikh@marvell.com;
> > ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; rahul.lakkireddy@chelsio.com;
> > hemant.agrawal@nxp.com; sachin.saxena@oss.nxp.com; Wang, Haiyue <haiyue.wang@intel.com>; Daley, John
> > <johndale@cisco.com>; hyonkim@cisco.com; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang, Xiao W
> > <xiao.w.wang@intel.com>; humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com; Xing, Beilei
> > <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>;
> > matan@nvidia.com; viacheslavo@nvidia.com; sthemmin@microsoft.com; longli@microsoft.com;
> > heinrich.kuhn@corigine.com; kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> > mczekaj@marvell.com; jiawenwu@trustnetic.com; jianwang@trustnetic.com; maxime.coquelin@redhat.com; Xia,
> > Chenbo <chenbo.xia@intel.com>; thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>;
> > mdr@ashroe.eu; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>
> > Subject: [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
> >
> > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > pointers to internal data from rte_eth_dev structure into a separate flat
> > array. We can keep it public to still use inline functions for 'fast' calls
> > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> > The intention is to make rte_eth_dev and related structures internal.
> > That should allow future possible changes to core eth_dev structures
> > to be transparent to the user and help to avoid ABI/API breakages.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >  lib/ethdev/ethdev_private.c  | 53 ++++++++++++++++++++++++++++++++++++
> >  lib/ethdev/ethdev_private.h  |  7 +++++
> >  lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
> >  lib/ethdev/rte_ethdev_core.h | 45 ++++++++++++++++++++++++++++++
> >  4 files changed, 122 insertions(+)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 012cf73ca2..a1683da77b 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -174,3 +174,56 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> >               RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> >       return str == NULL ? -1 : 0;
> >  }
> > +
> > +static uint16_t
> > +dummy_eth_rx_burst(__rte_unused void *rxq,
> > +             __rte_unused struct rte_mbuf **rx_pkts,
> > +             __rte_unused uint16_t nb_pkts)
> > +{
> > +     RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> > +     rte_errno = ENOTSUP;
> > +     return 0;
> > +}
> > +
> > +static uint16_t
> > +dummy_eth_tx_burst(__rte_unused void *txq,
> > +             __rte_unused struct rte_mbuf **tx_pkts,
> > +             __rte_unused uint16_t nb_pkts)
> > +{
> > +     RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> > +     rte_errno = ENOTSUP;
> > +     return 0;
> > +}
> > +
> > +void
> > +eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
> > +{
> > +     static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> > +     static const struct rte_eth_burst_api dummy_api = {
> > +             .rx_pkt_burst = dummy_eth_rx_burst,
> > +             .tx_pkt_burst = dummy_eth_tx_burst,
> > +             .rxq = {.data = dummy_data, .clbk = dummy_data,},
> > +             .txq = {.data = dummy_data, .clbk = dummy_data,},
> > +     };
> > +
> > +     *rba = dummy_api;
> > +}
> > +
> > +void
> > +eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> > +             const struct rte_eth_dev *dev)
> > +{
> > +     rba->rx_pkt_burst = dev->rx_pkt_burst;
> > +     rba->tx_pkt_burst = dev->tx_pkt_burst;
> > +     rba->tx_pkt_prepare = dev->tx_pkt_prepare;
> > +     rba->rx_queue_count = dev->rx_queue_count;
> > +     rba->rx_descriptor_status = dev->rx_descriptor_status;
> > +     rba->tx_descriptor_status = dev->tx_descriptor_status;
> > +
> > +     rba->rxq.data = dev->data->rx_queues;
> > +     rba->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> > +
> > +     rba->txq.data = dev->data->tx_queues;
> > +     rba->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> > +}
> > +
> > diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> > index 9bb0879538..54921f4860 100644
> > --- a/lib/ethdev/ethdev_private.h
> > +++ b/lib/ethdev/ethdev_private.h
> > @@ -30,6 +30,13 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
> >  /* Parse devargs value for representor parameter. */
> >  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> >
> > +/* reset eth 'burst' API to dummy values */
> > +void eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
> > +
> > +/* setup eth 'burst' API to ethdev values */
> > +void eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> > +             const struct rte_eth_dev *dev);
> > +
> >  #ifdef __cplusplus
> >  }
> >  #endif
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index 424bc260fa..5904bb7bae 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -44,6 +44,9 @@
> >  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> >  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> >
> > +/* public 'fast/burst' API */
> > +struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> > +
> >  /* spinlock for eth device callbacks */
> >  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> >
> > @@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
> >               (*dev->dev_ops->link_update)(dev, 0);
> >       }
> >
> > +     /* expose selection of PMD rx/tx function */
> > +     eth_dev_burst_api_setup(rte_eth_burst_api + port_id, dev);
> > +
> >       rte_ethdev_trace_start(port_id);
> >       return 0;
> >  }
> > @@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
> >               return 0;
> >       }
> >
> > +     /* point rx/tx functions to dummy ones */
> > +     eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
> > +
> >       dev->data->dev_started = 0;
> >       ret = (*dev->dev_ops->dev_stop)(dev);
> >       rte_ethdev_trace_stop(port_id, ret);
> > @@ -4568,6 +4577,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
> >       return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
> >  }
> >
> > +RTE_INIT(eth_dev_init_burst_api)
> > +{
> > +     uint32_t i;
> > +
> > +     for (i = 0; i != RTE_DIM(rte_eth_burst_api); i++)
> > +             eth_dev_burst_api_reset(rte_eth_burst_api + i);
> > +}
> > +
> >  RTE_INIT(eth_dev_init_cb_lists)
> >  {
> >       uint16_t i;
> > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index 00f27c643a..da6de5de43 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> >  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> >  /**< @internal Check the status of a Tx descriptor */
> >
> > +/**
> > + * @internal
> > > + * Structure used to hold opaque pointers to internal ethdev RX/TX
> > > + * queues data.
> > > + * The main purpose of exposing these pointers at all is to allow the
> > > + * compiler to fetch this data for 'fast' ethdev inline functions in advance.
> > + */
> > +struct rte_ethdev_qdata {
> > +     void **data;
> > +     /**< points to array of internal queue data pointers */
> > +     void **clbk;
> > +     /**< points to array of queue callback data pointers */
> > +};
> > +
> > +/**
> > + * @internal
> > > + * 'fast' ethdev functions and related data are held in a flat array,
> > > + * one entry per ethdev.
> > + */
> > +struct rte_eth_burst_api {
>
> 'ops' is better ? Like "struct rte_eth_burst_ops". ;-)

Since not all fastpath APIs are burst in nature, IMO rte_eth_fp_ops
or something similar may be better.

>
> > +
> > +     /** first 64B line */
> > +     eth_rx_burst_t rx_pkt_burst;
> > +     /**< PMD receive function. */
> > +     eth_tx_burst_t tx_pkt_burst;
> > +     /**< PMD transmit function. */
> > +     eth_tx_prep_t tx_pkt_prepare;
> > +     /**< PMD transmit prepare function. */
> > +     eth_rx_queue_count_t rx_queue_count;
> > +     /**< Get the number of used RX descriptors. */
> > +     eth_rx_descriptor_status_t rx_descriptor_status;
> > +     /**< Check the status of a Rx descriptor. */
> > +     eth_tx_descriptor_status_t tx_descriptor_status;
> > +     /**< Check the status of a Tx descriptor. */
> > +     uintptr_t reserved[2];
> > +
>
> How about 32 bit system ? Does it need something like :
>
> __rte_cache_aligned for rxq to make sure 64B line?

__rte_cache_min_aligned for 128B CL systems.

>
> > +     /** second 64B line */
> > +     struct rte_ethdev_qdata rxq;
> > +     struct rte_ethdev_qdata txq;
> > +     uintptr_t reserved2[4];
> > +
> > +} __rte_cache_aligned;
> > +
> > +extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> > +
> >
> >  /**
> >   * @internal
> > --
> > 2.26.3
>

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  2021-09-27 16:14         ` Jerin Jacob
@ 2021-09-28  9:37           ` Ananyev, Konstantin
  0 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-09-28  9:37 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Yang,
	Qiming, Zhang, Qi Z, Xing, Beilei, techboard


> On Wed, Sep 22, 2021 at 8:38 PM Ananyev, Konstantin
> <konstantin.ananyev@intel.com> wrote:
> >
> >
> > > > Hi Jerin,
> > > >
> > > > > > NOTE: This is just an RFC to start further discussion and collect the feedback.
> > > > > > Due to significant amount of work, changes required are applied only to two
> > > > > > PMDs so far: net/i40e and net/ice.
> > > > > > So to build it you'll need to add:
> > > > > > -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> > > > > > to your config options.
> > > > >
> > > > > >
> > > > > > That approach was selected to avoid(/minimize) possible performance losses.
> > > > > >
> > > > > > So far I have done only a limited amount of functional and performance testing.
> > > > > > I didn't spot any functional problems, and the performance numbers
> > > > > > remain the same before and after the patch on my box (testpmd, macswap fwd).
> > > > >
> > > > >
> > > > > Based on testing on octeonxt2. We see some regression in testpmd and
> > > > > bit on l3fwd too.
> > > > >
> > > > > Without patch: 73.5mpps/core in testpmd iofwd
> > > > > With the patch: 72.5mpps/core in testpmd iofwd
> > > > >
> > > > > Based on my understanding it is due to additional indirection.
> > > >
> > > > From your patch below, it looks like not actually additional indirection,
> > > > but extra memory dereference - func and dev pointers are now stored
> > > > at different places.
> > >
> > > Yup. I meant the same. We are on the same page.
> > >
> > > > Plus the fact that now we dereference rte_eth_devices[]
> > > > data inside PMD function. Which probably prevents compiler and CPU to load
> > > >  rte_eth_devices[port_id].data and rte_eth_devices[port_id]. pre_tx_burst_cbs[queue_id]
> > > > in advance before calling actual RX/TX function.
> > >
> > > Yes.
> > >
> > > > About your approach: I don't mind adding an extra opaque 'void *data' pointer,
> > > > but would prefer not to expose the callback invocation code in an inline function.
> > > > Main reason for that - I think it still needs to be reworked to allow adding/removing
> > > > callbacks without stopping the device. Something similar to what was done for cryptodev
> > > > callbacks. To be able to do that in the future without another ABI breakage, the callback-related part
> > > > needs to be kept internal.
> > > > Though what we probably can do: add two dynamic arrays of opaque pointers to  rte_eth_burst_api.
> > > > One for rx/tx queue data pointers, second for rx/tx callback pointers.
> > > > To be more specific, something like:
> > > >
> > > > typedef uint16_t (*rte_eth_rx_burst_t)( void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, void *cbs);
> > > > typedef uint16_t (*rte_eth_tx_burst_t)(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *cbs);
> > > > ....
> > > >
> > > > struct rte_eth_burst_api {
> > > >         rte_eth_rx_burst_t rx_pkt_burst;
> > > >         /**< PMD receive function. */
> > > >         rte_eth_tx_burst_t tx_pkt_burst;
> > > >         /**< PMD transmit function. */
> > > >         rte_eth_tx_prep_t tx_pkt_prepare;
> > > >         /**< PMD transmit prepare function. */
> > > >         rte_eth_rx_queue_count_t rx_queue_count;
> > > >         /**< Get the number of used RX descriptors. */
> > > >         rte_eth_rx_descriptor_status_t rx_descriptor_status;
> > > >         /**< Check the status of a Rx descriptor. */
> > > >         rte_eth_tx_descriptor_status_t tx_descriptor_status;
> > > >         /**< Check the status of a Tx descriptor. */
> > > >         struct {
> > > >                  void **queue_data;   /* point to rte_eth_devices[port_id].data-> rx_queues */
> > > >                  void **cbs;                  /*  points to rte_eth_devices[port_id].post_rx_burst_cbs */
> > > >        } rx_data, tx_data;
> > > > } __rte_cache_aligned;
> > > >
> > > > static inline uint16_t
> > > > rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> > > >                  struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> > > > {
> > > >        struct rte_eth_burst_api *p;
> > > >
> > > >         if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT)
> > > >                 return 0;
> > > >
> > > >       p =  &rte_eth_burst_api[port_id];
> > > >       return p->rx_pkt_burst(p->rx_data.queue_data[queue_id], rx_pkts, nb_pkts, p->rx_data.cbs[queue_id]);
> > >
> > >
> > >
> > > That works.
> > >
> > >
> > > > }
> > > >
> > > > Same for TX.
> > > >
> > > > If that looks ok to everyone, I'll try to prepare next version based on that.
> > >
> > >
> > > Looks good to me.
> > >
> > > > In theory that should avoid extra dereference problem and even reduce indirection.
> > > > As a drawback data->rxq/txq should always be allocated for RTE_MAX_QUEUES_PER_PORT entries,
> > > > but I presume that’s not a big deal.
> > > >
> > > > As a side question - is there any reason why rte_ethdev_trace_rx_burst() is invoked at the very last point,
> > > > while rte_ethdev_trace_tx_burst() is invoked after CBs but before the actual tx_pkt_burst()?
> > > > It would make things simpler if tracing were always done either on entrance to or exit from rx/tx_burst.
> > >
> > > exit is fine.
> > >
> > > >
> > > > >
> > > > > My suggestion to fix the problem by:
> > > > > Removing the additional `data` redirection and pull callback function
> > > > > pointers back
> > > > > and keep rest as opaque as done in the existing patch like [1]
> > > > >
> > > > > I don't believe this has any real implication on future ABI stability
> > > > > as we will not be adding
> > > > > any new item in rte_eth_fp in any way as new features can be added in slowpath
> > > > > rte_eth_dev as mentioned in the patch.
> > >
> > > Ack
> > >
> > > I will happy to test again after the rework and report any performance
> > > issues if any.
> >
> > Thanks Jerin, v2 is out:
> > https://patches.dpdk.org/project/dpdk/list/?series=19084
> > Please have a look when you get a chance.
> 
> Tested the series. Looks good, no performance issues.

That's great news, thanks for testing it.
I plan to proceed with a proper v3 in the next few days.
 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
  2021-09-27 18:01       ` Jerin Jacob
@ 2021-09-28  9:42         ` Ananyev, Konstantin
  0 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-09-28  9:42 UTC (permalink / raw)
  To: Jerin Jacob, Wang, Haiyue
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
	yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	Xia, Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay


> > >
> > > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > > pointers to internal data from rte_eth_dev structure into a separate flat
> > > array. We can keep it public to still use inline functions for 'fast' calls
> > > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> > > The intention is to make rte_eth_dev and related structures internal.
> > > That should allow future possible changes to core eth_dev structures
> > > to be transparent to the user and help to avoid ABI/API breakages.
> > >
> > > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > ---
> > >  lib/ethdev/ethdev_private.c  | 53 ++++++++++++++++++++++++++++++++++++
> > >  lib/ethdev/ethdev_private.h  |  7 +++++
> > >  lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
> > >  lib/ethdev/rte_ethdev_core.h | 45 ++++++++++++++++++++++++++++++
> > >  4 files changed, 122 insertions(+)
> > >
> > > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > > index 012cf73ca2..a1683da77b 100644
> > > --- a/lib/ethdev/ethdev_private.c
> > > +++ b/lib/ethdev/ethdev_private.c
> > > @@ -174,3 +174,56 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> > >               RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> > >       return str == NULL ? -1 : 0;
> > >  }
> > > +
> > > +static uint16_t
> > > +dummy_eth_rx_burst(__rte_unused void *rxq,
> > > +             __rte_unused struct rte_mbuf **rx_pkts,
> > > +             __rte_unused uint16_t nb_pkts)
> > > +{
> > > +     RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> > > +     rte_errno = ENOTSUP;
> > > +     return 0;
> > > +}
> > > +
> > > +static uint16_t
> > > +dummy_eth_tx_burst(__rte_unused void *txq,
> > > +             __rte_unused struct rte_mbuf **tx_pkts,
> > > +             __rte_unused uint16_t nb_pkts)
> > > +{
> > > +     RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> > > +     rte_errno = ENOTSUP;
> > > +     return 0;
> > > +}
> > > +
> > > +void
> > > +eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
> > > +{
> > > +     static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> > > +     static const struct rte_eth_burst_api dummy_api = {
> > > +             .rx_pkt_burst = dummy_eth_rx_burst,
> > > +             .tx_pkt_burst = dummy_eth_tx_burst,
> > > +             .rxq = {.data = dummy_data, .clbk = dummy_data,},
> > > +             .txq = {.data = dummy_data, .clbk = dummy_data,},
> > > +     };
> > > +
> > > +     *rba = dummy_api;
> > > +}
> > > +
> > > +void
> > > +eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> > > +             const struct rte_eth_dev *dev)
> > > +{
> > > +     rba->rx_pkt_burst = dev->rx_pkt_burst;
> > > +     rba->tx_pkt_burst = dev->tx_pkt_burst;
> > > +     rba->tx_pkt_prepare = dev->tx_pkt_prepare;
> > > +     rba->rx_queue_count = dev->rx_queue_count;
> > > +     rba->rx_descriptor_status = dev->rx_descriptor_status;
> > > +     rba->tx_descriptor_status = dev->tx_descriptor_status;
> > > +
> > > +     rba->rxq.data = dev->data->rx_queues;
> > > +     rba->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> > > +
> > > +     rba->txq.data = dev->data->tx_queues;
> > > +     rba->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> > > +}
> > > +
> > > diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> > > index 9bb0879538..54921f4860 100644
> > > --- a/lib/ethdev/ethdev_private.h
> > > +++ b/lib/ethdev/ethdev_private.h
> > > @@ -30,6 +30,13 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
> > >  /* Parse devargs value for representor parameter. */
> > >  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> > >
> > > +/* reset eth 'burst' API to dummy values */
> > > +void eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
> > > +
> > > +/* setup eth 'burst' API to ethdev values */
> > > +void eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> > > +             const struct rte_eth_dev *dev);
> > > +
> > >  #ifdef __cplusplus
> > >  }
> > >  #endif
> > > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > > index 424bc260fa..5904bb7bae 100644
> > > --- a/lib/ethdev/rte_ethdev.c
> > > +++ b/lib/ethdev/rte_ethdev.c
> > > @@ -44,6 +44,9 @@
> > >  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> > >  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> > >
> > > +/* public 'fast/burst' API */
> > > +struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> > > +
> > >  /* spinlock for eth device callbacks */
> > >  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> > >
> > > @@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
> > >               (*dev->dev_ops->link_update)(dev, 0);
> > >       }
> > >
> > > +     /* expose selection of PMD rx/tx function */
> > > +     eth_dev_burst_api_setup(rte_eth_burst_api + port_id, dev);
> > > +
> > >       rte_ethdev_trace_start(port_id);
> > >       return 0;
> > >  }
> > > @@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
> > >               return 0;
> > >       }
> > >
> > > +     /* point rx/tx functions to dummy ones */
> > > +     eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
> > > +
> > >       dev->data->dev_started = 0;
> > >       ret = (*dev->dev_ops->dev_stop)(dev);
> > >       rte_ethdev_trace_stop(port_id, ret);
> > > @@ -4568,6 +4577,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
> > >       return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
> > >  }
> > >
> > > +RTE_INIT(eth_dev_init_burst_api)
> > > +{
> > > +     uint32_t i;
> > > +
> > > +     for (i = 0; i != RTE_DIM(rte_eth_burst_api); i++)
> > > +             eth_dev_burst_api_reset(rte_eth_burst_api + i);
> > > +}
> > > +
> > >  RTE_INIT(eth_dev_init_cb_lists)
> > >  {
> > >       uint16_t i;
> > > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > > index 00f27c643a..da6de5de43 100644
> > > --- a/lib/ethdev/rte_ethdev_core.h
> > > +++ b/lib/ethdev/rte_ethdev_core.h
> > > @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> > >  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> > >  /**< @internal Check the status of a Tx descriptor */
> > >
> > > +/**
> > > + * @internal
> > > + * Structure used to hold opaque pointers to internal ethdev RX/TX
> > > + * queues data.
> > > + * The main purpose of exposing these pointers at all is to allow the
> > > + * compiler to fetch this data for 'fast' ethdev inline functions in advance.
> > > + */
> > > +struct rte_ethdev_qdata {
> > > +     void **data;
> > > +     /**< points to array of internal queue data pointers */
> > > +     void **clbk;
> > > +     /**< points to array of queue callback data pointers */
> > > +};
> > > +
> > > +/**
> > > + * @internal
> > > + * 'fast' ethdev functions and related data are held in a flat array,
> > > + * one entry per ethdev.
> > > + */
> > > +struct rte_eth_burst_api {
> >
> > 'ops' is better ? Like "struct rte_eth_burst_ops". ;-)
> 
> Since not all fastpath APIs are burst in nature, IMO rte_eth_fp_ops
> or something similar may be better.

I don't have any strong opinion about the name.
Whatever the majority decides is OK by me here.

> 
> >
> > > +
> > > +     /** first 64B line */
> > > +     eth_rx_burst_t rx_pkt_burst;
> > > +     /**< PMD receive function. */
> > > +     eth_tx_burst_t tx_pkt_burst;
> > > +     /**< PMD transmit function. */
> > > +     eth_tx_prep_t tx_pkt_prepare;
> > > +     /**< PMD transmit prepare function. */
> > > +     eth_rx_queue_count_t rx_queue_count;
> > > +     /**< Get the number of used RX descriptors. */
> > > +     eth_rx_descriptor_status_t rx_descriptor_status;
> > > +     /**< Check the status of a Rx descriptor. */
> > > +     eth_tx_descriptor_status_t tx_descriptor_status;
> > > +     /**< Check the status of a Tx descriptor. */
> > > +     uintptr_t reserved[2];
> > > +
> >
> > How about 32 bit system ? Does it need something like :

I don't think we need to do anything special for the 32-bit version here.
We have 16 pointers here in total right now;
on 32-bit systems they would all fit into 64B anyway.
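
For illustration, that layout expectation could be pinned down at build time with
something like the sketch below (illustrative only - the helper name is made up here,
the struct name follows the v2 patch, and RTE_BUILD_BUG_ON / RTE_CACHE_LINE_MIN_SIZE /
RTE_ARCH_64 come from rte_common.h and the build config):

#include <stddef.h>		/* offsetof */
#include <rte_common.h>		/* RTE_BUILD_BUG_ON, RTE_CACHE_LINE_MIN_SIZE */

static inline void
eth_burst_api_layout_check(void)
{
	/* all 16 pointers must fit into two 64B lines (one line on 32-bit) */
	RTE_BUILD_BUG_ON(sizeof(struct rte_eth_burst_api) >
			2 * RTE_CACHE_LINE_MIN_SIZE);
#ifdef RTE_ARCH_64
	/* on 64-bit the queue data/callback pointers start on the second 64B line */
	RTE_BUILD_BUG_ON(offsetof(struct rte_eth_burst_api, rxq) !=
			RTE_CACHE_LINE_MIN_SIZE);
#endif
}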

> >
> > __rte_cache_aligned for rxq to make sure 64B line?
> 
> __rte_cache_min_aligned for 128B CL systems.


> 
> >
> > > +     /** second 64B line */
> > > +     struct rte_ethdev_qdata rxq;
> > > +     struct rte_ethdev_qdata txq;
> > > +     uintptr_t reserved2[4];
> > > +
> > > +} __rte_cache_aligned;
> > > +
> > > +extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> > > +
> > >
> > >  /**
> > >   * @internal
> > > --
> > > 2.26.3
> >

^ permalink raw reply	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures
  2021-09-22 14:09 ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
                     ` (4 preceding siblings ...)
  2021-09-22 14:09   ` [dpdk-dev] [RFC v2 5/5] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-01 14:02   ` Konstantin Ananyev
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
                       ` (8 more replies)
  5 siblings, 9 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

v3 changes:
 - Changes in public struct naming (Jerin/Haiyue)
 - Split patches
 - Update docs
 - Shamelessly included Andrew's patch:
   https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
   into these series.
   I have to do a similar thing here, so I decided to avoid duplicated effort.

The aim of these patch series is to make rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow future possible changes to core ethdev related structures
to be transparent to the user and help to improve ABI/API stability.
Note that current ethdev API is preserved, but it is a formal ABI break.

The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
   related data pointer from rte_eth_dev into a separate flat array.
   We keep it public to still be able to use inline functions for these
   'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
   Note that apart from the function pointers themselves, each element of
   this flat array also contains two opaque pointers for each ethdev:
   1) a pointer to an array of internal queue data pointers
   2) a pointer to an array of queue callback data pointers.
   Note that exposing this extra information allows us to avoid extra
   changes at the PMD level, and should help to avoid possible
   performance degradation.
2. Change the implementation of the 'fast' inline ethdev functions
   (rte_eth_rx_burst(), etc.) to use the new public flat array (see the
   sketch after this list).
   While it is an ABI breakage, this change is intended to be transparent
   for both users (no changes in user apps are required) and PMD developers
   (no changes in PMDs are required).
   One extra note - with the new implementation, RX/TX callback invocation
   costs one extra function call. That might cause some slowdown for
   code-paths where RX/TX callbacks are heavily involved.
   Hope such a trade-off is acceptable for the community.
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
   things into internal header: <ethdev_driver.h>.
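
To illustrate point 2, below is a simplified sketch of the reworked Rx inline path
(it follows the flat-array layout described in point 1; exact struct/field and helper
names in the actual patches may differ, and port/queue validation, tracing and debug
checks are omitted):

static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	uint16_t nb_rx;
	void *cb;
	struct rte_eth_fp_ops *p = &rte_eth_fp_ops[port_id];
	void *qd = p->rxq.data[queue_id];

	/* direct call into the PMD - no rte_eth_dev dereference needed */
	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);

	/* RX callbacks, if any, are invoked via an out-of-line helper;
	 * this is the "one extra function call" mentioned in point 2.
	 */
	cb = p->rxq.clbk[queue_id];
	if (unlikely(cb != NULL))
		nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id,
				rx_pkts, nb_rx, nb_pkts, cb);

	return nb_rx;
}

Keeping the callback data opaque and the invocation out of line is what should later
allow adding/removing callbacks without stopping the device, without yet another
ABI break.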

That approach was selected to:
  - Avoid(/minimize) possible performance losses.
  - Minimize required changes inside PMDs.
 
Performance testing results (ICX 2.0GHz, E810 (ice)):
 - testpmd macswap fwd mode, plus
   a) no RX/TX callbacks:
      no actual slowdown observed
   b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
      ~2% slowdown
 - l3fwd: no actual slowdown observed

I would like to thank Ferruh and Jerin for reviewing and testing previous
versions of these series. All other interested parties, please don't be shy
and provide your feedback.

Konstantin Ananyev (7):
  ethdev: allocate max space for internal queue array
  ethdev: change input parameters for rx_queue_count
  ethdev: copy ethdev 'fast' API into separate structure
  ethdev: make burst functions to use new flat array
  ethdev: add API to retrieve multiple ethernet addresses
  ethdev: remove legacy Rx descriptor done API
  ethdev: hide eth dev related structures

 app/test-pmd/config.c                         |  23 +-
 doc/guides/nics/features.rst                  |   6 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |  21 ++
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/ark/ark_ethdev_rx.c               |   4 +-
 drivers/net/ark/ark_ethdev_rx.h               |   3 +-
 drivers/net/atlantic/atl_ethdev.h             |   2 +-
 drivers/net/atlantic/atl_rxtx.c               |   9 +-
 drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                |   9 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   9 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/e1000/e1000_ethdev.h              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |   1 -
 drivers/net/e1000/em_rxtx.c                   |  21 +-
 drivers/net/e1000/igb_ethdev.c                |   2 -
 drivers/net/e1000/igb_rxtx.c                  |  21 +-
 drivers/net/enic/enic_ethdev.c                |  12 +-
 drivers/net/fm10k/fm10k.h                     |   5 +-
 drivers/net/fm10k/fm10k_ethdev.c              |   1 -
 drivers/net/fm10k/fm10k_rxtx.c                |  29 +-
 drivers/net/hns3/hns3_rxtx.c                  |   7 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |   1 -
 drivers/net/i40e/i40e_ethdev_vf.c             |   1 -
 drivers/net/i40e/i40e_rxtx.c                  |  30 +-
 drivers/net/i40e/i40e_rxtx.h                  |   4 +-
 drivers/net/iavf/iavf_rxtx.c                  |   4 +-
 drivers/net/iavf/iavf_rxtx.h                  |   2 +-
 drivers/net/ice/ice_rxtx.c                    |   4 +-
 drivers/net/ice/ice_rxtx.h                    |   2 +-
 drivers/net/igc/igc_ethdev.c                  |   1 -
 drivers/net/igc/igc_txrx.c                    |  23 +-
 drivers/net/igc/igc_txrx.h                    |   5 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   2 -
 drivers/net/ixgbe/ixgbe_ethdev.h              |   5 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |  22 +-
 drivers/net/mlx5/mlx5_rx.c                    |  26 +-
 drivers/net/mlx5/mlx5_rx.h                    |   2 +-
 drivers/net/netvsc/hn_rxtx.c                  |   4 +-
 drivers/net/netvsc/hn_var.h                   |   3 +-
 drivers/net/nfp/nfp_rxtx.c                    |   4 +-
 drivers/net/nfp/nfp_rxtx.h                    |   3 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   1 -
 drivers/net/octeontx2/otx2_ethdev.h           |   3 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |  20 +-
 drivers/net/sfc/sfc_ethdev.c                  |  29 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   3 +-
 drivers/net/thunderx/nicvf_rxtx.c             |   4 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.h              |   3 +-
 drivers/net/txgbe/txgbe_rxtx.c                |   4 +-
 drivers/net/vhost/rte_eth_vhost.c             |   4 +-
 drivers/net/virtio/virtio_ethdev.c            |   1 -
 lib/ethdev/ethdev_driver.h                    | 149 +++++++++
 lib/ethdev/ethdev_private.c                   |  83 +++++
 lib/ethdev/ethdev_private.h                   |   7 +
 lib/ethdev/rte_ethdev.c                       |  79 +++--
 lib/ethdev/rte_ethdev.h                       | 288 ++++++++++++------
 lib/ethdev/rte_ethdev_core.h                  | 171 +++--------
 lib/ethdev/version.map                        |   6 +
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 67 files changed, 661 insertions(+), 568 deletions(-)

-- 
2.26.3


^ permalink raw reply	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
@ 2021-10-01 14:02     ` Konstantin Ananyev
  2021-10-01 16:48       ` Ferruh Yigit
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
                       ` (7 subsequent siblings)
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

At the queue configure stage, always allocate space for the maximum
possible number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer to
internal queue data pointers without extra checking of the current number
of configured queues.
That would help in the future to hide rte_eth_dev and related structures.
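
As an illustration only (not part of this patch), with a fixed-size queue
array a fast-path helper only has to bound the queue index by the
compile-time constant and look at the pointer itself; an unconfigured
queue simply shows up as NULL:

static inline void *
sketch_rx_queue_data(const struct rte_eth_dev *dev, uint16_t queue_id)
{
	/* rx_queues[] always has RTE_MAX_QUEUES_PER_PORT entries, so this
	 * is the only range check needed; no dependence on the current
	 * value of dev->data->nb_rx_queues.
	 */
	if (queue_id >= RTE_MAX_QUEUES_PER_PORT)
		return NULL;

	return dev->data->rx_queues[queue_id];	/* NULL if not configured */
}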

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
 1 file changed, 9 insertions(+), 27 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca9242..424bc260fa 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -898,7 +898,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
-				sizeof(dev->data->rx_queues[0]) * nb_queues,
+				sizeof(dev->data->rx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
 				RTE_CACHE_LINE_SIZE);
 		if (dev->data->rx_queues == NULL) {
 			dev->data->nb_rx_queues = 0;
@@ -909,21 +910,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		rxq = dev->data->rx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->rx_queue_release)(rxq[i]);
-		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE);
-		if (rxq == NULL)
-			return -(ENOMEM);
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(rxq + old_nb_queues, 0,
-				sizeof(rxq[0]) * new_qs);
+			rxq[i] = NULL;
 		}
 
-		dev->data->rx_queues = rxq;
-
 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
@@ -1138,8 +1129,9 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
-						   sizeof(dev->data->tx_queues[0]) * nb_queues,
-						   RTE_CACHE_LINE_SIZE);
+				sizeof(dev->data->tx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
+				RTE_CACHE_LINE_SIZE);
 		if (dev->data->tx_queues == NULL) {
 			dev->data->nb_tx_queues = 0;
 			return -(ENOMEM);
@@ -1149,21 +1141,11 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		txq = dev->data->tx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->tx_queue_release)(txq[i]);
-		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
-				  RTE_CACHE_LINE_SIZE);
-		if (txq == NULL)
-			return -ENOMEM;
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(txq + old_nb_queues, 0,
-			       sizeof(txq[0]) * new_qs);
+			txq[i] = NULL;
 		}
 
-		dev->data->tx_queues = txq;
-
 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
 
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v3 2/7] ethdev: change input parameters for rx_queue_count
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
@ 2021-10-01 14:02     ` Konstantin Ananyev
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
                       ` (6 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Currently the majority of 'fast' ethdev ops take pointers to internal
queue data structures as an input parameter, while eth_rx_queue_count()
takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends it would be
preferable to unify the parameter list of all 'fast' ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the ethdev public inline
function rte_eth_rx_queue_count().
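
For reference, the new callback prototype together with a hypothetical PMD
implementation (the my_rxq structure and its fields are invented here
purely for illustration, they do not belong to any real driver):

#include <stdint.h>

/* new prototype: the internal queue pointer is passed in directly */
typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);

/* hypothetical receive queue with free-running ring indexes */
struct my_rxq {
	uint16_t prod_idx;	/* producer index, advanced by the HW */
	uint16_t cons_idx;	/* consumer index, advanced on rx burst */
};

static uint32_t
my_pmd_rx_queue_count(void *rx_queue)
{
	struct my_rxq *rxq = rx_queue;

	/* number of descriptors currently pending on the ring */
	return (uint16_t)(rxq->prod_idx - rxq->cons_idx);
}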

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst  |  6 ++++++
 drivers/net/ark/ark_ethdev_rx.c         |  4 ++--
 drivers/net/ark/ark_ethdev_rx.h         |  3 +--
 drivers/net/atlantic/atl_ethdev.h       |  2 +-
 drivers/net/atlantic/atl_rxtx.c         |  9 ++-------
 drivers/net/bnxt/bnxt_ethdev.c          |  8 +++++---
 drivers/net/dpaa/dpaa_ethdev.c          |  9 ++++-----
 drivers/net/dpaa2/dpaa2_ethdev.c        |  9 ++++-----
 drivers/net/e1000/e1000_ethdev.h        |  6 ++----
 drivers/net/e1000/em_rxtx.c             |  4 ++--
 drivers/net/e1000/igb_rxtx.c            |  4 ++--
 drivers/net/enic/enic_ethdev.c          | 12 ++++++------
 drivers/net/fm10k/fm10k.h               |  2 +-
 drivers/net/fm10k/fm10k_rxtx.c          |  4 ++--
 drivers/net/hns3/hns3_rxtx.c            |  7 +++++--
 drivers/net/hns3/hns3_rxtx.h            |  2 +-
 drivers/net/i40e/i40e_rxtx.c            |  4 ++--
 drivers/net/i40e/i40e_rxtx.h            |  3 +--
 drivers/net/iavf/iavf_rxtx.c            |  4 ++--
 drivers/net/iavf/iavf_rxtx.h            |  2 +-
 drivers/net/ice/ice_rxtx.c              |  4 ++--
 drivers/net/ice/ice_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c              |  5 ++---
 drivers/net/igc/igc_txrx.h              |  3 +--
 drivers/net/ixgbe/ixgbe_ethdev.h        |  3 +--
 drivers/net/ixgbe/ixgbe_rxtx.c          |  4 ++--
 drivers/net/mlx5/mlx5_rx.c              | 26 ++++++++++++-------------
 drivers/net/mlx5/mlx5_rx.h              |  2 +-
 drivers/net/netvsc/hn_rxtx.c            |  4 ++--
 drivers/net/netvsc/hn_var.h             |  2 +-
 drivers/net/nfp/nfp_rxtx.c              |  4 ++--
 drivers/net/nfp/nfp_rxtx.h              |  3 +--
 drivers/net/octeontx2/otx2_ethdev.h     |  2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c |  8 ++++----
 drivers/net/sfc/sfc_ethdev.c            | 12 ++++++------
 drivers/net/thunderx/nicvf_ethdev.c     |  3 +--
 drivers/net/thunderx/nicvf_rxtx.c       |  4 ++--
 drivers/net/thunderx/nicvf_rxtx.h       |  2 +-
 drivers/net/txgbe/txgbe_ethdev.h        |  3 +--
 drivers/net/txgbe/txgbe_rxtx.c          |  4 ++--
 drivers/net/vhost/rte_eth_vhost.c       |  4 ++--
 lib/ethdev/rte_ethdev.h                 |  2 +-
 lib/ethdev/rte_ethdev_core.h            |  3 +--
 43 files changed, 103 insertions(+), 110 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73e377a007..5e73c19e07 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -205,6 +205,12 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* ethdev: Input parameters for ``eth_rx_queue_count_t`` were changed.
+  Instead of a pointer to ``rte_eth_dev`` and a queue index, it now accepts
+  a pointer to internal queue data as input parameter. While this change is
+  transparent to the user, it still counts as an ABI change, as
+  ``eth_rx_queue_count_t`` is used by the public inline function ``rte_eth_rx_queue_count``.
+
 
 Known Issues
 ------------
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
 }
 
 uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
 {
 	struct ark_rx_queue *queue;
 
-	queue = dev->data->rx_queues[queue_id];
+	queue = rx_queue;
 	return (queue->prod_index - queue->cons_index);	/* mod arith */
 }
 
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			       unsigned int socket_id,
 			       const struct rte_eth_rxconf *rx_conf,
 			       struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
-				    uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
 int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c..e808460520 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
 
 int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..35bb13044e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 /* Return Rx queue avail count */
 
 uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
 {
 	struct atl_rx_queue *rxq;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
-		return 0;
-	}
-
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 
 	if (rxq == NULL)
 		return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 097dd10de9..e07242e961 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3130,20 +3130,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
 }
 
 static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
 {
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt *bp;
 	struct bnxt_cp_ring_info *cpr;
 	uint32_t desc = 0, raw_cons, cp_ring_size;
 	struct bnxt_rx_queue *rxq;
 	struct rx_pkt_cmpl *rxcmp;
 	int rc;
 
+	rxq = rx_queue;
+	bp = rxq->bp;
+
 	rc = is_bnxt_in_error(bp);
 	if (rc)
 		return rc;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
 	cpr = rxq->cp_ring;
 	raw_cons = cpr->cp_raw_cons;
 	cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..b5589300c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 }
 
 static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-	struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+	struct qman_fq *rxq = rx_queue;
 	u32 frm_cnt = 0;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
-		DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
-			       rx_queue_id, frm_cnt);
+		DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+			       rx_queue, frm_cnt);
 	}
 	return frm_cnt;
 }
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..b295af2a57 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1011,10 +1011,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused)
 }
 
 static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
 {
 	int32_t ret;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct qbman_swp *swp;
 	struct qbman_fq_query_np_rslt state;
@@ -1031,12 +1030,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = rx_queue;
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
-				rx_queue_id, frame_cnt);
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+				rx_queue, frame_cnt);
 	}
 	return frame_cnt;
 }
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6..460e130a83 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
 
 int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
@@ -476,8 +475,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
 
 int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..40de36cb20 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
 {
 #define EM_RXQ_SCAN_INTERVAL 4
 	volatile struct e1000_rx_desc *rxdp;
 	struct em_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712..3210a0e008 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
 {
 #define IGB_RXQ_SCAN_INTERVAL 4
 	volatile union e1000_adv_rx_desc *rxdp;
 	struct igb_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..5b2d60ad9c 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq)
 	enic_free_rq(rxq);
 }
 
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
-					   uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
 {
-	struct enic *enic = pmd_priv(dev);
+	struct enic *enic;
+	struct vnic_rq *sop_rq;
 	uint32_t queue_count = 0;
 	struct vnic_cq *cq;
 	uint32_t cq_tail;
 	uint16_t cq_idx;
-	int rq_num;
 
-	rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
-	cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+	sop_rq = rx_queue;
+	enic = vnic_dev_priv(sop_rq->vdev);
+	cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
 	cq_idx = cq->to_clean;
 
 	cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc..648d12a1b4 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
 
 int
 fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..eab798e52c 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 }
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
 {
 #define FM10K_RXQ_SCAN_INTERVAL 4
 	volatile union fm10k_rx_desc *rxdp;
 	struct fm10k_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->hw_ring[rxq->next_dd];
 	while ((desc < rxq->nb_desc) &&
 		rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..04791ae7d0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 }
 
 uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
 {
 	/*
 	 * Number of BDs that have been processed by the driver
@@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	 */
 	uint32_t driver_hold_bd_num;
 	struct hns3_rx_queue *rxq;
+	const struct rte_eth_dev *dev;
 	uint32_t fbd_num;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
+	dev = &rte_eth_devices[rxq->port_id];
+
 	fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
 	if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
 	    dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..34a028701f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			struct rte_mempool *mp);
 int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
 int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 3eb82578b0..5493ae6bba 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2117,14 +2117,14 @@ i40e_dev_rx_queue_release(void *rxq)
 }
 
 uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
 {
 #define I40E_RXQ_SCAN_INTERVAL 4
 	volatile union i40e_rx_desc *rxdp;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 	while ((desc < rxq->nb_rx_desc) &&
 		((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e8..a08b80f020 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
-				 uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
 int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 87afc0b4cb..3dc1f04380 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2799,14 +2799,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 
 /* Get the number of used descriptors of a rx queue */
 uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
 {
 #define IAVF_RXQ_SCAN_INTERVAL 4
 	volatile union iavf_rx_desc *rxdp;
 	struct iavf_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..2f7bec2b63 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_rxq_info *qinfo);
 void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
 int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047..61936b0ab1 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1427,14 +1427,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
 {
 #define ICE_RXQ_SCAN_INTERVAL 4
 	volatile union ice_rx_flex_desc *rxdp;
 	struct ice_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 	while ((desc < rxq->nb_rx_desc) &&
 	       rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index b10db0874d..b45abec91a 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -222,7 +222,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 void ice_set_tx_function_flag(struct rte_eth_dev *dev,
 			      struct ice_tx_queue *txq);
 void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd2..437992ecdf 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq)
 		igc_rx_queue_release(rxq);
 }
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
 {
 	/**
 	 * Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
 	struct igc_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index f2b2d75bbc..b0c4b3ebd9 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
 
 int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca24..c5027be1dc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -602,8 +602,7 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
 
 int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755..1f802851e3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define IXGBE_RXQ_SCAN_INTERVAL 4
 	volatile union ixgbe_adv_rx_desc *rxdp;
 	struct ixgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 /**
  * DPDK callback to get the number of used descriptors in a RX queue.
  *
- * @param dev
- *   Pointer to the device structure.
- *
- * @param rx_queue_id
- *   The Rx queue.
+ * @param rx_queue
+ *   The Rx queue pointer.
  *
  * @return
  *   The number of used rx descriptor.
  *   -EINVAL if the queue is invalid
  */
 uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_data *rxq = rx_queue;
+	struct rte_eth_dev *dev;
+
+	if (!rxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	dev = &rte_eth_devices[rxq->port_id];
 
 	if (dev->rx_pkt_burst == NULL ||
 	    dev->rx_pkt_burst == removed_rx_burst) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	rxq = (*priv->rxqs)[rx_queue_id];
-	if (!rxq) {
-		rte_errno = EINVAL;
-		return -rte_errno;
-	}
+
 	return rx_queue_count(rxq);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..5e4ac7324d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
 uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
 			  uint16_t pkts_n);
 int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
 void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		       struct rte_eth_rxq_info *qinfo);
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..30aac371c8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg)
  * For this device that means how many packets are pending in the ring.
  */
 uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
 {
-	struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+	struct hn_rx_queue *rxq = rx_queue;
 
 	return rte_ring_count(rxq->rx_ring);
 }
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 43642408bc..2a2bac9338 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int	hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 void	hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_rxq_info *qinfo);
 void	hn_dev_rx_queue_release(void *arg);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
 int	hn_dev_rx_queue_status(void *rxq, uint16_t offset);
 void	hn_dev_free_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..4b2ac4cc43 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 }
 
 uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
 	uint32_t idx;
 	uint32_t count;
 
-	rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index b0a8bf81b0..0fd50a6c22 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
 } __rte_aligned(64);
 
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
-				       uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 void nfp_net_rx_queue_release(void *rxq);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30b..6696db6f6f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d..e6f8e5bfc1 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
 }
 
 uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
 {
-	struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
-	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_eth_rxq *rxq = rx_queue;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
 	uint32_t head, tail;
 
-	nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+	nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
 	return (tail - head) % rxq->qlen;
 }
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..4b5713f3ec 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
 {
-	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
-	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
-	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_dp_rxq *dp_rxq = rx_queue;
+	const struct sfc_dp_rx *dp_rx;
 	struct sfc_rxq_info *rxq_info;
 
-	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+	rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
 
-	return sap->dp_rx->qdesc_npending(rxq_info->dp);
+	return dp_rx->qdesc_npending(dp_rxq);
 }
 
 /*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..0e87620e42 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
 	if (dev->rx_pkt_burst == NULL)
 		return;
 
-	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
-				nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
 		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
 					NICVF_MAX_RX_FREE_THRESH);
 		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
 }
 
 uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
 {
 	struct nicvf_rxq *rxq;
 
-	rxq = dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
 
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
 	*(uint64_t *)(&pkt->rearm_data) = init.value;
 }
 
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
 uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..569cd6a48f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -446,8 +446,7 @@ int  txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
 
 int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..2a7cfdeedb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define TXGBE_RXQ_SCAN_INTERVAL 4
 	volatile struct txgbe_rx_desc *rxdp;
 	struct txgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..f2b3f142d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
 {
 	struct vhost_queue *vq;
 
-	vq = dev->data->rx_queues[rx_queue_id];
+	vq = rx_queue;
 	if (vq == NULL)
 		return 0;
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674..9642b7c00f 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5060,7 +5060,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	    dev->data->rx_queues[queue_id] == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev, queue_id);
+	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
 }
 
 /**
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..00f27c643a 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
 
 
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
-					 uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */
 
 typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-01 14:02     ` Konstantin Ananyev
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
                       ` (5 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Copy the public function pointers (rx_pkt_burst(), etc.) and the related
pointers to internal data from the rte_eth_dev structure into a
separate flat array. That array will remain in a public header.
The intention here is to make rte_eth_dev and related structures internal.
That should allow future possible changes to core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
The plan is to keep a minimal part of the data from rte_eth_dev public,
so we can still use inline functions for 'fast' calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
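
The structure below is laid out so that the hot function pointers and the
queue-data pointers occupy two separate 64-byte cache lines (see the
'first 64B line' / 'second 64B line' comments in the diff). Assuming
64-bit pointers and a 64-byte cache line, that layout can be sanity-checked
at build time with a sketch like this (not part of the patch):

#include <assert.h>
#include <rte_ethdev.h>

/* 6 function pointers + 2 reserved words fill the first 64B line,
 * the rxq/txq qdata pairs + 4 reserved words fill the second one;
 * valid for LP64 targets only.
 */
static_assert(sizeof(struct rte_eth_fp_ops) == 128,
	"rte_eth_fp_ops expected to span exactly two 64B cache lines");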

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_private.h  |  7 +++++
 lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
 lib/ethdev/rte_ethdev_core.h | 45 +++++++++++++++++++++++++++++++
 4 files changed, 121 insertions(+)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..3eeda6e9f9 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
 		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
 	return str == NULL ? -1 : 0;
 }
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+		__rte_unused struct rte_mbuf **rx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_eth_fp_ops dummy_ops = {
+		.rx_pkt_burst = dummy_eth_rx_burst,
+		.tx_pkt_burst = dummy_eth_tx_burst,
+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
+		.txq = {.data = dummy_data, .clbk = dummy_data,},
+	};
+
+	*fpo = dummy_ops;
+}
+
+void
+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev)
+{
+	fpo->rx_pkt_burst = dev->rx_pkt_burst;
+	fpo->tx_pkt_burst = dev->tx_pkt_burst;
+	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
+	fpo->rx_queue_count = dev->rx_queue_count;
+	fpo->rx_descriptor_status = dev->rx_descriptor_status;
+	fpo->tx_descriptor_status = dev->tx_descriptor_status;
+
+	fpo->rxq.data = dev->data->rx_queues;
+	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+	fpo->txq.data = dev->data->tx_queues;
+	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 3724429577..40333e7651 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
 /* Parse devargs value for representor parameter. */
 int rte_eth_devargs_parse_representor_ports(char *str, void *data);
 
+/* reset eth 'fast' API to dummy values */
+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
+
+/* setup eth 'fast' API to ethdev values */
+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev);
+
 #endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 424bc260fa..9fbb1bc3db 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
+/* public 'fast' API */
+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 /* spinlock for eth device callbacks */
 static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 
+	/* expose selection of PMD rx/tx function */
+	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
+
 	rte_ethdev_trace_start(port_id);
 	return 0;
 }
@@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
 		return 0;
 	}
 
+	/* point rx/tx functions to dummy ones */
+	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
+
 	dev->data->dev_started = 0;
 	ret = (*dev->dev_ops->dev_stop)(dev);
 	rte_ethdev_trace_stop(port_id, ret);
@@ -4568,6 +4577,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
 	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
 }
 
+RTE_INIT(eth_dev_init_fp_ops)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
+		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
+}
+
 RTE_INIT(eth_dev_init_cb_lists)
 {
 	uint16_t i;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 00f27c643a..1335ef8bb2 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 /**< @internal Check the status of a Tx descriptor */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev Rx/Tx
+ * queues data.
+ * The main purpose of exposing these pointers at all is to allow the
+ * compiler to fetch this data for 'fast' ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+	void **data;
+	/**< points to array of internal queue data pointers */
+	void **clbk;
+	/**< points to array of queue callback data pointers */
+};
+
+/**
+ * @internal
+ * 'fast' ethdev functions and related data are held in a flat array,
+ * one entry per ethdev.
+ */
+struct rte_eth_fp_ops {
+
+	/** first 64B line */
+	eth_rx_burst_t rx_pkt_burst;
+	/**< PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst;
+	/**< PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+	uintptr_t reserved[2];
+
+	/** second 64B line */
+	struct rte_ethdev_qdata rxq;
+	struct rte_ethdev_qdata txq;
+	uintptr_t reserved2[4];
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 
 /**
  * @internal
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
                       ` (2 preceding siblings ...)
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
@ 2021-10-01 14:02     ` Konstantin Ananyev
  2021-10-01 16:46       ` Ferruh Yigit
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 5/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
                       ` (4 subsequent siblings)
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Rework the 'fast' burst functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user applications are required)
and PMD developers (no changes in PMDs are required).
One extra thing to note - RX/TX callback invocation will cause an extra
function call with these changes. That might cause some insignificant
slowdown for code paths where RX/TX callbacks are heavily involved.
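
To see the code path that now goes through the extra out-of-line call,
consider an application that registers an Rx callback. This is only a
usage reminder built on the existing ethdev callback API, not part of
the patch:

#include <rte_common.h>
#include <rte_ethdev.h>

/* application Rx callback: after this patch it is reached through the
 * out-of-line __rte_eth_rx_epilog() helper instead of an inline loop.
 */
static uint16_t
count_rx_cb(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *pkts[],
	    uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
{
	uint64_t *cnt = user_param;

	RTE_SET_USED(port_id);
	RTE_SET_USED(queue_id);
	RTE_SET_USED(pkts);
	RTE_SET_USED(max_pkts);

	*cnt += nb_pkts;
	return nb_pkts;
}

/* during setup:
 *	static uint64_t rx_cnt;
 *	rte_eth_add_rx_callback(port_id, queue_id, count_rx_cb, &rx_cnt);
 */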

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c |  31 +++++
 lib/ethdev/rte_ethdev.h     | 244 ++++++++++++++++++++++++++----------
 lib/ethdev/version.map      |   5 +
 3 files changed, 212 insertions(+), 68 deletions(-)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 3eeda6e9f9..27d29b2ac6 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->txq.data = dev->data->tx_queues;
 	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
 }
+
+uint16_t
+__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+	void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+		cb = cb->next;
+	}
+
+	return nb_rx;
+}
+
+uint16_t
+__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+				cb->param);
+		cb = cb->next;
+	}
+
+	return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9642b7c00f..20ac5ba88d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4904,6 +4904,34 @@ int rte_eth_representor_info_get(uint16_t port_id,
 
 #include <rte_ethdev_core.h>
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *   The address of an array of pointers to *rte_mbuf* structures that
+ *   have been retrieved from the device.
+ * @param nb_rx
+ *   The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *   The number of elements in *rx_pkts* array.
+ * @param opaque
+ *   Opaque pointer of RX queue callback related data.
+ *
+ * @return
+ *  The number of packets effectively supplied to the *rx_pkts* array.
+ */
+__rte_experimental
+uint16_t __rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+		void *opaque);
+
 /**
  *
  * Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4995,23 +5023,37 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	uint16_t nb_rx;
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-				     rx_pkts, nb_pkts);
+
+	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5019,16 +5061,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-						nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_rx = __rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb);
 #endif
 
 	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
@@ -5051,16 +5087,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-	    dev->data->rx_queues[queue_id] == NULL)
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+	if (qd == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+	return (int)(*p->rx_queue_count)(qd);
 }
 
 /**
@@ -5133,21 +5180,30 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
-
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+	return (*p->rx_descriptor_status)(qd, offset);
 }
 
 /**@{@name Tx hardware descriptor states
@@ -5194,23 +5250,55 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
 	uint16_t queue_id, uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
-
-	return (*dev->tx_descriptor_status)(txq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+	return (*p->tx_descriptor_status)(qd, offset);
 }
 
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called before entering the PMD's rte_eth_tx_burst implementation.
+ * Does necessary pre-processing - invokes TX callbacks if any, etc.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue through which output packets must be
+ *   sent.
+ * @param tx_pkts
+ *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *   which contain the output packets.
+ * @param nb_pkts
+ *   The maximum number of packets to transmit.
+ * @param opaque
+ *   Opaque pointer of TX queue callback related data.
+ * @return
+ *   The number of output packets to transmit.
+ */
+__rte_experimental
+uint16_t __rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
 /**
  * Send a burst of output packets on a transmit queue of an Ethernet device.
  *
@@ -5281,20 +5369,34 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5302,21 +5404,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_pkts = __rte_eth_tx_prolog(port_id, queue_id, tx_pkts,
+				nb_pkts, cb);
 #endif
 
-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+	return nb_pkts;
 }
 
 /**
@@ -5379,31 +5476,42 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
 		rte_errno = ENODEV;
 		return 0;
 	}
 #endif
 
-	dev = &rte_eth_devices[port_id];
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 #endif
 
-	if (!dev->tx_pkt_prepare)
+	if (!p->tx_pkt_prepare)
 		return nb_pkts;
 
-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
-			tx_pkts, nb_pkts);
+	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
 }
 
 #else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..3a1278b448 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -247,11 +247,16 @@ EXPERIMENTAL {
 	rte_mtr_meter_policy_delete;
 	rte_mtr_meter_policy_update;
 	rte_mtr_meter_policy_validate;
+
+	# added in 21.05
+	__rte_eth_rx_epilog;
+	__rte_eth_tx_prolog;
 };
 
 INTERNAL {
 	global:
 
+	rte_eth_fp_ops;
 	rte_eth_dev_allocate;
 	rte_eth_dev_allocated;
 	rte_eth_dev_attach_secondary;
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v3 5/7] ethdev: add API to retrieve multiple ethernet addresses
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
                       ` (3 preceding siblings ...)
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-01 14:02     ` Konstantin Ananyev
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 6/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
                       ` (3 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Introduce rte_eth_macaddrs_get() to allow the user to retrieve all
Ethernet addresses assigned to a given port.
Change testpmd to use this new function and avoid referencing
rte_eth_devices[] directly.
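
For illustration, a minimal usage sketch of the new call, mirroring the
testpmd change below; the function name and error handling here are
placeholders and not part of this patch:

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* print every non-zero MAC address assigned to the given port */
    static void
    print_port_macs(uint16_t port_id)
    {
            char buf[RTE_ETHER_ADDR_FMT_SIZE];
            struct rte_eth_dev_info dev_info;
            int32_t i, rc;

            if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                    return;

            struct rte_ether_addr addr[dev_info.max_mac_addrs];

            rc = rte_eth_macaddrs_get(port_id, addr, dev_info.max_mac_addrs);
            if (rc < 0)
                    return;

            for (i = 0; i < rc; i++) {
                    if (rte_is_zero_ether_addr(&addr[i]))
                            continue;
                    rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE,
                                    &addr[i]);
                    printf("  %s\n", buf);
            }
    }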

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test-pmd/config.c                  | 23 +++++++++++------------
 doc/guides/rel_notes/release_21_11.rst |  5 +++++
 lib/ethdev/rte_ethdev.c                | 25 +++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.h                | 19 +++++++++++++++++++
 lib/ethdev/version.map                 |  1 +
 5 files changed, 61 insertions(+), 12 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9c66329e96..7221644230 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5215,20 +5215,20 @@ show_macs(portid_t port_id)
 {
 	char buf[RTE_ETHER_ADDR_FMT_SIZE];
 	struct rte_eth_dev_info dev_info;
-	struct rte_ether_addr *addr;
-	uint32_t i, num_macs = 0;
-	struct rte_eth_dev *dev;
-
-	dev = &rte_eth_devices[port_id];
+	int32_t i, rc, num_macs = 0;
 
 	if (eth_dev_info_get_print_err(port_id, &dev_info))
 		return;
 
-	for (i = 0; i < dev_info.max_mac_addrs; i++) {
-		addr = &dev->data->mac_addrs[i];
+	struct rte_ether_addr addr[dev_info.max_mac_addrs];
+	rc = rte_eth_macaddrs_get(port_id, addr, dev_info.max_mac_addrs);
+	if (rc < 0)
+		return;
+
+	for (i = 0; i < rc; i++) {
 
 		/* skip zero address */
-		if (rte_is_zero_ether_addr(addr))
+		if (rte_is_zero_ether_addr(&addr[i]))
 			continue;
 
 		num_macs++;
@@ -5236,14 +5236,13 @@ show_macs(portid_t port_id)
 
 	printf("Number of MAC address added: %d\n", num_macs);
 
-	for (i = 0; i < dev_info.max_mac_addrs; i++) {
-		addr = &dev->data->mac_addrs[i];
+	for (i = 0; i < rc; i++) {
 
 		/* skip zero address */
-		if (rte_is_zero_ether_addr(addr))
+		if (rte_is_zero_ether_addr(&addr[i]))
 			continue;
 
-		rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, addr);
+		rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, &addr[i]);
 		printf("  %s\n", buf);
 	}
 }
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 5e73c19e07..f959d95a32 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -121,6 +121,11 @@ New Features
   * Added tests to validate packets hard expiry.
   * Added tests to verify tunnel header verification in IPsec inbound.
 
+* **Added new function to the ethdev library.**
+
+  * Added ``rte_eth_macaddrs_get`` to allow the user to retrieve all Ethernet
+    addresses assigned to a given Ethernet port.
+
 
 Removed Items
 -------------
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9fbb1bc3db..a74ef83aac 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3572,6 +3572,31 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	return ret;
 }
 
+int
+rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[], uint32_t num)
+{
+	int32_t ret;
+	struct rte_eth_dev *dev;
+	struct rte_eth_dev_info dev_info;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
+	if (ma == NULL) {
+		RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__);
+		return -EINVAL;
+	}
+
+	num = RTE_MIN(dev_info.max_mac_addrs, num);
+	memcpy(ma, dev->data->mac_addrs, num * sizeof(ma[0]));
+
+	return num;
+}
+
 int
 rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
 {
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 20ac5ba88d..9038aea329 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3037,6 +3037,25 @@ int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
  */
 int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
 
+/**
+ * Retrieve the Ethernet addresses of an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param ma
+ *   A pointer to an array of structures of type *ether_addr* to be filled with
+ *   the Ethernet addresses of the Ethernet device.
+ * @param num
+ *   Number of elements in the *ma* array.
+ * @return
+ *   - number of retrieved addresses if successful
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-EINVAL) if bad parameter.
+ */
+__rte_experimental
+int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[],
+	uint32_t num);
+
 /**
  * Retrieve the contextual information of an Ethernet device.
  *
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 3a1278b448..979260bdc2 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -251,6 +251,7 @@ EXPERIMENTAL {
 	# added in 21.05
 	__rte_eth_rx_epilog;
 	__rte_eth_tx_prolog;
+	rte_eth_macaddrs_get;
 };
 
 INTERNAL {
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v3 6/7] ethdev: remove legacy Rx descriptor done API
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
                       ` (4 preceding siblings ...)
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 5/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
@ 2021-10-01 14:02     ` Konstantin Ananyev
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
                       ` (2 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

rte_eth_rx_descriptor_status() should be used as a replacement.
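
For code that still uses the removed call, a minimal migration sketch; the
wrapper name is a placeholder and not part of this patch:

    #include <rte_ethdev.h>

    /* before: rte_eth_rx_descriptor_done(port_id, queue_id, offset) == 1
     * meant the DD bit was set; the equivalent check is now:
     */
    static inline int
    rxd_is_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
    {
            return rte_eth_rx_descriptor_status(port_id, queue_id, offset) ==
                            RTE_ETH_RX_DESC_DONE;
    }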

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/nics/features.rst            |  6 +-----
 doc/guides/rel_notes/deprecation.rst    |  5 -----
 doc/guides/rel_notes/release_21_11.rst  |  4 ++++
 drivers/net/e1000/e1000_ethdev.h        |  4 ----
 drivers/net/e1000/em_ethdev.c           |  1 -
 drivers/net/e1000/em_rxtx.c             | 17 ----------------
 drivers/net/e1000/igb_ethdev.c          |  2 --
 drivers/net/e1000/igb_rxtx.c            | 17 ----------------
 drivers/net/fm10k/fm10k.h               |  3 ---
 drivers/net/fm10k/fm10k_ethdev.c        |  1 -
 drivers/net/fm10k/fm10k_rxtx.c          | 25 ------------------------
 drivers/net/i40e/i40e_ethdev.c          |  1 -
 drivers/net/i40e/i40e_ethdev_vf.c       |  1 -
 drivers/net/i40e/i40e_rxtx.c            | 26 -------------------------
 drivers/net/i40e/i40e_rxtx.h            |  1 -
 drivers/net/igc/igc_ethdev.c            |  1 -
 drivers/net/igc/igc_txrx.c              | 18 -----------------
 drivers/net/igc/igc_txrx.h              |  2 --
 drivers/net/ixgbe/ixgbe_ethdev.c        |  2 --
 drivers/net/ixgbe/ixgbe_ethdev.h        |  2 --
 drivers/net/ixgbe/ixgbe_rxtx.c          | 18 -----------------
 drivers/net/octeontx2/otx2_ethdev.c     |  1 -
 drivers/net/octeontx2/otx2_ethdev.h     |  1 -
 drivers/net/octeontx2/otx2_ethdev_ops.c | 12 ------------
 drivers/net/sfc/sfc_ethdev.c            | 17 ----------------
 drivers/net/virtio/virtio_ethdev.c      |  1 -
 lib/ethdev/rte_ethdev.c                 |  1 -
 lib/ethdev/rte_ethdev.h                 | 25 ------------------------
 lib/ethdev/rte_ethdev_core.h            |  4 ----
 29 files changed, 5 insertions(+), 214 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4fce8cd1c9..a02ef25409 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -662,14 +662,10 @@ Rx descriptor status
 --------------------
 
 Supports check the status of a Rx descriptor. When ``rx_descriptor_status`` is
-used, status can be "Available", "Done" or "Unavailable". When
-``rx_descriptor_done`` is used, status can be "DD bit is set" or "DD bit is
-not set".
+used, status can be "Available", "Done" or "Unavailable".
 
 * **[implements] rte_eth_dev**: ``rx_descriptor_status``.
 * **[related]    API**: ``rte_eth_rx_descriptor_status()``.
-* **[implements] rte_eth_dev**: ``rx_descriptor_done``.
-* **[related]    API**: ``rte_eth_rx_descriptor_done()``.
 
 
 .. _nic_features_tx_descriptor_status:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05fc2fdee7..82e843a0b3 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -106,11 +106,6 @@ Deprecation Notices
   the device packet overhead can be calculated as:
   ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
 
-* ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
-  will be removed in 21.11.
-  Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
-  APIs can be used as replacement.
-
 * ethdev: The port mirroring API can be replaced with a more fine grain flow API.
   The structs ``rte_eth_mirror_conf``, ``rte_eth_vlan_mirror`` and the functions
   ``rte_eth_mirror_rule_set``, ``rte_eth_mirror_rule_reset`` will be marked
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index f959d95a32..532be57ae9 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -146,6 +146,10 @@ Removed Items
   blacklist/whitelist are removed. Users must use the new
   block/allow list arguments.
 
+* ethdev: Removed ``rx_descriptor_done`` dev_ops and
+  ``rte_eth_rx_descriptor_done``. The existing ``rte_eth_rx_descriptor_status``
+  API can be used as a replacement.
+
 
 API Changes
 -----------
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 460e130a83..fff52958df 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -401,8 +401,6 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 
 uint32_t eth_igb_rx_queue_count(void *rx_queue);
 
-int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
@@ -477,8 +475,6 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 
 uint32_t eth_em_rx_queue_count(void *rx_queue);
 
-int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b02..9e157e4ffe 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -247,7 +247,6 @@ eth_em_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &eth_em_ops;
 	eth_dev->rx_queue_count = eth_em_rx_queue_count;
-	eth_dev->rx_descriptor_done   = eth_em_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = eth_em_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = eth_em_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = (eth_rx_burst_t)&eth_em_recv_pkts;
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 40de36cb20..13ea3a77f4 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1511,23 +1511,6 @@ eth_em_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int
-eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile struct e1000_rx_desc *rxdp;
-	struct em_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(offset >= rxq->nb_rx_desc))
-		return 0;
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->status & E1000_RXD_STAT_DD);
-}
-
 int
 eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index d80fad01e3..e1bc3852fc 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -726,7 +726,6 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &eth_igb_ops;
 	eth_dev->rx_queue_count = eth_igb_rx_queue_count;
-	eth_dev->rx_descriptor_done   = eth_igb_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = eth_igb_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = eth_igb_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &eth_igb_recv_pkts;
@@ -920,7 +919,6 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_FUNC_TRACE();
 
 	eth_dev->dev_ops = &igbvf_eth_dev_ops;
-	eth_dev->rx_descriptor_done   = eth_igb_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = eth_igb_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = eth_igb_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &eth_igb_recv_pkts;
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 3210a0e008..0ee1b8d48d 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1791,23 +1791,6 @@ eth_igb_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int
-eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union e1000_adv_rx_desc *rxdp;
-	struct igb_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(offset >= rxq->nb_rx_desc))
-		return 0;
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->wb.upper.status_error & E1000_RXD_STAT_DD);
-}
-
 int
 eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 648d12a1b4..17c73c4dc5 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -326,9 +326,6 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 uint32_t
 fm10k_dev_rx_queue_count(void *rx_queue);
 
-int
-fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int
 fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e40..f9c287a6ba 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -3062,7 +3062,6 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
 
 	dev->dev_ops = &fm10k_eth_dev_ops;
 	dev->rx_queue_count = fm10k_dev_rx_queue_count;
-	dev->rx_descriptor_done	= fm10k_dev_rx_descriptor_done;
 	dev->rx_descriptor_status = fm10k_dev_rx_descriptor_status;
 	dev->tx_descriptor_status = fm10k_dev_tx_descriptor_status;
 	dev->rx_pkt_burst = &fm10k_recv_pkts;
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index eab798e52c..b3515ae96a 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -393,31 +393,6 @@ fm10k_dev_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int
-fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union fm10k_rx_desc *rxdp;
-	struct fm10k_rx_queue *rxq = rx_queue;
-	uint16_t desc;
-	int ret;
-
-	if (unlikely(offset >= rxq->nb_desc)) {
-		PMD_DRV_LOG(ERR, "Invalid RX descriptor offset %u", offset);
-		return 0;
-	}
-
-	desc = rxq->next_dd + offset;
-	if (desc >= rxq->nb_desc)
-		desc -= rxq->nb_desc;
-
-	rxdp = &rxq->hw_ring[desc];
-
-	ret = !!(rxdp->w.status &
-			rte_cpu_to_le_16(FM10K_RXD_STATUS_DD));
-
-	return ret;
-}
-
 int
 fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7a2a8281d2..4fbaac4272 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1434,7 +1434,6 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 
 	dev->dev_ops = &i40e_eth_dev_ops;
 	dev->rx_queue_count = i40e_dev_rx_queue_count;
-	dev->rx_descriptor_done = i40e_dev_rx_descriptor_done;
 	dev->rx_descriptor_status = i40e_dev_rx_descriptor_status;
 	dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
 	dev->rx_pkt_burst = i40e_recv_pkts;
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index e8dd6d1dab..d669ffd250 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1571,7 +1571,6 @@ i40evf_dev_init(struct rte_eth_dev *eth_dev)
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &i40evf_eth_dev_ops;
 	eth_dev->rx_queue_count       = i40e_dev_rx_queue_count;
-	eth_dev->rx_descriptor_done   = i40e_dev_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = i40e_dev_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &i40e_recv_pkts;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 5493ae6bba..fad432d1bd 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2145,32 +2145,6 @@ i40e_dev_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int
-i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union i40e_rx_desc *rxdp;
-	struct i40e_rx_queue *rxq = rx_queue;
-	uint16_t desc;
-	int ret;
-
-	if (unlikely(offset >= rxq->nb_rx_desc)) {
-		PMD_DRV_LOG(ERR, "Invalid RX descriptor id %u", offset);
-		return 0;
-	}
-
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &(rxq->rx_ring[desc]);
-
-	ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
-		I40E_RXD_QW1_STATUS_MASK) >> I40E_RXD_QW1_STATUS_SHIFT) &
-				(1 << I40E_RX_DESC_STATUS_DD_SHIFT));
-
-	return ret;
-}
-
 int
 i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index a08b80f020..d495a741b6 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -226,7 +226,6 @@ int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
 uint32_t i40e_dev_rx_queue_count(void *rx_queue);
-int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a095483..9ddfe68a78 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1227,7 +1227,6 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 	dev->dev_ops = &eth_igc_ops;
-	dev->rx_descriptor_done	= eth_igc_rx_descriptor_done;
 	dev->rx_queue_count = eth_igc_rx_queue_count;
 	dev->rx_descriptor_status = eth_igc_rx_descriptor_status;
 	dev->tx_descriptor_status = eth_igc_tx_descriptor_status;
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 437992ecdf..2498cfd290 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -756,24 +756,6 @@ uint32_t eth_igc_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union igc_adv_rx_desc *rxdp;
-	struct igc_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(!rxq || offset >= rxq->nb_rx_desc))
-		return 0;
-
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->wb.upper.status_error &
-			rte_cpu_to_le_32(IGC_RXD_STAT_DD));
-}
-
 int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
 	struct igc_rx_queue *rxq = rx_queue;
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index b0c4b3ebd9..3b4c7450cd 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -24,8 +24,6 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 
 uint32_t eth_igc_rx_queue_count(void *rx_queue);
 
-int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset);
 
 int eth_igc_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 47693c0c47..78f61d3dac 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1057,7 +1057,6 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 
 	eth_dev->dev_ops = &ixgbe_eth_dev_ops;
 	eth_dev->rx_queue_count       = ixgbe_dev_rx_queue_count;
-	eth_dev->rx_descriptor_done   = ixgbe_dev_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = ixgbe_dev_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = ixgbe_dev_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
@@ -1546,7 +1545,6 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_FUNC_TRACE();
 
 	eth_dev->dev_ops = &ixgbevf_eth_dev_ops;
-	eth_dev->rx_descriptor_done   = ixgbe_dev_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = ixgbe_dev_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = ixgbe_dev_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index c5027be1dc..6b7a4079db 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -604,8 +604,6 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 
 uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
 
-int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 1f802851e3..8e056db761 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3281,24 +3281,6 @@ ixgbe_dev_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int
-ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union ixgbe_adv_rx_desc *rxdp;
-	struct ixgbe_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(offset >= rxq->nb_rx_desc))
-		return 0;
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->wb.upper.status_error &
-			rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD));
-}
-
 int
 ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e..4b33056085 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2449,7 +2449,6 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	int rc, max_entries;
 
 	eth_dev->dev_ops = &otx2_eth_dev_ops;
-	eth_dev->rx_descriptor_done = otx2_nix_rx_descriptor_done;
 	eth_dev->rx_queue_count = otx2_nix_rx_queue_count;
 	eth_dev->rx_descriptor_status = otx2_nix_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = otx2_nix_tx_descriptor_status;
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 6696db6f6f..d28fcaa281 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -433,7 +433,6 @@ int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
-int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index e6f8e5bfc1..3a763f691b 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -365,18 +365,6 @@ nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
 	return 0;
 }
 
-int
-otx2_nix_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	struct otx2_eth_rxq *rxq = rx_queue;
-	uint32_t head, tail;
-
-	nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
-			     &head, &tail, rxq->rq);
-
-	return nix_offset_has_packet(head, tail, offset);
-}
-
 int
 otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 4b5713f3ec..c9b01480f8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1296,21 +1296,6 @@ sfc_rx_queue_count(void *rx_queue)
 	return dp_rx->qdesc_npending(dp_rxq);
 }
 
-/*
- * The function is used by the secondary process as well. It must not
- * use any process-local pointers from the adapter data.
- */
-static int
-sfc_rx_descriptor_done(void *queue, uint16_t offset)
-{
-	struct sfc_dp_rxq *dp_rxq = queue;
-	const struct sfc_dp_rx *dp_rx;
-
-	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
-
-	return offset < dp_rx->qdesc_npending(dp_rxq);
-}
-
 /*
  * The function is used by the secondary process as well. It must not
  * use any process-local pointers from the adapter data.
@@ -2045,7 +2030,6 @@ sfc_eth_dev_set_ops(struct rte_eth_dev *dev)
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 
 	dev->rx_queue_count = sfc_rx_queue_count;
-	dev->rx_descriptor_done = sfc_rx_descriptor_done;
 	dev->rx_descriptor_status = sfc_rx_descriptor_status;
 	dev->tx_descriptor_status = sfc_tx_descriptor_status;
 	dev->dev_ops = &sfc_eth_dev_ops;
@@ -2153,7 +2137,6 @@ sfc_eth_dev_secondary_init(struct rte_eth_dev *dev, uint32_t logtype_main)
 	dev->tx_pkt_prepare = dp_tx->pkt_prepare;
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 	dev->rx_queue_count = sfc_rx_queue_count;
-	dev->rx_descriptor_done = sfc_rx_descriptor_done;
 	dev->rx_descriptor_status = sfc_rx_descriptor_status;
 	dev->tx_descriptor_status = sfc_tx_descriptor_status;
 	dev->dev_ops = &sfc_eth_dev_secondary_ops;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index da1633d77e..c82089930f 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1894,7 +1894,6 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 	}
 
 	eth_dev->dev_ops = &virtio_eth_dev_ops;
-	eth_dev->rx_descriptor_done = virtio_dev_rx_queue_done;
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		set_rxtx_funcs(eth_dev);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index a74ef83aac..c386f8b9f0 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -591,7 +591,6 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	eth_dev->tx_pkt_burst = NULL;
 	eth_dev->tx_pkt_prepare = NULL;
 	eth_dev->rx_queue_count = NULL;
-	eth_dev->rx_descriptor_done = NULL;
 	eth_dev->rx_descriptor_status = NULL;
 	eth_dev->tx_descriptor_status = NULL;
 	eth_dev->dev_ops = NULL;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9038aea329..ddf2c23e37 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5129,31 +5129,6 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	return (int)(*p->rx_queue_count)(qd);
 }
 
-/**
- * Check if the DD bit of the specific RX descriptor in the queue has been set
- *
- * @param port_id
- *  The port identifier of the Ethernet device.
- * @param queue_id
- *  The queue id on the specific port.
- * @param offset
- *  The offset of the descriptor ID from tail.
- * @return
- *  - (1) if the specific DD bit is set.
- *  - (0) if the specific DD bit is not set.
- *  - (-ENODEV) if *port_id* invalid.
- *  - (-ENOTSUP) if the device does not support this function
- */
-__rte_deprecated
-static inline int
-rte_eth_rx_descriptor_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_done, -ENOTSUP);
-	return (*dev->rx_descriptor_done)(dev->data->rx_queues[queue_id], offset);
-}
-
 /**@{@name Rx hardware descriptor states
  * @see rte_eth_rx_descriptor_status
  */
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 1335ef8bb2..140350314d 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -44,9 +44,6 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */
 
-typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
-/**< @internal Check DD bit of specific RX descriptor */
-
 typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 /**< @internal Check the status of a Rx descriptor */
 
@@ -129,7 +126,6 @@ struct rte_eth_dev {
 	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
 
 	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_done_t   rx_descriptor_done;   /**< Check rxd DD bit. */
 	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
 	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
 
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v3 7/7] ethdev: hide eth dev related structures
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
                       ` (5 preceding siblings ...)
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 6/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
@ 2021-10-01 14:02     ` Konstantin Ananyev
  2021-10-01 16:53       ` Ferruh Yigit
  2021-10-01 17:02     ` [dpdk-dev] [PATCH v3 0/7] " Ferruh Yigit
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into the private header (ethdev_driver.h).
A few minor changes are needed to keep DPDK building after that.
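
For illustration, a minimal sketch of what a driver needs after this change;
the my_pmd_* names are hypothetical and not part of this patch:

    /* was: #include <rte_ethdev.h> */
    #include <ethdev_driver.h>

    /* PMD receive function; the eth_rx_burst_t prototype is unchanged */
    static uint16_t
    my_pmd_recv_pkts(void *rxq __rte_unused,
                    struct rte_mbuf **rx_pkts __rte_unused,
                    uint16_t nb_pkts __rte_unused)
    {
            return 0;
    }

    static int
    my_pmd_dev_init(struct rte_eth_dev *eth_dev)
    {
            /* struct rte_eth_dev stays fully visible to drivers */
            eth_dev->rx_pkt_burst = my_pmd_recv_pkts;
            return 0;
    }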

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst        |   6 +
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/netvsc/hn_var.h                   |   1 +
 lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 11 files changed, 163 insertions(+), 150 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 532be57ae9..6aabcbcb26 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -220,6 +220,12 @@ ABI Changes
   to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
   is used by  public inline function ``rte_eth_rx_queue_count``.
 
+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+  private data structures. ``rte_eth_devices[]`` can no longer be accessed
+  directly by the user. While it is an ABI breakage, this change is intended
+  to be transparent for both users (no changes in the user application are
+  required) and PMD developers (no changes in the PMD are required).
+
 
 Known Issues
 ------------
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
 
 #include <rte_atomic.h>
 #include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_spinlock.h>
 
 #include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..f0b72e05c2 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
 
 #include <cryptodev_pmd.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_event_crypto_adapter.h>
 
 #include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
 #include <rte_mbuf.h>
 #include <rte_io.h>
 #include <rte_rwlock.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "../cxgbe_compat.h"
 #include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
 #include <unistd.h>
 #include <stdarg.h>
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_eth_ctrl.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2a2bac9338..74e6e6010d 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
  */
 
 #include <rte_eal_paging.h>
+#include <ethdev_driver.h>
 
 /*
  * Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index cc2c75261c..63b04dce32 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -17,6 +17,155 @@
 
 #include <rte_ethdev.h>
 
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+	struct rte_eth_rxtx_callback *next;
+	union{
+		rte_rx_callback_fn rx;
+		rte_tx_callback_fn tx;
+	} fn;
+	void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< Pointer to PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+
+	/**
+	 * Next two fields are per-device data but *data is shared between
+	 * primary and secondary processes and *process_private is per-process
+	 * private. The second one is managed by PMDs if necessary.
+	 */
+	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
+	void *process_private; /**< Pointer to per-process device data. */
+	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	struct rte_device *device; /**< Backing device */
+	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+	/** User application callbacks for NIC interrupts */
+	struct rte_eth_dev_cb_list link_intr_cbs;
+	/**
+	 * User-supplied functions called from rx_burst to post-process
+	 * received packets before passing them to the user
+	 */
+	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	/**
+	 * User-supplied functions called from tx_burst to pre-process
+	 * received packets before passing them to the driver for transmission.
+	 */
+	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	enum rte_eth_dev_state state; /**< Flag indicating the port state */
+	void *security_ctx; /**< Context for security ops */
+
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+	void **rx_queues; /**< Array of pointers to RX queues. */
+	void **tx_queues; /**< Array of pointers to TX queues. */
+	uint16_t nb_rx_queues; /**< Number of RX queues. */
+	uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
+
+	void *dev_private;
+			/**< PMD-specific private data.
+			 *   @see rte_eth_dev_release_port()
+			 */
+
+	struct rte_eth_link dev_link;   /**< Link-level information & status. */
+	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
+	uint16_t mtu;                   /**< Maximum Transmission Unit. */
+	uint32_t min_rx_buf_size;
+			/**< Common RX buffer size handled by all queues. */
+
+	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+	struct rte_ether_addr *mac_addrs;
+			/**< Device Ethernet link address.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+			/**< Bitmap associating MAC addresses to pools. */
+	struct rte_ether_addr *hash_mac_addrs;
+			/**< Device Ethernet MAC addresses of hash filtering.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint16_t port_id;           /**< Device [external] port identifier. */
+
+	__extension__
+	uint8_t promiscuous   : 1,
+		/**< RX promiscuous mode ON(1) / OFF(0). */
+		scattered_rx : 1,
+		/**< RX of scattered packets is ON(1) / OFF(0) */
+		all_multicast : 1,
+		/**< RX all multicast mode ON(1) / OFF(0). */
+		dev_started : 1,
+		/**< Device state: STARTED(1) / STOPPED(0). */
+		lro         : 1,
+		/**< RX LRO is ON(1) / OFF(0) */
+		dev_configured : 1;
+		/**< Indicates whether the device is configured.
+		 *   CONFIGURED(1) / NOT CONFIGURED(0).
+		 */
+	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint32_t dev_flags;             /**< Capabilities. */
+	int numa_node;                  /**< NUMA node connection. */
+	struct rte_vlan_filter_conf vlan_filter_conf;
+			/**< VLAN filter configuration. */
+	struct rte_eth_dev_owner owner; /**< The port owner. */
+	uint16_t representor_id;
+			/**< Switch-specific identifier.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
+
+	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
 /**< @internal Declaration of the hairpin peer queue information structure. */
 struct rte_hairpin_peer_info;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 140350314d..691985b15a 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -95,147 +95,4 @@ struct rte_eth_fp_ops {
 
 extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
 
-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
-	struct rte_eth_rxtx_callback *next;
-	union{
-		rte_rx_callback_fn rx;
-		rte_tx_callback_fn tx;
-	} fn;
-	void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
-	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
-	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
-	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
-	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
-	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
-	/**
-	 * Next two fields are per-device data but *data is shared between
-	 * primary and secondary processes and *process_private is per-process
-	 * private. The second one is managed by PMDs if necessary.
-	 */
-	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
-	void *process_private; /**< Pointer to per-process device data. */
-	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
-	struct rte_device *device; /**< Backing device */
-	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
-	/** User application callbacks for NIC interrupts */
-	struct rte_eth_dev_cb_list link_intr_cbs;
-	/**
-	 * User-supplied functions called from rx_burst to post-process
-	 * received packets before passing them to the user
-	 */
-	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	/**
-	 * User-supplied functions called from tx_burst to pre-process
-	 * received packets before passing them to the driver for transmission.
-	 */
-	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	enum rte_eth_dev_state state; /**< Flag indicating the port state */
-	void *security_ctx; /**< Context for security ops */
-
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
-	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
-	void **rx_queues; /**< Array of pointers to RX queues. */
-	void **tx_queues; /**< Array of pointers to TX queues. */
-	uint16_t nb_rx_queues; /**< Number of RX queues. */
-	uint16_t nb_tx_queues; /**< Number of TX queues. */
-
-	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
-
-	void *dev_private;
-			/**< PMD-specific private data.
-			 *   @see rte_eth_dev_release_port()
-			 */
-
-	struct rte_eth_link dev_link;   /**< Link-level information & status. */
-	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
-	uint16_t mtu;                   /**< Maximum Transmission Unit. */
-	uint32_t min_rx_buf_size;
-			/**< Common RX buffer size handled by all queues. */
-
-	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
-	struct rte_ether_addr *mac_addrs;
-			/**< Device Ethernet link address.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
-			/**< Bitmap associating MAC addresses to pools. */
-	struct rte_ether_addr *hash_mac_addrs;
-			/**< Device Ethernet MAC addresses of hash filtering.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint16_t port_id;           /**< Device [external] port identifier. */
-
-	__extension__
-	uint8_t promiscuous   : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
-		scattered_rx : 1,  /**< RX of scattered packets is ON(1) / OFF(0) */
-		all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
-		dev_started : 1,   /**< Device state: STARTED(1) / STOPPED(0). */
-		lro         : 1,   /**< RX LRO is ON(1) / OFF(0) */
-		dev_configured : 1;
-		/**< Indicates whether the device is configured.
-		 *   CONFIGURED(1) / NOT CONFIGURED(0).
-		 */
-	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint32_t dev_flags;             /**< Capabilities. */
-	int numa_node;                  /**< NUMA node connection. */
-	struct rte_vlan_filter_conf vlan_filter_conf;
-			/**< VLAN filter configuration. */
-	struct rte_eth_dev_owner owner; /**< The port owner. */
-	uint16_t representor_id;
-			/**< Switch-specific identifier.
-			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
-			 */
-
-	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
 #endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
 #include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
  */
 #include <rte_spinlock.h>
 #include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "eventdev_pmd.h"
 #include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
 #include <rte_telemetry.h>
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-01 16:46       ` Ferruh Yigit
  2021-10-01 17:40         ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 16:46 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, mdr, jay.jayatheerthan

On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
> Rework 'fast' burst functions to use rte_eth_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user apps are required) and
> PMD developers (no changes in PMDs are required).
> One extra thing to note - RX/TX callback invocation will cause an extra
> function call with these changes. That might cause some insignificant
> slowdown for code paths where RX/TX callbacks are heavily involved.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

<...>

>  static inline int
>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>  {
> -	struct rte_eth_dev *dev;
> +	struct rte_eth_fp_ops *p;
> +	void *qd;
> +
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return -EINVAL;
> +	}

Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?

<...>

> +++ b/lib/ethdev/version.map
> @@ -247,11 +247,16 @@ EXPERIMENTAL {
>  	rte_mtr_meter_policy_delete;
>  	rte_mtr_meter_policy_update;
>  	rte_mtr_meter_policy_validate;
> +
> +	# added in 21.05

s/21.05/21.11/

> +	__rte_eth_rx_epilog;
> +	__rte_eth_tx_prolog;

These are directly called by the application and must be part of the ABI, but they
are marked as 'internal' and have the '__rte' prefix to highlight it, which may be
confusing. What about making them a proper, non-internal API?

>  };
>  
>  INTERNAL {
>  	global:
>  
> +	rte_eth_fp_ops;

This variable is accessed in inline function, so accessed by application, not
sure if it suits the 'internal' object definition, internal should be only for
objects accessed by other parts of DPDK.
I think this can be added to 'DPDK_22'.

>  	rte_eth_dev_allocate;
>  	rte_eth_dev_allocated;
>  	rte_eth_dev_attach_secondary;
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread
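To make the question above concrete, here is the pattern it refers to in a reduced, self-contained form: the argument validation is compiled in only when the Rx debug option is enabled, as other ethdev Rx fast-path checks are. The constants, the log call and the function name below are simplified stand-ins rather than the DPDK definitions, and the follow-up in the thread explains why these particular checks were in fact kept always on.

/*
 * Illustration only: validation wrapped in the debug option, as the review
 * comment suggests. MAX_ETHPORTS/MAX_QUEUES_PER_PORT and fprintf() stand in
 * for the DPDK constants and RTE_ETHDEV_LOG().
 */
#include <stdint.h>
#include <stdio.h>
#include <errno.h>

#define MAX_ETHPORTS 32
#define MAX_QUEUES_PER_PORT 1024

static inline int
example_rx_queue_count(uint16_t port_id, uint16_t queue_id)
{
#ifdef RTE_ETHDEV_DEBUG_RX
	/* Only debug builds pay for the argument validation. */
	if (port_id >= MAX_ETHPORTS || queue_id >= MAX_QUEUES_PER_PORT) {
		fprintf(stderr, "Invalid port_id=%u or queue_id=%u\n",
				port_id, queue_id);
		return -EINVAL;
	}
#endif
	(void)port_id;
	(void)queue_id;
	/* ... look up the queue data and count used descriptors here ... */
	return 0;
}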

* Re: [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
@ 2021-10-01 16:48       ` Ferruh Yigit
  0 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 16:48 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, mdr, jay.jayatheerthan

On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
> At the queue configure stage, always allocate space for the maximum possible
> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer to
> internal queue data by pointer without extra checking of the current number
> of configured queues.
> That would help in the future to hide rte_eth_dev and related structures.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v3 7/7] ethdev: hide eth dev related structures
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-01 16:53       ` Ferruh Yigit
  2021-10-01 17:04         ` Ferruh Yigit
  0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 16:53 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, mdr, jay.jayatheerthan

On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> data into private header (ethdev_driver.h).
> Few minor changes to keep DPDK building after that.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  doc/guides/rel_notes/release_21_11.rst        |   6 +
>  drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
>  drivers/net/cxgbe/base/adapter.h              |   2 +-
>  drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
>  drivers/net/netvsc/hn_var.h                   |   1 +
>  lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
>  lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
>  lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
>  lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
>  lib/eventdev/rte_eventdev.c                   |   2 +-
>  11 files changed, 163 insertions(+), 150 deletions(-)

'rte_eth_devices' also needs to be removed from 'lib/ethdev/version.map'.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
                       ` (6 preceding siblings ...)
  2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-01 17:02     ` Ferruh Yigit
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
  8 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 17:02 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, mdr, jay.jayatheerthan

On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
> v3 changes:
>  - Changes in public struct naming (Jerin/Haiyue)
>  - Split patches
>  - Update docs
>  - Shamelessly included Andrew's patch:
>    https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
>    into these series.
>    I have to do a similar thing here, so I decided to avoid duplicated effort.
> 
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> DPDK and not visible to the user.
> That should allow future possible changes to core ethdev related structures
> to be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
> 
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
>    related data pointer from rte_eth_dev into a separate flat array.
>    We keep it public to still be able to use inline functions for these
>    'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>    Note that apart from the function pointers themselves, each element of this
>    flat array also contains two opaque pointers for each ethdev:
>    1) a pointer to an array of internal queue data pointers
>    2) a pointer to an array of queue callback data pointers.
>    Note that exposing this extra information allows us to avoid extra
>    changes at the PMD level, plus should help to avoid possible
>    performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
>    (rte_eth_rx_burst(), etc.) to use new public flat array.
>    While it is an ABI breakage, this change is intended to be transparent
>    for both users (no changes in user apps are required) and PMD developers
>    (no changes in PMDs are required).
>    One extra note - with the new implementation, RX/TX callback invocation
>    will cost one extra function call. That might cause some slowdown for
>    code paths with RX/TX callbacks heavily involved.
>    Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>    things into internal header: <ethdev_driver.h>.
> 
> That approach was selected to:
>   - Avoid(/minimize) possible performance losses.
>   - Minimize required changes inside PMDs.
>  

Overall +1 to the approach.

Only the 'metrics' library is failing to build; it also needs to include the
driver header:

diff --git a/lib/metrics/rte_metrics_telemetry.c
b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef6135c..5be21b2e86c5 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2020 Intel Corporation
  */

-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_string_fns.h>
 #ifdef RTE_LIB_TELEMETRY
 #include <telemetry_internal.h>

> Performance testing results (ICX 2.0GHz, E810 (ice)):
>  - testpmd macswap fwd mode, plus
>    a) no RX/TX callbacks:
>       no actual slowdown observed
>    b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
>       ~2% slowdown
>  - l3fwd: no actual slowdown observed
> 
> Would like to thank Ferruh and Jerrin for reviewing and testing previous
> versions of these series. All other interested parties please don't be shy
> and provide your feedback.
> 
> Konstantin Ananyev (7):
>   ethdev: allocate max space for internal queue array
>   ethdev: change input parameters for rx_queue_count
>   ethdev: copy ethdev 'fast' API into separate structure
>   ethdev: make burst functions to use new flat array
>   ethdev: add API to retrieve multiple ethernet addresses
>   ethdev: remove legacy Rx descriptor done API
>   ethdev: hide eth dev related structures
> 

<...>


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v3 7/7] ethdev: hide eth dev related structures
  2021-10-01 16:53       ` Ferruh Yigit
@ 2021-10-01 17:04         ` Ferruh Yigit
  0 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 17:04 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, mdr, jay.jayatheerthan

On 10/1/2021 5:53 PM, Ferruh Yigit wrote:
> On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
>> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>> data into private header (ethdev_driver.h).
>> Few minor changes to keep DPDK building after that.
>>
>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>  doc/guides/rel_notes/release_21_11.rst        |   6 +
>>  drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
>>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
>>  drivers/net/cxgbe/base/adapter.h              |   2 +-
>>  drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
>>  drivers/net/netvsc/hn_var.h                   |   1 +
>>  lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
>>  lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
>>  lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
>>  lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
>>  lib/eventdev/rte_eventdev.c                   |   2 +-
>>  11 files changed, 163 insertions(+), 150 deletions(-)
> 
> 'rte_eth_devices' also needs to be removed from 'lib/ethdev/version.map'.
> 

Nope, since other libraries are using it, it can't be removed, but it can be
moved to 'INTERNAL' section.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-01 16:46       ` Ferruh Yigit
@ 2021-10-01 17:40         ` Ananyev, Konstantin
  2021-10-04  8:46           ` Ferruh Yigit
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-01 17:40 UTC (permalink / raw)
  To: Yigit, Ferruh, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay



> On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
> > Rework 'fast' burst functions to use rte_eth_fp_ops[].
> > While it is an API/ABI breakage, this change is intended to be
> > transparent for both users (no changes in user apps are required) and
> > PMD developers (no changes in PMDs are required).
> > One extra thing to note - RX/TX callback invocation will cause an extra
> > function call with these changes. That might cause some insignificant
> > slowdown for code paths where RX/TX callbacks are heavily involved.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
> <...>
> 
> >  static inline int
> >  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> >  {
> > -	struct rte_eth_dev *dev;
> > +	struct rte_eth_fp_ops *p;
> > +	void *qd;
> > +
> > +	if (port_id >= RTE_MAX_ETHPORTS ||
> > +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			"Invalid port_id=%u or queue_id=%u\n",
> > +			port_id, queue_id);
> > +		return -EINVAL;
> > +	}
> 
> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?

Original rte_eth_rx_queue_count() always has similar checks enabled,
that's why I also kept them 'always on'. 

> 
> <...>
> 
> > +++ b/lib/ethdev/version.map
> > @@ -247,11 +247,16 @@ EXPERIMENTAL {
> >  	rte_mtr_meter_policy_delete;
> >  	rte_mtr_meter_policy_update;
> >  	rte_mtr_meter_policy_validate;
> > +
> > +	# added in 21.05
> 
> s/21.05/21.11/
> 
> > +	__rte_eth_rx_epilog;
> > +	__rte_eth_tx_prolog;
> 
> These are directly called by the application and must be part of the ABI, but they
> are marked as 'internal' and have the '__rte' prefix to highlight it, which may be
> confusing. What about making them a proper, non-internal API?

Hmm, not sure what you suggest here.
We don't want users to call them explicitly.
They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
So I did what I thought was our usual policy for such semi-internal things:
have '@internal' in comments, but in version.map put them under the EXPERIMENTAL/global
section.

What do you think it should be instead?
 
> >  };
> >
> >  INTERNAL {
> >  	global:
> >
> > +	rte_eth_fp_ops;
> 
> This variable is accessed in inline function, so accessed by application, not
> sure if it suits the 'internal' object definition, internal should be only for
> objects accessed by other parts of DPDK.
> I think this can be added to 'DPDK_22'.
> 
> >  	rte_eth_dev_allocate;
> >  	rte_eth_dev_allocated;
> >  	rte_eth_dev_attach_secondary;
> >


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-01 17:40         ` Ananyev, Konstantin
@ 2021-10-04  8:46           ` Ferruh Yigit
  2021-10-04  9:20             ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-04  8:46 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay

On 10/1/2021 6:40 PM, Ananyev, Konstantin wrote:
> 
> 
>> On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
>>> Rework 'fast' burst functions to use rte_eth_fp_ops[].
>>> While it is an API/ABI breakage, this change is intended to be
>>> transparent for both users (no changes in user apps are required) and
>>> PMD developers (no changes in PMDs are required).
>>> One extra thing to note - RX/TX callback invocation will cause an extra
>>> function call with these changes. That might cause some insignificant
>>> slowdown for code paths where RX/TX callbacks are heavily involved.
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>
>> <...>
>>
>>>  static inline int
>>>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>>>  {
>>> -	struct rte_eth_dev *dev;
>>> +	struct rte_eth_fp_ops *p;
>>> +	void *qd;
>>> +
>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> +		RTE_ETHDEV_LOG(ERR,
>>> +			"Invalid port_id=%u or queue_id=%u\n",
>>> +			port_id, queue_id);
>>> +		return -EINVAL;
>>> +	}
>>
>> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?
> 
> Original rte_eth_rx_queue_count() always has similar checks enabled,
> that's why I also kept them 'always on'. 
> 
>>
>> <...>
>>
>>> +++ b/lib/ethdev/version.map
>>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
>>>  	rte_mtr_meter_policy_delete;
>>>  	rte_mtr_meter_policy_update;
>>>  	rte_mtr_meter_policy_validate;
>>> +
>>> +	# added in 21.05
>>
>> s/21.05/21.11/
>>
>>> +	__rte_eth_rx_epilog;
>>> +	__rte_eth_tx_prolog;
>>
>> These are directly called by the application and must be part of the ABI, but they
>> are marked as 'internal' and have the '__rte' prefix to highlight it, which may be
>> confusing. What about making them a proper, non-internal API?
> 
> Hmm, not sure what you suggest here.
> We don't want users to call them explicitly.
> They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
> So I did what I thought was our usual policy for such semi-internal things:
> have '@internal' in comments, but in version.map put them under the EXPERIMENTAL/global
> section.
> 
> What do you think it should be instead?
>  

Make them public API. (Basically just remove '__' prefix and @internal comment).

This way application can use them to run custom callback(s) (not only the
registered ones), not sure if this can be dangerous though.

We need to trace the ABI for these functions, making them public clarifies it.

Also comment can be updated to describe intended usage instead of marking them
internal, and applications can use these anyway if we mark them internal or not.


>>>  };
>>>
>>>  INTERNAL {
>>>  	global:
>>>
>>> +	rte_eth_fp_ops;
>>
>> This variable is accessed in inline function, so accessed by application, not
>> sure if it suits the 'internal' object definition, internal should be only for
>> objects accessed by other parts of DPDK.
>> I think this can be added to 'DPDK_22'.
>>
>>>  	rte_eth_dev_allocate;
>>>  	rte_eth_dev_allocated;
>>>  	rte_eth_dev_attach_secondary;
>>>
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-04  8:46           ` Ferruh Yigit
@ 2021-10-04  9:20             ` Ananyev, Konstantin
  2021-10-04 10:13               ` Ferruh Yigit
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-04  9:20 UTC (permalink / raw)
  To: Yigit, Ferruh, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay


> >>
> >>>  static inline int
> >>>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> >>>  {
> >>> -	struct rte_eth_dev *dev;
> >>> +	struct rte_eth_fp_ops *p;
> >>> +	void *qd;
> >>> +
> >>> +	if (port_id >= RTE_MAX_ETHPORTS ||
> >>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> >>> +		RTE_ETHDEV_LOG(ERR,
> >>> +			"Invalid port_id=%u or queue_id=%u\n",
> >>> +			port_id, queue_id);
> >>> +		return -EINVAL;
> >>> +	}
> >>
> >> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?
> >
> > Original rte_eth_rx_queue_count() always has similar checks enabled,
> > that's why I also kept them 'always on'.
> >
> >>
> >> <...>
> >>
> >>> +++ b/lib/ethdev/version.map
> >>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
> >>>  	rte_mtr_meter_policy_delete;
> >>>  	rte_mtr_meter_policy_update;
> >>>  	rte_mtr_meter_policy_validate;
> >>> +
> >>> +	# added in 21.05
> >>
> >> s/21.05/21.11/
> >>
> >>> +	__rte_eth_rx_epilog;
> >>> +	__rte_eth_tx_prolog;
> >>
> >> These are directly called by the application and must be part of the ABI, but they
> >> are marked as 'internal' and have the '__rte' prefix to highlight it, which may be
> >> confusing. What about making them a proper, non-internal API?
> >
> > Hmm, not sure what you suggest here.
> > We don't want users to call them explicitly.
> > They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
> > So I did what I thought was our usual policy for such semi-internal things:
> > have '@internal' in comments, but in version.map put them under the EXPERIMENTAL/global
> > section.
> >
> > What do you think it should be instead?
> >
> 
> Make them public API. (Basically just remove '__' prefix and @internal comment).
> 
> This way application can use them to run custom callback(s) (not only the
> registered ones), not sure if this can be dangerous though.

Hmm, as I said above, I don't want users to call them explicitly.
Do you have any good reason to allow it?

> 
> We need to trace the ABI for these functions, making them public clarifies it.

We do have plenty of semi-internal functions right now,
why would adding that one be a problem?
On the other hand, if we declare it public, we will have obligations to support it
in future releases, plus it might encourage users to use it on its own.
To me that sounds like extra headache without any gain in return.

> Also comment can be updated to describe intended usage instead of marking them
> internal, and applications can use these anyway if we mark them internal or not.


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-04  9:20             ` Ananyev, Konstantin
@ 2021-10-04 10:13               ` Ferruh Yigit
  2021-10-04 11:17                 ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-04 10:13 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay

On 10/4/2021 10:20 AM, Ananyev, Konstantin wrote:
> 
>>>>
>>>>>  static inline int
>>>>>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>>>>>  {
>>>>> -	struct rte_eth_dev *dev;
>>>>> +	struct rte_eth_fp_ops *p;
>>>>> +	void *qd;
>>>>> +
>>>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
>>>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>>>> +		RTE_ETHDEV_LOG(ERR,
>>>>> +			"Invalid port_id=%u or queue_id=%u\n",
>>>>> +			port_id, queue_id);
>>>>> +		return -EINVAL;
>>>>> +	}
>>>>
>>>> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?
>>>
>>> Original rte_eth_rx_queue_count() always has similar checks enabled,
>>> that's why I also kept them 'always on'.
>>>
>>>>
>>>> <...>
>>>>
>>>>> +++ b/lib/ethdev/version.map
>>>>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
>>>>>  	rte_mtr_meter_policy_delete;
>>>>>  	rte_mtr_meter_policy_update;
>>>>>  	rte_mtr_meter_policy_validate;
>>>>> +
>>>>> +	# added in 21.05
>>>>
>>>> s/21.05/21.11/
>>>>
>>>>> +	__rte_eth_rx_epilog;
>>>>> +	__rte_eth_tx_prolog;
>>>>
>>>> These are directly called by the application and must be part of the ABI, but they
>>>> are marked as 'internal' and have the '__rte' prefix to highlight it, which may be
>>>> confusing. What about making them a proper, non-internal API?
>>>
>>> Hmm, not sure what you suggest here.
>>> We don't want users to call them explicitly.
>>> They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
>>> So I did what I thought was our usual policy for such semi-internal things:
>>> have '@internal' in comments, but in version.map put them under the EXPERIMENTAL/global
>>> section.
>>>
>>> What do you think it should be instead?
>>>
>>
>> Make them public API. (Basically just remove '__' prefix and @internal comment).
>>
>> This way application can use them to run custom callback(s) (not only the
>> registered ones), not sure if this can be dangerous though.
> 
> Hmm, as I said above, I don't want users to call them explicitly.
> Do you have any good reason to allow it?
> 

Just to get rid of this state where internal APIs are exposed to the application.

>>
>> We need to trace the ABI for these functions, making them public clarifies it.
> 
> We do have plenty of semi-internal functions right now,
> why would adding that one be a problem?

As far as I remember existing ones are 'static inline' functions, and we don't
have an ABI concern with them. But these are actual functions called by application.

> On the other hand, if we declare it public, we will have obligations to support it
> in future releases, plus it might encourage users to use it on its own.
> To me that sounds like extra headache without any gain in return.
> 

If having those two as public API doesn't make sense, I agree with you.

>> Also comment can be updated to describe intended usage instead of marking them
>> internal, and applications can use these anyway if we mark them internal or not.
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-04 10:13               ` Ferruh Yigit
@ 2021-10-04 11:17                 ` Ananyev, Konstantin
  0 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-04 11:17 UTC (permalink / raw)
  To: Yigit, Ferruh, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay



> 
> On 10/4/2021 10:20 AM, Ananyev, Konstantin wrote:
> >
> >>>>
> >>>>>  static inline int
> >>>>>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> >>>>>  {
> >>>>> -	struct rte_eth_dev *dev;
> >>>>> +	struct rte_eth_fp_ops *p;
> >>>>> +	void *qd;
> >>>>> +
> >>>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
> >>>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> >>>>> +		RTE_ETHDEV_LOG(ERR,
> >>>>> +			"Invalid port_id=%u or queue_id=%u\n",
> >>>>> +			port_id, queue_id);
> >>>>> +		return -EINVAL;
> >>>>> +	}
> >>>>
> >>>> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?
> >>>
> >>> Original rte_eth_rx_queue_count() always has similar checks enabled,
> >>> that's why I also kept them 'always on'.
> >>>
> >>>>
> >>>> <...>
> >>>>
> >>>>> +++ b/lib/ethdev/version.map
> >>>>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
> >>>>>  	rte_mtr_meter_policy_delete;
> >>>>>  	rte_mtr_meter_policy_update;
> >>>>>  	rte_mtr_meter_policy_validate;
> >>>>> +
> >>>>> +	# added in 21.05
> >>>>
> >>>> s/21.05/21.11/
> >>>>
> >>>>> +	__rte_eth_rx_epilog;
> >>>>> +	__rte_eth_tx_prolog;
> >>>>
> >>>> These are directly called by the application and must be part of the ABI, but they
> >>>> are marked as 'internal' and have the '__rte' prefix to highlight it, which may be
> >>>> confusing. What about making them a proper, non-internal API?
> >>>
> >>> Hmm, not sure what you suggest here.
> >>> We don't want users to call them explicitly.
> >>> They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
> >>> So I did what I thought was our usual policy for such semi-internal things:
> >>> have '@internal' in comments, but in version.map put them under the EXPERIMENTAL/global
> >>> section.
> >>>
> >>> What do you think it should be instead?
> >>>
> >>
> >> Make them public API. (Basically just remove '__' prefix and @internal comment).
> >>
> >> This way application can use them to run custom callback(s) (not only the
> >> registered ones), not sure if this can be dangerous though.
> >
> > Hmm, as I said above, I don't want users to call them explicitly.
> > Do you have any good reason to allow it?
> >
> 
> Just to get rid of this state where internal APIs are exposed to the application.
> 
> >>
> >> We need to trace the ABI for these functions, making them public clarifies it.
> >
> > We do have plenty of semi-internal functions right now,
> > why would adding that one be a problem?
> 
> As far as I remember existing ones are 'static inline' functions, and we don't
> have an ABI concern with them. But these are actual functions called by application.

Not always.
As an example of internal but not static ones:
rte_mempool_check_cookies
rte_mempool_contig_blocks_check_cookies
rte_mempool_op_calc_mem_size_helper
_rte_pktmbuf_read

> 
> > On the other hand, if we declare it public, we will have obligations to support it
> > in future releases, plus it might encourage users to use it on its own.
> > To me that sounds like extra headache without any gain in return.
> >
> 
> If having those two as public API doesn't make sense, I agree with you.
> 
> >> Also comment can be updated to describe intended usage instead of marking them
> >> internal, and applications can use these anyway if we mark them internal or not.
> >


^ permalink raw reply	[flat|nested] 112+ messages in thread
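A minimal, self-contained sketch of the helper scheme debated above: the inline burst wrapper calls an out-of-line epilog only when a callback is registered on the queue, which is where the one extra function call per burst on the callback path comes from. The names and signatures below are illustrative and are not the __rte_eth_rx_epilog()/__rte_eth_tx_prolog() definitions from the series.

/*
 * Sketch only: how an inline Rx burst wrapper can defer callback handling to
 * a non-inline helper, so the no-callback fast path stays small.
 */
#include <stdint.h>
#include <stddef.h>

struct pkt;				/* stand-in for rte_mbuf */

typedef uint16_t (*rx_cb_fn)(uint16_t port, uint16_t queue,
		struct pkt **pkts, uint16_t nb, void *user);

struct rx_callback {
	rx_cb_fn fn;
	void *user;
	struct rx_callback *next;
};

/* Out-of-line helper: runs the registered callback chain after the PMD burst. */
static uint16_t
example_rx_epilog(uint16_t port, uint16_t queue, struct pkt **pkts,
		uint16_t nb, struct rx_callback *cb)
{
	while (cb != NULL) {
		nb = cb->fn(port, queue, pkts, nb, cb->user);
		cb = cb->next;
	}
	return nb;
}

/* Inline wrapper: only the callback path pays the extra function call. */
static inline uint16_t
example_rx_burst(uint16_t port, uint16_t queue,
		uint16_t (*pmd_burst)(void *rxq, struct pkt **p, uint16_t n),
		void *rxq, struct rx_callback *cb,
		struct pkt **pkts, uint16_t n)
{
	uint16_t nb = pmd_burst(rxq, pkts, n);

	if (cb != NULL)
		nb = example_rx_epilog(port, queue, pkts, nb, cb);
	return nb;
}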

* [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
  2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
                       ` (7 preceding siblings ...)
  2021-10-01 17:02     ` [dpdk-dev] [PATCH v3 0/7] " Ferruh Yigit
@ 2021-10-04 13:55     ` Konstantin Ananyev
  2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
                         ` (8 more replies)
  8 siblings, 9 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-04 13:55 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

v4 changes:
 - Fix secondary process attach (Pavan)
 - Fix build failure (Ferruh)
 - Update lib/ethdev/version.map (Ferruh)
   Note that moving newly added symbols from the EXPERIMENTAL to the DPDK_22
   section makes checkpatch.sh complain.

v3 changes:
 - Changes in public struct naming (Jerin/Haiyue)
 - Split patches
 - Update docs
 - Shamelessly included Andrew's patch:
   https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
   into these series.
   I have to do a similar thing here, so I decided to avoid duplicated effort.

The aim of these patch series is to make rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow future possible changes to core ethdev related structures
to be transparent to the user and help to improve ABI/API stability.
Note that current ethdev API is preserved, but it is a formal ABI break.

The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
   related data pointer from rte_eth_dev into a separate flat array.
   We keep it public to still be able to use inline functions for these
   'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
   Note that apart from the function pointers themselves, each element of this
   flat array also contains two opaque pointers for each ethdev:
   1) a pointer to an array of internal queue data pointers
   2) a pointer to an array of queue callback data pointers.
   Note that exposing this extra information allows us to avoid extra
   changes at the PMD level, plus should help to avoid possible
   performance degradation.
2. Change implementation of 'fast' inline ethdev functions
   (rte_eth_rx_burst(), etc.) to use new public flat array.
   While it is an ABI breakage, this change is intended to be transparent
   for both users (no changes in user apps are required) and PMD developers
   (no changes in PMDs are required).
   One extra note - with the new implementation, RX/TX callback invocation
   will cost one extra function call. That might cause some slowdown for
   code paths with RX/TX callbacks heavily involved.
   Hope such trade-off is acceptable for the community.
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
   things into internal header: <ethdev_driver.h>.

That approach was selected to:
  - Avoid(/minimize) possible performance losses.
  - Minimize required changes inside PMDs.
 
Performance testing results (ICX 2.0GHz, E810 (ice)):
 - testpmd macswap fwd mode, plus
   a) no RX/TX callbacks:
      no actual slowdown observed
   b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
      ~2% slowdown
 - l3fwd: no actual slowdown observed

Would like to thank everyone who already reviewed and tested previous
versions of these series. All other interested parties please don't be shy
and provide your feedback.

Konstantin Ananyev (7):
  ethdev: allocate max space for internal queue array
  ethdev: change input parameters for rx_queue_count
  ethdev: copy ethdev 'fast' API into separate structure
  ethdev: make burst functions to use new flat array
  ethdev: add API to retrieve multiple ethernet addresses
  ethdev: remove legacy Rx descriptor done API
  ethdev: hide eth dev related structures

 app/test-pmd/config.c                         |  23 +-
 doc/guides/nics/features.rst                  |   6 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |  21 ++
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/ark/ark_ethdev_rx.c               |   4 +-
 drivers/net/ark/ark_ethdev_rx.h               |   3 +-
 drivers/net/atlantic/atl_ethdev.h             |   2 +-
 drivers/net/atlantic/atl_rxtx.c               |   9 +-
 drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                |   9 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   9 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/e1000/e1000_ethdev.h              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |   1 -
 drivers/net/e1000/em_rxtx.c                   |  21 +-
 drivers/net/e1000/igb_ethdev.c                |   2 -
 drivers/net/e1000/igb_rxtx.c                  |  21 +-
 drivers/net/enic/enic_ethdev.c                |  12 +-
 drivers/net/fm10k/fm10k.h                     |   5 +-
 drivers/net/fm10k/fm10k_ethdev.c              |   1 -
 drivers/net/fm10k/fm10k_rxtx.c                |  29 +-
 drivers/net/hns3/hns3_rxtx.c                  |   7 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |   1 -
 drivers/net/i40e/i40e_ethdev_vf.c             |   1 -
 drivers/net/i40e/i40e_rxtx.c                  |  30 +-
 drivers/net/i40e/i40e_rxtx.h                  |   4 +-
 drivers/net/iavf/iavf_rxtx.c                  |   4 +-
 drivers/net/iavf/iavf_rxtx.h                  |   2 +-
 drivers/net/ice/ice_rxtx.c                    |   4 +-
 drivers/net/ice/ice_rxtx.h                    |   2 +-
 drivers/net/igc/igc_ethdev.c                  |   1 -
 drivers/net/igc/igc_txrx.c                    |  23 +-
 drivers/net/igc/igc_txrx.h                    |   5 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   2 -
 drivers/net/ixgbe/ixgbe_ethdev.h              |   5 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |  22 +-
 drivers/net/mlx5/mlx5_rx.c                    |  26 +-
 drivers/net/mlx5/mlx5_rx.h                    |   2 +-
 drivers/net/netvsc/hn_rxtx.c                  |   4 +-
 drivers/net/netvsc/hn_var.h                   |   3 +-
 drivers/net/nfp/nfp_rxtx.c                    |   4 +-
 drivers/net/nfp/nfp_rxtx.h                    |   3 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   1 -
 drivers/net/octeontx2/otx2_ethdev.h           |   3 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |  20 +-
 drivers/net/sfc/sfc_ethdev.c                  |  29 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   3 +-
 drivers/net/thunderx/nicvf_rxtx.c             |   4 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.h              |   3 +-
 drivers/net/txgbe/txgbe_rxtx.c                |   4 +-
 drivers/net/vhost/rte_eth_vhost.c             |   4 +-
 drivers/net/virtio/virtio_ethdev.c            |   1 -
 lib/ethdev/ethdev_driver.h                    | 149 +++++++++
 lib/ethdev/ethdev_private.c                   |  83 +++++
 lib/ethdev/ethdev_private.h                   |   7 +
 lib/ethdev/rte_ethdev.c                       |  89 ++++--
 lib/ethdev/rte_ethdev.h                       | 286 ++++++++++++------
 lib/ethdev/rte_ethdev_core.h                  | 171 +++--------
 lib/ethdev/version.map                        |  10 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 lib/metrics/rte_metrics_telemetry.c           |   2 +-
 68 files changed, 673 insertions(+), 570 deletions(-)

-- 
2.26.3


^ permalink raw reply	[flat|nested] 112+ messages in thread
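To make point 1 of the cover letter above easier to picture, here is a minimal, self-contained model of the flat-array idea. Every identifier in it (port_fp_ops, rxq_data, toy_rx_burst and so on) is invented for illustration and is not one of the names the series actually adds; the real definitions are in the ethdev patches of this series.

/*
 * Illustrative model only of the per-port flat array: each element keeps the
 * 'fast' function pointers plus two opaque pointer arrays, one with per-queue
 * data and one with per-queue callback data.
 */
#include <stdint.h>
#include <stddef.h>

#define MAX_ETHPORTS 32
#define MAX_QUEUES_PER_PORT 8

struct pkt;	/* stand-in for rte_mbuf */

typedef uint16_t (*rx_burst_fn)(void *rxq, struct pkt **pkts, uint16_t n);
typedef uint16_t (*tx_burst_fn)(void *txq, struct pkt **pkts, uint16_t n);

struct port_fp_ops {
	rx_burst_fn rx_pkt_burst;	/* filled in by the PMD at start time */
	tx_burst_fn tx_pkt_burst;
	void **rxq_data;	/* array of internal Rx queue pointers */
	void **txq_data;	/* array of internal Tx queue pointers */
	void **rxq_clbk;	/* array of Rx callback data pointers */
	void **txq_clbk;	/* array of Tx callback data pointers */
};

/* Public flat array, indexed by port id, as in point 1 of the cover letter. */
struct port_fp_ops port_fp_ops[MAX_ETHPORTS];

/* Inline wrapper: index the flat array instead of dereferencing rte_eth_dev. */
static inline uint16_t
example_rx_burst(uint16_t port_id, uint16_t queue_id,
		struct pkt **pkts, uint16_t n)
{
	struct port_fp_ops *p = &port_fp_ops[port_id];

	/* queue arrays are sized for the maximum, so no queue-count check */
	return p->rx_pkt_burst(p->rxq_data[queue_id], pkts, n);
}

/* A toy PMD Rx function and its registration, to show the intended wiring. */
static uint16_t
toy_rx_burst(void *rxq, struct pkt **pkts, uint16_t n)
{
	(void)rxq; (void)pkts; (void)n;
	return 0;			/* no packets in this toy queue */
}

static void *toy_rxq[MAX_QUEUES_PER_PORT];

int
main(void)
{
	struct pkt *burst[4];

	port_fp_ops[0].rx_pkt_burst = toy_rx_burst;
	port_fp_ops[0].rxq_data = toy_rxq;

	return example_rx_burst(0, 0, burst, 4);
}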

* [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
@ 2021-10-04 13:55       ` Konstantin Ananyev
  2021-10-05 12:09         ` Thomas Monjalon
  2021-10-05 12:21         ` Thomas Monjalon
  2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
                         ` (7 subsequent siblings)
  8 siblings, 2 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-04 13:55 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

At the queue configure stage, always allocate space for the maximum possible
number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer to
internal queue data by pointer without extra checking of the current number
of configured queues.
That would help in the future to hide rte_eth_dev and related structures.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
 1 file changed, 9 insertions(+), 27 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca9242..424bc260fa 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -898,7 +898,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
-				sizeof(dev->data->rx_queues[0]) * nb_queues,
+				sizeof(dev->data->rx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
 				RTE_CACHE_LINE_SIZE);
 		if (dev->data->rx_queues == NULL) {
 			dev->data->nb_rx_queues = 0;
@@ -909,21 +910,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		rxq = dev->data->rx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->rx_queue_release)(rxq[i]);
-		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE);
-		if (rxq == NULL)
-			return -(ENOMEM);
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(rxq + old_nb_queues, 0,
-				sizeof(rxq[0]) * new_qs);
+			rxq[i] = NULL;
 		}
 
-		dev->data->rx_queues = rxq;
-
 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
@@ -1138,8 +1129,9 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
-						   sizeof(dev->data->tx_queues[0]) * nb_queues,
-						   RTE_CACHE_LINE_SIZE);
+				sizeof(dev->data->tx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
+				RTE_CACHE_LINE_SIZE);
 		if (dev->data->tx_queues == NULL) {
 			dev->data->nb_tx_queues = 0;
 			return -(ENOMEM);
@@ -1149,21 +1141,11 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		txq = dev->data->tx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->tx_queue_release)(txq[i]);
-		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
-				  RTE_CACHE_LINE_SIZE);
-		if (txq == NULL)
-			return -ENOMEM;
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(txq + old_nb_queues, 0,
-			       sizeof(txq[0]) * new_qs);
+			txq[i] = NULL;
 		}
 
-		dev->data->tx_queues = txq;
-
 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
 
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread
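For readers skimming the diff above, the same idea in a reduced, self-contained form: the queue-pointer array is allocated once with room for the maximum number of queues, so later reconfiguration only releases and clears individual slots and never reallocates the array, which is what lets the fast path index it without re-checking the configured queue count. All names below are simplified stand-ins, not the actual ethdev code.

/*
 * Sketch only: fixed-size queue-pointer array, allocated at first configure,
 * with per-slot release on shrink (mirroring the shape of the patch above).
 */
#include <stdlib.h>
#include <stdint.h>

#define MAX_QUEUES_PER_PORT 1024

struct port_data {
	void **rx_queues;	/* fixed-size array of per-queue pointers */
	uint16_t nb_rx_queues;	/* currently configured queue count */
};

static void
release_queue(void *q)
{
	free(q);
}

int
rx_queue_config(struct port_data *d, uint16_t nb_queues)
{
	uint16_t i;

	if (d->rx_queues == NULL && nb_queues != 0) {
		/* First configuration: allocate for the maximum up front. */
		d->rx_queues = calloc(MAX_QUEUES_PER_PORT,
				sizeof(d->rx_queues[0]));
		if (d->rx_queues == NULL)
			return -1;
	} else if (d->rx_queues != NULL) {
		/* Shrinking: release the dropped queues, keep the array. */
		for (i = nb_queues; i < d->nb_rx_queues; i++) {
			release_queue(d->rx_queues[i]);
			d->rx_queues[i] = NULL;
		}
	}
	d->nb_rx_queues = nb_queues;
	return 0;
}

int
main(void)
{
	struct port_data d = { 0 };

	if (rx_queue_config(&d, 4) != 0)
		return 1;
	d.rx_queues[2] = malloc(16);	/* pretend the PMD set up queue 2 */
	return rx_queue_config(&d, 2);	/* shrink: slots 2..3 get released */
}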

* [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
  2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
@ 2021-10-04 13:55       ` Konstantin Ananyev
  2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
                         ` (6 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-04 13:55 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Currently the majority of 'fast' ethdev ops take pointers to internal
queue data structures as an input parameter, while eth_rx_queue_count()
takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends, it would be
plausible to unify the parameter list of all 'fast' ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the public ethdev inline
function rte_eth_rx_queue_count().

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst  |  6 ++++++
 drivers/net/ark/ark_ethdev_rx.c         |  4 ++--
 drivers/net/ark/ark_ethdev_rx.h         |  3 +--
 drivers/net/atlantic/atl_ethdev.h       |  2 +-
 drivers/net/atlantic/atl_rxtx.c         |  9 ++-------
 drivers/net/bnxt/bnxt_ethdev.c          |  8 +++++---
 drivers/net/dpaa/dpaa_ethdev.c          |  9 ++++-----
 drivers/net/dpaa2/dpaa2_ethdev.c        |  9 ++++-----
 drivers/net/e1000/e1000_ethdev.h        |  6 ++----
 drivers/net/e1000/em_rxtx.c             |  4 ++--
 drivers/net/e1000/igb_rxtx.c            |  4 ++--
 drivers/net/enic/enic_ethdev.c          | 12 ++++++------
 drivers/net/fm10k/fm10k.h               |  2 +-
 drivers/net/fm10k/fm10k_rxtx.c          |  4 ++--
 drivers/net/hns3/hns3_rxtx.c            |  7 +++++--
 drivers/net/hns3/hns3_rxtx.h            |  2 +-
 drivers/net/i40e/i40e_rxtx.c            |  4 ++--
 drivers/net/i40e/i40e_rxtx.h            |  3 +--
 drivers/net/iavf/iavf_rxtx.c            |  4 ++--
 drivers/net/iavf/iavf_rxtx.h            |  2 +-
 drivers/net/ice/ice_rxtx.c              |  4 ++--
 drivers/net/ice/ice_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c              |  5 ++---
 drivers/net/igc/igc_txrx.h              |  3 +--
 drivers/net/ixgbe/ixgbe_ethdev.h        |  3 +--
 drivers/net/ixgbe/ixgbe_rxtx.c          |  4 ++--
 drivers/net/mlx5/mlx5_rx.c              | 26 ++++++++++++-------------
 drivers/net/mlx5/mlx5_rx.h              |  2 +-
 drivers/net/netvsc/hn_rxtx.c            |  4 ++--
 drivers/net/netvsc/hn_var.h             |  2 +-
 drivers/net/nfp/nfp_rxtx.c              |  4 ++--
 drivers/net/nfp/nfp_rxtx.h              |  3 +--
 drivers/net/octeontx2/otx2_ethdev.h     |  2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c |  8 ++++----
 drivers/net/sfc/sfc_ethdev.c            | 12 ++++++------
 drivers/net/thunderx/nicvf_ethdev.c     |  3 +--
 drivers/net/thunderx/nicvf_rxtx.c       |  4 ++--
 drivers/net/thunderx/nicvf_rxtx.h       |  2 +-
 drivers/net/txgbe/txgbe_ethdev.h        |  3 +--
 drivers/net/txgbe/txgbe_rxtx.c          |  4 ++--
 drivers/net/vhost/rte_eth_vhost.c       |  4 ++--
 lib/ethdev/rte_ethdev.h                 |  2 +-
 lib/ethdev/rte_ethdev_core.h            |  3 +--
 43 files changed, 103 insertions(+), 110 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 37dc1a7786..fd80538b6c 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -213,6 +213,12 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* ethdev: Input parameters for ``eth_rx_queue_count_t`` was changed.
+  Instead of pointer to ``rte_eth_dev`` and queue index, now it accepts pointer
+  to internal queue data as input parameter. While this change is transparent
+  to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
+  is used by  public inline function ``rte_eth_rx_queue_count``.
+
 
 Known Issues
 ------------
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
 }
 
 uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
 {
 	struct ark_rx_queue *queue;
 
-	queue = dev->data->rx_queues[queue_id];
+	queue = rx_queue;
 	return (queue->prod_index - queue->cons_index);	/* mod arith */
 }
 
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			       unsigned int socket_id,
 			       const struct rte_eth_rxconf *rx_conf,
 			       struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
-				    uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
 int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c..e808460520 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
 
 int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..35bb13044e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 /* Return Rx queue avail count */
 
 uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
 {
 	struct atl_rx_queue *rxq;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
-		return 0;
-	}
-
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 
 	if (rxq == NULL)
 		return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 097dd10de9..e07242e961 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3130,20 +3130,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
 }
 
 static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
 {
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt *bp;
 	struct bnxt_cp_ring_info *cpr;
 	uint32_t desc = 0, raw_cons, cp_ring_size;
 	struct bnxt_rx_queue *rxq;
 	struct rx_pkt_cmpl *rxcmp;
 	int rc;
 
+	rxq = rx_queue;
+	bp = rxq->bp;
+
 	rc = is_bnxt_in_error(bp);
 	if (rc)
 		return rc;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
 	cpr = rxq->cp_ring;
 	raw_cons = cpr->cp_raw_cons;
 	cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..b5589300c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 }
 
 static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-	struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+	struct qman_fq *rxq = rx_queue;
 	u32 frm_cnt = 0;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
-		DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
-			       rx_queue_id, frm_cnt);
+		DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+			       rx_queue, frm_cnt);
 	}
 	return frm_cnt;
 }
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..b295af2a57 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1011,10 +1011,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused)
 }
 
 static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
 {
 	int32_t ret;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct qbman_swp *swp;
 	struct qbman_fq_query_np_rslt state;
@@ -1031,12 +1030,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = rx_queue;
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
-				rx_queue_id, frame_cnt);
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+				rx_queue, frame_cnt);
 	}
 	return frame_cnt;
 }
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6..460e130a83 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
 
 int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
@@ -476,8 +475,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
 
 int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..40de36cb20 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
 {
 #define EM_RXQ_SCAN_INTERVAL 4
 	volatile struct e1000_rx_desc *rxdp;
 	struct em_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712..3210a0e008 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
 {
 #define IGB_RXQ_SCAN_INTERVAL 4
 	volatile union e1000_adv_rx_desc *rxdp;
 	struct igb_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..5b2d60ad9c 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq)
 	enic_free_rq(rxq);
 }
 
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
-					   uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
 {
-	struct enic *enic = pmd_priv(dev);
+	struct enic *enic;
+	struct vnic_rq *sop_rq;
 	uint32_t queue_count = 0;
 	struct vnic_cq *cq;
 	uint32_t cq_tail;
 	uint16_t cq_idx;
-	int rq_num;
 
-	rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
-	cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+	sop_rq = rx_queue;
+	enic = vnic_dev_priv(sop_rq->vdev);
+	cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
 	cq_idx = cq->to_clean;
 
 	cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc..648d12a1b4 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
 
 int
 fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..eab798e52c 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 }
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
 {
 #define FM10K_RXQ_SCAN_INTERVAL 4
 	volatile union fm10k_rx_desc *rxdp;
 	struct fm10k_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->hw_ring[rxq->next_dd];
 	while ((desc < rxq->nb_desc) &&
 		rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..04791ae7d0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 }
 
 uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
 {
 	/*
 	 * Number of BDs that have been processed by the driver
@@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	 */
 	uint32_t driver_hold_bd_num;
 	struct hns3_rx_queue *rxq;
+	const struct rte_eth_dev *dev;
 	uint32_t fbd_num;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
+	dev = &rte_eth_devices[rxq->port_id];
+
 	fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
 	if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
 	    dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..34a028701f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			struct rte_mempool *mp);
 int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
 int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 3eb82578b0..5493ae6bba 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2117,14 +2117,14 @@ i40e_dev_rx_queue_release(void *rxq)
 }
 
 uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
 {
 #define I40E_RXQ_SCAN_INTERVAL 4
 	volatile union i40e_rx_desc *rxdp;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 	while ((desc < rxq->nb_rx_desc) &&
 		((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e8..a08b80f020 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
-				 uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
 int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 87afc0b4cb..3dc1f04380 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2799,14 +2799,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 
 /* Get the number of used descriptors of a rx queue */
 uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
 {
 #define IAVF_RXQ_SCAN_INTERVAL 4
 	volatile union iavf_rx_desc *rxdp;
 	struct iavf_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..2f7bec2b63 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_rxq_info *qinfo);
 void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
 int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047..61936b0ab1 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1427,14 +1427,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
 {
 #define ICE_RXQ_SCAN_INTERVAL 4
 	volatile union ice_rx_flex_desc *rxdp;
 	struct ice_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 	while ((desc < rxq->nb_rx_desc) &&
 	       rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index b10db0874d..b45abec91a 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -222,7 +222,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 void ice_set_tx_function_flag(struct rte_eth_dev *dev,
 			      struct ice_tx_queue *txq);
 void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd2..437992ecdf 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq)
 		igc_rx_queue_release(rxq);
 }
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
 {
 	/**
 	 * Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
 	struct igc_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index f2b2d75bbc..b0c4b3ebd9 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
 
 int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca24..c5027be1dc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -602,8 +602,7 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
 
 int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755..1f802851e3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define IXGBE_RXQ_SCAN_INTERVAL 4
 	volatile union ixgbe_adv_rx_desc *rxdp;
 	struct ixgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 /**
  * DPDK callback to get the number of used descriptors in a RX queue.
  *
- * @param dev
- *   Pointer to the device structure.
- *
- * @param rx_queue_id
- *   The Rx queue.
+ * @param rx_queue
+ *   The Rx queue pointer.
  *
  * @return
  *   The number of used rx descriptor.
  *   -EINVAL if the queue is invalid
  */
 uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_data *rxq = rx_queue;
+	struct rte_eth_dev *dev;
+
+	if (!rxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	dev = &rte_eth_devices[rxq->port_id];
 
 	if (dev->rx_pkt_burst == NULL ||
 	    dev->rx_pkt_burst == removed_rx_burst) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	rxq = (*priv->rxqs)[rx_queue_id];
-	if (!rxq) {
-		rte_errno = EINVAL;
-		return -rte_errno;
-	}
+
 	return rx_queue_count(rxq);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..5e4ac7324d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
 uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
 			  uint16_t pkts_n);
 int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
 void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		       struct rte_eth_rxq_info *qinfo);
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..30aac371c8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg)
  * For this device that means how many packets are pending in the ring.
  */
 uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
 {
-	struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+	struct hn_rx_queue *rxq = rx_queue;
 
 	return rte_ring_count(rxq->rx_ring);
 }
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 43642408bc..2a2bac9338 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int	hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 void	hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_rxq_info *qinfo);
 void	hn_dev_rx_queue_release(void *arg);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
 int	hn_dev_rx_queue_status(void *rxq, uint16_t offset);
 void	hn_dev_free_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..4b2ac4cc43 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 }
 
 uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
 	uint32_t idx;
 	uint32_t count;
 
-	rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index b0a8bf81b0..0fd50a6c22 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
 } __rte_aligned(64);
 
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
-				       uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 void nfp_net_rx_queue_release(void *rxq);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30b..6696db6f6f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d..e6f8e5bfc1 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
 }
 
 uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
 {
-	struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
-	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_eth_rxq *rxq = rx_queue;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
 	uint32_t head, tail;
 
-	nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+	nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
 	return (tail - head) % rxq->qlen;
 }
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..4b5713f3ec 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
 {
-	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
-	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
-	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_dp_rxq *dp_rxq = rx_queue;
+	const struct sfc_dp_rx *dp_rx;
 	struct sfc_rxq_info *rxq_info;
 
-	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+	rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
 
-	return sap->dp_rx->qdesc_npending(rxq_info->dp);
+	return dp_rx->qdesc_npending(dp_rxq);
 }
 
 /*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..0e87620e42 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
 	if (dev->rx_pkt_burst == NULL)
 		return;
 
-	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
-				nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
 		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
 					NICVF_MAX_RX_FREE_THRESH);
 		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
 }
 
 uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
 {
 	struct nicvf_rxq *rxq;
 
-	rxq = dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
 
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
 	*(uint64_t *)(&pkt->rearm_data) = init.value;
 }
 
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
 uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..569cd6a48f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -446,8 +446,7 @@ int  txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
 
 int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..2a7cfdeedb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define TXGBE_RXQ_SCAN_INTERVAL 4
 	volatile struct txgbe_rx_desc *rxdp;
 	struct txgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..f2b3f142d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
 {
 	struct vhost_queue *vq;
 
-	vq = dev->data->rx_queues[rx_queue_id];
+	vq = rx_queue;
 	if (vq == NULL)
 		return 0;
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674..9642b7c00f 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5060,7 +5060,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	    dev->data->rx_queues[queue_id] == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev, queue_id);
+	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
 }
 
 /**
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d2c9ec42c7..948c0b71c1 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
 
 
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
-					 uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */
 
 typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
  2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
  2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-04 13:55       ` Konstantin Ananyev
  2021-10-05 13:09         ` Thomas Monjalon
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
                         ` (5 subsequent siblings)
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-04 13:55 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Copy the public function pointers (rx_pkt_burst(), etc.) and related
pointers to internal data from the rte_eth_dev structure into a
separate flat array. That array will remain in a public header.
The intention here is to make rte_eth_dev and related structures internal.
That should allow future changes to the core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
The plan is to keep a minimal part of the rte_eth_dev data public,
so we can still use inline functions for the 'fast' calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
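
A minimal sketch of the resulting fast-path lookup (the wrapper name is
hypothetical; debug checks and callback handling are omitted):

  #include <rte_ethdev.h>

  /* Illustration only: resolve the PMD burst function through the public
   * flat array instead of dereferencing rte_eth_dev directly. */
  static inline uint16_t
  example_rx_burst(uint16_t port_id, uint16_t queue_id,
          struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
  {
          const struct rte_eth_fp_ops *p = &rte_eth_fp_ops[port_id];
          void *qd = p->rxq.data[queue_id];  /* opaque PMD queue data */

          return p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
  }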

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_private.h  |  7 +++++
 lib/ethdev/rte_ethdev.c      | 27 +++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h | 45 +++++++++++++++++++++++++++++++
 4 files changed, 131 insertions(+)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..3eeda6e9f9 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
 		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
 	return str == NULL ? -1 : 0;
 }
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+		__rte_unused struct rte_mbuf **rx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_eth_fp_ops dummy_ops = {
+		.rx_pkt_burst = dummy_eth_rx_burst,
+		.tx_pkt_burst = dummy_eth_tx_burst,
+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
+		.txq = {.data = dummy_data, .clbk = dummy_data,},
+	};
+
+	*fpo = dummy_ops;
+}
+
+void
+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev)
+{
+	fpo->rx_pkt_burst = dev->rx_pkt_burst;
+	fpo->tx_pkt_burst = dev->tx_pkt_burst;
+	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
+	fpo->rx_queue_count = dev->rx_queue_count;
+	fpo->rx_descriptor_status = dev->rx_descriptor_status;
+	fpo->tx_descriptor_status = dev->tx_descriptor_status;
+
+	fpo->rxq.data = dev->data->rx_queues;
+	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+	fpo->txq.data = dev->data->tx_queues;
+	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 3724429577..40333e7651 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
 /* Parse devargs value for representor parameter. */
 int rte_eth_devargs_parse_representor_ports(char *str, void *data);
 
+/* reset eth 'fast' API to dummy values */
+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
+
+/* setup eth 'fast' API to ethdev values */
+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev);
+
 #endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 424bc260fa..036c82cbfb 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
+/* public 'fast' API */
+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 /* spinlock for eth device callbacks */
 static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 		rte_eth_dev_callback_process(eth_dev,
 				RTE_ETH_EVENT_DESTROY, NULL);
 
+	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
+
 	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
 
 	eth_dev->state = RTE_ETH_DEV_UNUSED;
@@ -1788,6 +1793,9 @@ rte_eth_dev_start(uint16_t port_id)
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 
+	/* expose selection of PMD rx/tx function */
+	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
+
 	rte_ethdev_trace_start(port_id);
 	return 0;
 }
@@ -1810,6 +1818,9 @@ rte_eth_dev_stop(uint16_t port_id)
 		return 0;
 	}
 
+	/* point rx/tx functions to dummy ones */
+	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
+
 	dev->data->dev_started = 0;
 	ret = (*dev->dev_ops->dev_stop)(dev);
 	rte_ethdev_trace_stop(port_id, ret);
@@ -4568,6 +4579,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
 	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
 }
 
+RTE_INIT(eth_dev_init_fp_ops)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
+		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
+}
+
 RTE_INIT(eth_dev_init_cb_lists)
 {
 	uint16_t i;
@@ -4736,6 +4755,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return;
 
+	/*
+	 * for secondary process, at that point we expect device
+	 * to be already 'usable', so shared data and all function pointers
+	 * for 'fast' devops have to be setup properly inside rte_eth_dev.
+	 */
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+
 	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
 
 	dev->state = RTE_ETH_DEV_ATTACHED;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 948c0b71c1..fe47a660c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 /**< @internal Check the status of a Tx descriptor */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev RX/TX
+ * queues data.
+ * The main purpose of exposing these pointers at all is to allow the
+ * compiler to fetch this data for 'fast' ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+	void **data;
+	/**< points to array of internal queue data pointers */
+	void **clbk;
+	/**< points to array of queue callback data pointers */
+};
+
+/**
+ * @internal
+ * The 'fast' ethdev functions and related data are held in a flat array,
+ * one entry per ethdev.
+ */
+struct rte_eth_fp_ops {
+
+	/** first 64B line */
+	eth_rx_burst_t rx_pkt_burst;
+	/**< PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst;
+	/**< PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+	uintptr_t reserved[2];
+
+	/** second 64B line */
+	struct rte_ethdev_qdata rxq;
+	struct rte_ethdev_qdata txq;
+	uintptr_t reserved2[4];
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 
 /**
  * @internal
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
                         ` (2 preceding siblings ...)
  2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
@ 2021-10-04 13:56       ` Konstantin Ananyev
  2021-10-05  9:54         ` David Marchand
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
                         ` (4 subsequent siblings)
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-04 13:56 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Rework the 'fast' burst functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user applications are required)
and PMD developers (no changes in PMDs are required).
One extra thing to note: with these changes, RX/TX callback invocation
causes an extra function call. That might cause some insignificant
slowdown for code-paths where RX/TX callbacks are heavily involved.
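
For example, an existing application receive loop keeps working unmodified;
only the inline implementation underneath changes (a sketch, with an
arbitrary burst size):

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define BURST_SZ 32

  /* Unchanged application code: the same rte_eth_rx_burst() call now
   * resolves the PMD function via rte_eth_fp_ops[] internally. */
  static void
  poll_rx_queue(uint16_t port_id, uint16_t queue_id)
  {
          struct rte_mbuf *pkts[BURST_SZ];
          uint16_t i, nb_rx;

          nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SZ);
          for (i = 0; i < nb_rx; i++)
                  rte_pktmbuf_free(pkts[i]); /* application processing would go here */
  }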

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c |  31 +++++
 lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
 lib/ethdev/version.map      |   5 +
 3 files changed, 210 insertions(+), 68 deletions(-)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 3eeda6e9f9..27d29b2ac6 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->txq.data = dev->data->tx_queues;
 	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
 }
+
+uint16_t
+__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+	void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+		cb = cb->next;
+	}
+
+	return nb_rx;
+}
+
+uint16_t
+__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+				cb->param);
+		cb = cb->next;
+	}
+
+	return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9642b7c00f..7f68be406e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
 
 #include <rte_ethdev_core.h>
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called at exit from the rte_eth_rx_burst() implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *   The address of an array of pointers to *rte_mbuf* structures that
+ *   have been retrieved from the device.
+ * @param nb_rx
+ *   The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *   The number of elements in *rx_pkts* array.
+ * @param opaque
+ *   Opaque pointer of RX queue callback related data.
+ *
+ * @return
+ *  The number of packets effectively supplied to the *rx_pkts* array.
+ */
+uint16_t __rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+		void *opaque);
+
 /**
  *
  * Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4995,23 +5022,37 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	uint16_t nb_rx;
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-				     rx_pkts, nb_pkts);
+
+	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-						nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_rx = __rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb);
 #endif
 
 	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
@@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-	    dev->data->rx_queues[queue_id] == NULL)
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+	if (qd == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+	return (int)(*p->rx_queue_count)(qd);
 }
 
 /**
@@ -5133,21 +5179,30 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
-
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+	return (*p->rx_descriptor_status)(qd, offset);
 }
 
 /**@{@name Tx hardware descriptor states
@@ -5194,23 +5249,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
 	uint16_t queue_id, uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
-
-	return (*dev->tx_descriptor_status)(txq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+	return (*p->tx_descriptor_status)(qd, offset);
 }
 
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called on entry to the rte_eth_tx_burst() implementation.
+ * Does necessary pre-processing - invokes TX callbacks if any, etc.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue through which output packets must be
+ *   sent.
+ * @param tx_pkts
+ *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *   which contain the output packets.
+ * @param nb_pkts
+ *   The maximum number of packets to transmit.
+ * @return
+ *   The number of output packets to transmit.
+ */
+uint16_t __rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
 /**
  * Send a burst of output packets on a transmit queue of an Ethernet device.
  *
@@ -5281,20 +5367,34 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5302,21 +5402,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_pkts = __rte_eth_tx_prolog(port_id, queue_id, tx_pkts,
+				nb_pkts, cb);
 #endif
 
-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+	return nb_pkts;
 }
 
 /**
@@ -5379,31 +5474,42 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
 		rte_errno = ENODEV;
 		return 0;
 	}
 #endif
 
-	dev = &rte_eth_devices[port_id];
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 #endif
 
-	if (!dev->tx_pkt_prepare)
+	if (!p->tx_pkt_prepare)
 		return nb_pkts;
 
-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
-			tx_pkts, nb_pkts);
+	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
 }
 
 #else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..2348ec3c3c 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -1,6 +1,10 @@
 DPDK_22 {
 	global:
 
+	# internal functions called by public inline ones
+	__rte_eth_rx_epilog;
+	__rte_eth_tx_prolog;
+
 	rte_eth_add_first_rx_callback;
 	rte_eth_add_rx_callback;
 	rte_eth_add_tx_callback;
@@ -76,6 +80,7 @@ DPDK_22 {
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
 	rte_eth_find_next_sibling;
+	rte_eth_fp_ops;
 	rte_eth_iterator_cleanup;
 	rte_eth_iterator_init;
 	rte_eth_iterator_next;
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
                         ` (3 preceding siblings ...)
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-04 13:56       ` Konstantin Ananyev
  2021-10-05 13:13         ` Thomas Monjalon
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 6/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
                         ` (3 subsequent siblings)
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-04 13:56 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Introduce rte_eth_macaddrs_get() to allow the user to retrieve all Ethernet
addresses assigned to a given port.
Change testpmd to use this new function and avoid referencing
rte_eth_devices[] directly.
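
A usage sketch of the new call (the helper name and the 16-entry bound are
arbitrary; error handling is kept minimal):

  #include <stdio.h>
  #include <rte_common.h>
  #include <rte_ethdev.h>
  #include <rte_ether.h>

  /* Sketch: print all MAC addresses assigned to a port via the new API,
   * without touching rte_eth_devices[] directly. */
  static void
  print_port_macs(uint16_t port_id)
  {
          char buf[RTE_ETHER_ADDR_FMT_SIZE];
          struct rte_ether_addr ma[16]; /* arbitrary bound for the sketch */
          int i, rc;

          rc = rte_eth_macaddrs_get(port_id, ma, RTE_DIM(ma));
          if (rc < 0) {
                  printf("port %u: rte_eth_macaddrs_get() failed: %d\n",
                          port_id, rc);
                  return;
          }

          for (i = 0; i < rc; i++) {
                  rte_ether_format_addr(buf, sizeof(buf), &ma[i]);
                  printf("port %u mac[%d]: %s\n", port_id, i, buf);
          }
  }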

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test-pmd/config.c                  | 23 +++++++++++------------
 doc/guides/rel_notes/release_21_11.rst |  5 +++++
 lib/ethdev/rte_ethdev.c                | 25 +++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.h                | 19 +++++++++++++++++++
 lib/ethdev/version.map                 |  3 +++
 5 files changed, 63 insertions(+), 12 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9c66329e96..7221644230 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5215,20 +5215,20 @@ show_macs(portid_t port_id)
 {
 	char buf[RTE_ETHER_ADDR_FMT_SIZE];
 	struct rte_eth_dev_info dev_info;
-	struct rte_ether_addr *addr;
-	uint32_t i, num_macs = 0;
-	struct rte_eth_dev *dev;
-
-	dev = &rte_eth_devices[port_id];
+	int32_t i, rc, num_macs = 0;
 
 	if (eth_dev_info_get_print_err(port_id, &dev_info))
 		return;
 
-	for (i = 0; i < dev_info.max_mac_addrs; i++) {
-		addr = &dev->data->mac_addrs[i];
+	struct rte_ether_addr addr[dev_info.max_mac_addrs];
+	rc = rte_eth_macaddrs_get(port_id, addr, dev_info.max_mac_addrs);
+	if (rc < 0)
+		return;
+
+	for (i = 0; i < rc; i++) {
 
 		/* skip zero address */
-		if (rte_is_zero_ether_addr(addr))
+		if (rte_is_zero_ether_addr(&addr[i]))
 			continue;
 
 		num_macs++;
@@ -5236,14 +5236,13 @@ show_macs(portid_t port_id)
 
 	printf("Number of MAC address added: %d\n", num_macs);
 
-	for (i = 0; i < dev_info.max_mac_addrs; i++) {
-		addr = &dev->data->mac_addrs[i];
+	for (i = 0; i < rc; i++) {
 
 		/* skip zero address */
-		if (rte_is_zero_ether_addr(addr))
+		if (rte_is_zero_ether_addr(&addr[i]))
 			continue;
 
-		rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, addr);
+		rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, &addr[i]);
 		printf("  %s\n", buf);
 	}
 }
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index fd80538b6c..91c392c14e 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -125,6 +125,11 @@ New Features
   * Added tests to validate packets hard expiry.
   * Added tests to verify tunnel header verification in IPsec inbound.
 
+* **Added new function to the ethdev library.**
+
+  * Added ``rte_eth_macaddrs_get`` to allow the user to retrieve all Ethernet
+    addresses assigned to a given Ethernet port.
+
 
 Removed Items
 -------------
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 036c82cbfb..b051eff70e 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3574,6 +3574,31 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	return ret;
 }
 
+int
+rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[], uint32_t num)
+{
+	int32_t ret;
+	struct rte_eth_dev *dev;
+	struct rte_eth_dev_info dev_info;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
+	if (ma == NULL) {
+		RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__);
+		return -EINVAL;
+	}
+
+	num = RTE_MIN(dev_info.max_mac_addrs, num);
+	memcpy(ma, dev->data->mac_addrs, num * sizeof(ma[0]));
+
+	return num;
+}
+
 int
 rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
 {
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 7f68be406e..047f7c9c5a 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3037,6 +3037,25 @@ int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
  */
 int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
 
+/**
+ * Retrieve the Ethernet addresses of an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param ma
+ *   A pointer to an array of structures of type *ether_addr* to be filled with
+ *   the Ethernet addresses of the Ethernet device.
+ * @param num
+ *   Number of elements in the *ma* array.
+ * @return
+ *   - number of retrieved addresses if successful
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-EINVAL) if bad parameter.
+ */
+__rte_experimental
+int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[],
+	uint32_t num);
+
 /**
  * Retrieve the contextual information of an Ethernet device.
  *
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 2348ec3c3c..0881202381 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -252,6 +252,9 @@ EXPERIMENTAL {
 	rte_mtr_meter_policy_delete;
 	rte_mtr_meter_policy_update;
 	rte_mtr_meter_policy_validate;
+
+	# added in 21.11
+	rte_eth_macaddrs_get;
 };
 
 INTERNAL {
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v4 6/7] ethdev: remove legacy Rx descriptor done API
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
                         ` (4 preceding siblings ...)
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
@ 2021-10-04 13:56       ` Konstantin Ananyev
  2021-10-05 13:14         ` Thomas Monjalon
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
                         ` (2 subsequent siblings)
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-04 13:56 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

rte_eth_rx_descriptor_status() should be used as a replacement.
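
A minimal migration sketch (the descriptor-state value is assumed from the
existing status API):

  #include <rte_ethdev.h>

  /* Old (removed): rte_eth_rx_descriptor_done(port_id, queue_id, offset)
   * reported whether the DD bit was set.
   * Equivalent check with the remaining status API: */
  static inline int
  rx_desc_is_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
  {
          return rte_eth_rx_descriptor_status(port_id, queue_id, offset) ==
                          RTE_ETH_RX_DESC_DONE;
  }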

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/nics/features.rst            |  6 +-----
 doc/guides/rel_notes/deprecation.rst    |  5 -----
 doc/guides/rel_notes/release_21_11.rst  |  4 ++++
 drivers/net/e1000/e1000_ethdev.h        |  4 ----
 drivers/net/e1000/em_ethdev.c           |  1 -
 drivers/net/e1000/em_rxtx.c             | 17 ----------------
 drivers/net/e1000/igb_ethdev.c          |  2 --
 drivers/net/e1000/igb_rxtx.c            | 17 ----------------
 drivers/net/fm10k/fm10k.h               |  3 ---
 drivers/net/fm10k/fm10k_ethdev.c        |  1 -
 drivers/net/fm10k/fm10k_rxtx.c          | 25 ------------------------
 drivers/net/i40e/i40e_ethdev.c          |  1 -
 drivers/net/i40e/i40e_ethdev_vf.c       |  1 -
 drivers/net/i40e/i40e_rxtx.c            | 26 -------------------------
 drivers/net/i40e/i40e_rxtx.h            |  1 -
 drivers/net/igc/igc_ethdev.c            |  1 -
 drivers/net/igc/igc_txrx.c              | 18 -----------------
 drivers/net/igc/igc_txrx.h              |  2 --
 drivers/net/ixgbe/ixgbe_ethdev.c        |  2 --
 drivers/net/ixgbe/ixgbe_ethdev.h        |  2 --
 drivers/net/ixgbe/ixgbe_rxtx.c          | 18 -----------------
 drivers/net/octeontx2/otx2_ethdev.c     |  1 -
 drivers/net/octeontx2/otx2_ethdev.h     |  1 -
 drivers/net/octeontx2/otx2_ethdev_ops.c | 12 ------------
 drivers/net/sfc/sfc_ethdev.c            | 17 ----------------
 drivers/net/virtio/virtio_ethdev.c      |  1 -
 lib/ethdev/rte_ethdev.c                 |  1 -
 lib/ethdev/rte_ethdev.h                 | 25 ------------------------
 lib/ethdev/rte_ethdev_core.h            |  4 ----
 29 files changed, 5 insertions(+), 214 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4fce8cd1c9..a02ef25409 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -662,14 +662,10 @@ Rx descriptor status
 --------------------
 
 Supports check the status of a Rx descriptor. When ``rx_descriptor_status`` is
-used, status can be "Available", "Done" or "Unavailable". When
-``rx_descriptor_done`` is used, status can be "DD bit is set" or "DD bit is
-not set".
+used, status can be "Available", "Done" or "Unavailable".
 
 * **[implements] rte_eth_dev**: ``rx_descriptor_status``.
 * **[related]    API**: ``rte_eth_rx_descriptor_status()``.
-* **[implements] rte_eth_dev**: ``rx_descriptor_done``.
-* **[related]    API**: ``rte_eth_rx_descriptor_done()``.
 
 
 .. _nic_features_tx_descriptor_status:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05fc2fdee7..82e843a0b3 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -106,11 +106,6 @@ Deprecation Notices
   the device packet overhead can be calculated as:
   ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
 
-* ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
-  will be removed in 21.11.
-  Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
-  APIs can be used as replacement.
-
 * ethdev: The port mirroring API can be replaced with a more fine grain flow API.
   The structs ``rte_eth_mirror_conf``, ``rte_eth_vlan_mirror`` and the functions
   ``rte_eth_mirror_rule_set``, ``rte_eth_mirror_rule_reset`` will be marked
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 91c392c14e..6055551443 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -150,6 +150,10 @@ Removed Items
   blacklist/whitelist are removed. Users must use the new
   block/allow list arguments.
 
+* ethdev: Removed ``rx_descriptor_done`` dev_ops and
+  ``rte_eth_rx_descriptor_done``.  Existing ``rte_eth_rx_descriptor_status``
+  APIs can be used as a replacement.
+
 
 API Changes
 -----------
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 460e130a83..fff52958df 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -401,8 +401,6 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 
 uint32_t eth_igb_rx_queue_count(void *rx_queue);
 
-int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
@@ -477,8 +475,6 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 
 uint32_t eth_em_rx_queue_count(void *rx_queue);
 
-int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b02..9e157e4ffe 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -247,7 +247,6 @@ eth_em_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &eth_em_ops;
 	eth_dev->rx_queue_count = eth_em_rx_queue_count;
-	eth_dev->rx_descriptor_done   = eth_em_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = eth_em_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = eth_em_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = (eth_rx_burst_t)&eth_em_recv_pkts;
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 40de36cb20..13ea3a77f4 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1511,23 +1511,6 @@ eth_em_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int
-eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile struct e1000_rx_desc *rxdp;
-	struct em_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(offset >= rxq->nb_rx_desc))
-		return 0;
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->status & E1000_RXD_STAT_DD);
-}
-
 int
 eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index d80fad01e3..e1bc3852fc 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -726,7 +726,6 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &eth_igb_ops;
 	eth_dev->rx_queue_count = eth_igb_rx_queue_count;
-	eth_dev->rx_descriptor_done   = eth_igb_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = eth_igb_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = eth_igb_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &eth_igb_recv_pkts;
@@ -920,7 +919,6 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_FUNC_TRACE();
 
 	eth_dev->dev_ops = &igbvf_eth_dev_ops;
-	eth_dev->rx_descriptor_done   = eth_igb_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = eth_igb_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = eth_igb_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &eth_igb_recv_pkts;
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 3210a0e008..0ee1b8d48d 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1791,23 +1791,6 @@ eth_igb_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int
-eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union e1000_adv_rx_desc *rxdp;
-	struct igb_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(offset >= rxq->nb_rx_desc))
-		return 0;
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->wb.upper.status_error & E1000_RXD_STAT_DD);
-}
-
 int
 eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 648d12a1b4..17c73c4dc5 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -326,9 +326,6 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 uint32_t
 fm10k_dev_rx_queue_count(void *rx_queue);
 
-int
-fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int
 fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e40..f9c287a6ba 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -3062,7 +3062,6 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
 
 	dev->dev_ops = &fm10k_eth_dev_ops;
 	dev->rx_queue_count = fm10k_dev_rx_queue_count;
-	dev->rx_descriptor_done	= fm10k_dev_rx_descriptor_done;
 	dev->rx_descriptor_status = fm10k_dev_rx_descriptor_status;
 	dev->tx_descriptor_status = fm10k_dev_tx_descriptor_status;
 	dev->rx_pkt_burst = &fm10k_recv_pkts;
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index eab798e52c..b3515ae96a 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -393,31 +393,6 @@ fm10k_dev_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int
-fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union fm10k_rx_desc *rxdp;
-	struct fm10k_rx_queue *rxq = rx_queue;
-	uint16_t desc;
-	int ret;
-
-	if (unlikely(offset >= rxq->nb_desc)) {
-		PMD_DRV_LOG(ERR, "Invalid RX descriptor offset %u", offset);
-		return 0;
-	}
-
-	desc = rxq->next_dd + offset;
-	if (desc >= rxq->nb_desc)
-		desc -= rxq->nb_desc;
-
-	rxdp = &rxq->hw_ring[desc];
-
-	ret = !!(rxdp->w.status &
-			rte_cpu_to_le_16(FM10K_RXD_STATUS_DD));
-
-	return ret;
-}
-
 int
 fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index bd97d93dd7..e5e26783bf 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1434,7 +1434,6 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 
 	dev->dev_ops = &i40e_eth_dev_ops;
 	dev->rx_queue_count = i40e_dev_rx_queue_count;
-	dev->rx_descriptor_done = i40e_dev_rx_descriptor_done;
 	dev->rx_descriptor_status = i40e_dev_rx_descriptor_status;
 	dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
 	dev->rx_pkt_burst = i40e_recv_pkts;
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index e8dd6d1dab..d669ffd250 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1571,7 +1571,6 @@ i40evf_dev_init(struct rte_eth_dev *eth_dev)
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &i40evf_eth_dev_ops;
 	eth_dev->rx_queue_count       = i40e_dev_rx_queue_count;
-	eth_dev->rx_descriptor_done   = i40e_dev_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = i40e_dev_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &i40e_recv_pkts;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 5493ae6bba..fad432d1bd 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2145,32 +2145,6 @@ i40e_dev_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int
-i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union i40e_rx_desc *rxdp;
-	struct i40e_rx_queue *rxq = rx_queue;
-	uint16_t desc;
-	int ret;
-
-	if (unlikely(offset >= rxq->nb_rx_desc)) {
-		PMD_DRV_LOG(ERR, "Invalid RX descriptor id %u", offset);
-		return 0;
-	}
-
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &(rxq->rx_ring[desc]);
-
-	ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
-		I40E_RXD_QW1_STATUS_MASK) >> I40E_RXD_QW1_STATUS_SHIFT) &
-				(1 << I40E_RX_DESC_STATUS_DD_SHIFT));
-
-	return ret;
-}
-
 int
 i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index a08b80f020..d495a741b6 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -226,7 +226,6 @@ int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
 uint32_t i40e_dev_rx_queue_count(void *rx_queue);
-int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a095483..9ddfe68a78 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1227,7 +1227,6 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 	dev->dev_ops = &eth_igc_ops;
-	dev->rx_descriptor_done	= eth_igc_rx_descriptor_done;
 	dev->rx_queue_count = eth_igc_rx_queue_count;
 	dev->rx_descriptor_status = eth_igc_rx_descriptor_status;
 	dev->tx_descriptor_status = eth_igc_tx_descriptor_status;
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 437992ecdf..2498cfd290 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -756,24 +756,6 @@ uint32_t eth_igc_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union igc_adv_rx_desc *rxdp;
-	struct igc_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(!rxq || offset >= rxq->nb_rx_desc))
-		return 0;
-
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->wb.upper.status_error &
-			rte_cpu_to_le_32(IGC_RXD_STAT_DD));
-}
-
 int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
 	struct igc_rx_queue *rxq = rx_queue;
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index b0c4b3ebd9..3b4c7450cd 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -24,8 +24,6 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 
 uint32_t eth_igc_rx_queue_count(void *rx_queue);
 
-int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset);
 
 int eth_igc_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 47693c0c47..78f61d3dac 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1057,7 +1057,6 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 
 	eth_dev->dev_ops = &ixgbe_eth_dev_ops;
 	eth_dev->rx_queue_count       = ixgbe_dev_rx_queue_count;
-	eth_dev->rx_descriptor_done   = ixgbe_dev_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = ixgbe_dev_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = ixgbe_dev_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
@@ -1546,7 +1545,6 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_FUNC_TRACE();
 
 	eth_dev->dev_ops = &ixgbevf_eth_dev_ops;
-	eth_dev->rx_descriptor_done   = ixgbe_dev_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = ixgbe_dev_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = ixgbe_dev_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index c5027be1dc..6b7a4079db 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -604,8 +604,6 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 
 uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
 
-int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 1f802851e3..8e056db761 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3281,24 +3281,6 @@ ixgbe_dev_rx_queue_count(void *rx_queue)
 	return desc;
 }
 
-int
-ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union ixgbe_adv_rx_desc *rxdp;
-	struct ixgbe_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(offset >= rxq->nb_rx_desc))
-		return 0;
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->wb.upper.status_error &
-			rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD));
-}
-
 int
 ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e..4b33056085 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2449,7 +2449,6 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	int rc, max_entries;
 
 	eth_dev->dev_ops = &otx2_eth_dev_ops;
-	eth_dev->rx_descriptor_done = otx2_nix_rx_descriptor_done;
 	eth_dev->rx_queue_count = otx2_nix_rx_queue_count;
 	eth_dev->rx_descriptor_status = otx2_nix_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = otx2_nix_tx_descriptor_status;
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 6696db6f6f..d28fcaa281 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -433,7 +433,6 @@ int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
-int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index e6f8e5bfc1..3a763f691b 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -365,18 +365,6 @@ nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
 	return 0;
 }
 
-int
-otx2_nix_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	struct otx2_eth_rxq *rxq = rx_queue;
-	uint32_t head, tail;
-
-	nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
-			     &head, &tail, rxq->rq);
-
-	return nix_offset_has_packet(head, tail, offset);
-}
-
 int
 otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 4b5713f3ec..c9b01480f8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1296,21 +1296,6 @@ sfc_rx_queue_count(void *rx_queue)
 	return dp_rx->qdesc_npending(dp_rxq);
 }
 
-/*
- * The function is used by the secondary process as well. It must not
- * use any process-local pointers from the adapter data.
- */
-static int
-sfc_rx_descriptor_done(void *queue, uint16_t offset)
-{
-	struct sfc_dp_rxq *dp_rxq = queue;
-	const struct sfc_dp_rx *dp_rx;
-
-	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
-
-	return offset < dp_rx->qdesc_npending(dp_rxq);
-}
-
 /*
  * The function is used by the secondary process as well. It must not
  * use any process-local pointers from the adapter data.
@@ -2045,7 +2030,6 @@ sfc_eth_dev_set_ops(struct rte_eth_dev *dev)
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 
 	dev->rx_queue_count = sfc_rx_queue_count;
-	dev->rx_descriptor_done = sfc_rx_descriptor_done;
 	dev->rx_descriptor_status = sfc_rx_descriptor_status;
 	dev->tx_descriptor_status = sfc_tx_descriptor_status;
 	dev->dev_ops = &sfc_eth_dev_ops;
@@ -2153,7 +2137,6 @@ sfc_eth_dev_secondary_init(struct rte_eth_dev *dev, uint32_t logtype_main)
 	dev->tx_pkt_prepare = dp_tx->pkt_prepare;
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 	dev->rx_queue_count = sfc_rx_queue_count;
-	dev->rx_descriptor_done = sfc_rx_descriptor_done;
 	dev->rx_descriptor_status = sfc_rx_descriptor_status;
 	dev->tx_descriptor_status = sfc_tx_descriptor_status;
 	dev->dev_ops = &sfc_eth_dev_secondary_ops;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index da1633d77e..c82089930f 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1894,7 +1894,6 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 	}
 
 	eth_dev->dev_ops = &virtio_eth_dev_ops;
-	eth_dev->rx_descriptor_done = virtio_dev_rx_queue_done;
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		set_rxtx_funcs(eth_dev);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index b051eff70e..6b8e32a38a 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -593,7 +593,6 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	eth_dev->tx_pkt_burst = NULL;
 	eth_dev->tx_pkt_prepare = NULL;
 	eth_dev->rx_queue_count = NULL;
-	eth_dev->rx_descriptor_done = NULL;
 	eth_dev->rx_descriptor_status = NULL;
 	eth_dev->tx_descriptor_status = NULL;
 	eth_dev->dev_ops = NULL;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 047f7c9c5a..992ca4ee0d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5128,31 +5128,6 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	return (int)(*p->rx_queue_count)(qd);
 }
 
-/**
- * Check if the DD bit of the specific RX descriptor in the queue has been set
- *
- * @param port_id
- *  The port identifier of the Ethernet device.
- * @param queue_id
- *  The queue id on the specific port.
- * @param offset
- *  The offset of the descriptor ID from tail.
- * @return
- *  - (1) if the specific DD bit is set.
- *  - (0) if the specific DD bit is not set.
- *  - (-ENODEV) if *port_id* invalid.
- *  - (-ENOTSUP) if the device does not support this function
- */
-__rte_deprecated
-static inline int
-rte_eth_rx_descriptor_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_done, -ENOTSUP);
-	return (*dev->rx_descriptor_done)(dev->data->rx_queues[queue_id], offset);
-}
-
 /**@{@name Rx hardware descriptor states
  * @see rte_eth_rx_descriptor_status
  */
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index fe47a660c7..63078e1ef4 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -44,9 +44,6 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */
 
-typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
-/**< @internal Check DD bit of specific RX descriptor */
-
 typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 /**< @internal Check the status of a Rx descriptor */
 
@@ -129,7 +126,6 @@ struct rte_eth_dev {
 	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
 
 	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_done_t   rx_descriptor_done;   /**< Check rxd DD bit. */
 	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
 	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
 
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
                         ` (5 preceding siblings ...)
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 6/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
@ 2021-10-04 13:56       ` Konstantin Ananyev
  2021-10-05 10:04         ` David Marchand
  2021-10-05 13:24         ` Thomas Monjalon
  2021-10-06 16:42       ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
  8 siblings, 2 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-04 13:56 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into private header (ethdev_driver.h).
Few minor changes to keep DPDK building after that.
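
For out-of-tree code that dereferences these structures, the expected impact is
limited to switching to the driver-facing header, as the in-tree hunks below do.
A minimal sketch (the helper name is hypothetical):

#include <ethdev_driver.h>	/* was: #include <rte_ethdev.h> */

/* Hypothetical helper in an out-of-tree PMD: struct rte_eth_dev and
 * struct rte_eth_dev_data are now only visible through ethdev_driver.h.
 */
static void *
my_pmd_dev_private(struct rte_eth_dev *eth_dev)
{
	return eth_dev->data->dev_private;
}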

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst        |   6 +
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/netvsc/hn_var.h                   |   1 +
 lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
 lib/ethdev/version.map                        |   2 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 lib/metrics/rte_metrics_telemetry.c           |   2 +-
 13 files changed, 165 insertions(+), 152 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 6055551443..2944149943 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -228,6 +228,12 @@ ABI Changes
   to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
   is used by  public inline function ``rte_eth_rx_queue_count``.
 
+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+  private data structures. ``rte_eth_devices[]`` can't be accessible directly
+  by user any more. While it is an ABI breakage, this change is intended
+  to be transparent for both users (no changes in user app is required) and
+  PMD developers (no changes in PMD is required).
+
 
 Known Issues
 ------------
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
 
 #include <rte_atomic.h>
 #include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_spinlock.h>
 
 #include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..f0b72e05c2 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
 
 #include <cryptodev_pmd.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_event_crypto_adapter.h>
 
 #include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
 #include <rte_mbuf.h>
 #include <rte_io.h>
 #include <rte_rwlock.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "../cxgbe_compat.h"
 #include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
 #include <unistd.h>
 #include <stdarg.h>
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_eth_ctrl.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2a2bac9338..74e6e6010d 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
  */
 
 #include <rte_eal_paging.h>
+#include <ethdev_driver.h>
 
 /*
  * Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index cc2c75261c..63b04dce32 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -17,6 +17,155 @@
 
 #include <rte_ethdev.h>
 
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+	struct rte_eth_rxtx_callback *next;
+	union{
+		rte_rx_callback_fn rx;
+		rte_tx_callback_fn tx;
+	} fn;
+	void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< Pointer to PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+
+	/**
+	 * Next two fields are per-device data but *data is shared between
+	 * primary and secondary processes and *process_private is per-process
+	 * private. The second one is managed by PMDs if necessary.
+	 */
+	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
+	void *process_private; /**< Pointer to per-process device data. */
+	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	struct rte_device *device; /**< Backing device */
+	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+	/** User application callbacks for NIC interrupts */
+	struct rte_eth_dev_cb_list link_intr_cbs;
+	/**
+	 * User-supplied functions called from rx_burst to post-process
+	 * received packets before passing them to the user
+	 */
+	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	/**
+	 * User-supplied functions called from tx_burst to pre-process
+	 * received packets before passing them to the driver for transmission.
+	 */
+	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	enum rte_eth_dev_state state; /**< Flag indicating the port state */
+	void *security_ctx; /**< Context for security ops */
+
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+	void **rx_queues; /**< Array of pointers to RX queues. */
+	void **tx_queues; /**< Array of pointers to TX queues. */
+	uint16_t nb_rx_queues; /**< Number of RX queues. */
+	uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
+
+	void *dev_private;
+			/**< PMD-specific private data.
+			 *   @see rte_eth_dev_release_port()
+			 */
+
+	struct rte_eth_link dev_link;   /**< Link-level information & status. */
+	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
+	uint16_t mtu;                   /**< Maximum Transmission Unit. */
+	uint32_t min_rx_buf_size;
+			/**< Common RX buffer size handled by all queues. */
+
+	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+	struct rte_ether_addr *mac_addrs;
+			/**< Device Ethernet link address.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+			/**< Bitmap associating MAC addresses to pools. */
+	struct rte_ether_addr *hash_mac_addrs;
+			/**< Device Ethernet MAC addresses of hash filtering.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint16_t port_id;           /**< Device [external] port identifier. */
+
+	__extension__
+	uint8_t promiscuous   : 1,
+		/**< RX promiscuous mode ON(1) / OFF(0). */
+		scattered_rx : 1,
+		/**< RX of scattered packets is ON(1) / OFF(0) */
+		all_multicast : 1,
+		/**< RX all multicast mode ON(1) / OFF(0). */
+		dev_started : 1,
+		/**< Device state: STARTED(1) / STOPPED(0). */
+		lro         : 1,
+		/**< RX LRO is ON(1) / OFF(0) */
+		dev_configured : 1;
+		/**< Indicates whether the device is configured.
+		 *   CONFIGURED(1) / NOT CONFIGURED(0).
+		 */
+	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint32_t dev_flags;             /**< Capabilities. */
+	int numa_node;                  /**< NUMA node connection. */
+	struct rte_vlan_filter_conf vlan_filter_conf;
+			/**< VLAN filter configuration. */
+	struct rte_eth_dev_owner owner; /**< The port owner. */
+	uint16_t representor_id;
+			/**< Switch-specific identifier.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
+
+	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
 /**< @internal Declaration of the hairpin peer queue information structure. */
 struct rte_hairpin_peer_info;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 63078e1ef4..2d07db0811 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -95,147 +95,4 @@ struct rte_eth_fp_ops {
 
 extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
 
-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
-	struct rte_eth_rxtx_callback *next;
-	union{
-		rte_rx_callback_fn rx;
-		rte_tx_callback_fn tx;
-	} fn;
-	void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
-	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
-	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
-	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
-	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
-	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
-	/**
-	 * Next two fields are per-device data but *data is shared between
-	 * primary and secondary processes and *process_private is per-process
-	 * private. The second one is managed by PMDs if necessary.
-	 */
-	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
-	void *process_private; /**< Pointer to per-process device data. */
-	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
-	struct rte_device *device; /**< Backing device */
-	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
-	/** User application callbacks for NIC interrupts */
-	struct rte_eth_dev_cb_list link_intr_cbs;
-	/**
-	 * User-supplied functions called from rx_burst to post-process
-	 * received packets before passing them to the user
-	 */
-	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	/**
-	 * User-supplied functions called from tx_burst to pre-process
-	 * received packets before passing them to the driver for transmission.
-	 */
-	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	enum rte_eth_dev_state state; /**< Flag indicating the port state */
-	void *security_ctx; /**< Context for security ops */
-
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
-	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
-	void **rx_queues; /**< Array of pointers to RX queues. */
-	void **tx_queues; /**< Array of pointers to TX queues. */
-	uint16_t nb_rx_queues; /**< Number of RX queues. */
-	uint16_t nb_tx_queues; /**< Number of TX queues. */
-
-	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
-
-	void *dev_private;
-			/**< PMD-specific private data.
-			 *   @see rte_eth_dev_release_port()
-			 */
-
-	struct rte_eth_link dev_link;   /**< Link-level information & status. */
-	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
-	uint16_t mtu;                   /**< Maximum Transmission Unit. */
-	uint32_t min_rx_buf_size;
-			/**< Common RX buffer size handled by all queues. */
-
-	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
-	struct rte_ether_addr *mac_addrs;
-			/**< Device Ethernet link address.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
-			/**< Bitmap associating MAC addresses to pools. */
-	struct rte_ether_addr *hash_mac_addrs;
-			/**< Device Ethernet MAC addresses of hash filtering.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint16_t port_id;           /**< Device [external] port identifier. */
-
-	__extension__
-	uint8_t promiscuous   : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
-		scattered_rx : 1,  /**< RX of scattered packets is ON(1) / OFF(0) */
-		all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
-		dev_started : 1,   /**< Device state: STARTED(1) / STOPPED(0). */
-		lro         : 1,   /**< RX LRO is ON(1) / OFF(0) */
-		dev_configured : 1;
-		/**< Indicates whether the device is configured.
-		 *   CONFIGURED(1) / NOT CONFIGURED(0).
-		 */
-	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint32_t dev_flags;             /**< Capabilities. */
-	int numa_node;                  /**< NUMA node connection. */
-	struct rte_vlan_filter_conf vlan_filter_conf;
-			/**< VLAN filter configuration. */
-	struct rte_eth_dev_owner owner; /**< The port owner. */
-	uint16_t representor_id;
-			/**< Switch-specific identifier.
-			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
-			 */
-
-	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
 #endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 0881202381..3dc494a016 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -75,7 +75,6 @@ DPDK_22 {
 	rte_eth_dev_udp_tunnel_port_add;
 	rte_eth_dev_udp_tunnel_port_delete;
 	rte_eth_dev_vlan_filter;
-	rte_eth_devices;
 	rte_eth_find_next;
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
@@ -272,6 +271,7 @@ INTERNAL {
 	rte_eth_dev_release_port;
 	rte_eth_dev_internal_reset;
 	rte_eth_devargs_parse;
+	rte_eth_devices;
 	rte_eth_dma_zone_free;
 	rte_eth_dma_zone_reserve;
 	rte_eth_hairpin_queue_peer_bind;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
 #include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
  */
 #include <rte_spinlock.h>
 #include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "eventdev_pmd.h"
 #include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
 #include <rte_telemetry.h>
diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef613..5be21b2e86 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_string_fns.h>
 #ifdef RTE_LIB_TELEMETRY
 #include <telemetry_internal.h>
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-05  9:54         ` David Marchand
  2021-10-05 10:13           ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: David Marchand @ 2021-10-05  9:54 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, Xiaoyun Li, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, humin (Q),
	Yisen Zhuang, oulijun, Beilei Xing, Jingjing Wu, Qiming Yang,
	Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
	heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
	Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
	Chenbo, Thomas Monjalon, Yigit, Ferruh, Ray Kinsella,
	Jayatheerthan, Jay

On Mon, Oct 4, 2021 at 3:59 PM Konstantin Ananyev
<konstantin.ananyev@intel.com> wrote:
>
> Rework 'fast' burst functions to use rte_eth_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
> One extra thing to note - RX/TX callback invocation will cause extra
> function call with these changes. That might cause some insignificant
> slowdown for code-path where RX/TX callbacks are heavily involved.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/ethdev/ethdev_private.c |  31 +++++
>  lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
>  lib/ethdev/version.map      |   5 +
>  3 files changed, 210 insertions(+), 68 deletions(-)
>
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 3eeda6e9f9..27d29b2ac6 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>         fpo->txq.data = dev->data->tx_queues;
>         fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>  }
> +
> +uint16_t
> +__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
> +       struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> +       void *opaque)
> +{
> +       const struct rte_eth_rxtx_callback *cb = opaque;
> +
> +       while (cb != NULL) {
> +               nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> +                               nb_pkts, cb->param);
> +               cb = cb->next;
> +       }
> +
> +       return nb_rx;
> +}

This helper name is ambiguous.
Maybe the intent was to have a generic place holder for updates in
future releases.
But in this series, __rte_eth_rx_epilog is invoked only if a rx
callback is registered, under #ifdef RTE_ETHDEV_RXTX_CALLBACKS.

I'd prefer we call it a spade, i.e. rte_eth_call_rx_callbacks, and it
does not need to be advertised as internal.


> +
> +uint16_t
> +__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
> +       struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> +{
> +       const struct rte_eth_rxtx_callback *cb = opaque;
> +
> +       while (cb != NULL) {
> +               nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> +                               cb->param);
> +               cb = cb->next;
> +       }
> +
> +       return nb_pkts;
> +}

Idem, rte_eth_call_tx_callbacks.
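
For reference, these helpers are only reached once an application has actually
registered a callback; a minimal registration sketch (the callback and counter
names are purely illustrative) looks like:

#include <rte_common.h>
#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint64_t rx_count;

/* Illustrative Rx callback: just counts packets delivered on a queue. */
static uint16_t
count_rx_cb(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **pkts,
	    uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
{
	uint64_t *cnt = user_param;

	RTE_SET_USED(port_id);
	RTE_SET_USED(queue_id);
	RTE_SET_USED(pkts);
	RTE_SET_USED(max_pkts);
	*cnt += nb_pkts;
	return nb_pkts;
}

static int
setup_rx_counter(uint16_t port_id, uint16_t queue_id)
{
	/* Once this succeeds, rte_eth_rx_burst() on this queue goes through
	 * the RTE_ETHDEV_RXTX_CALLBACKS path, i.e. through the helper being
	 * discussed above.
	 */
	if (rte_eth_add_rx_callback(port_id, queue_id, count_rx_cb,
			&rx_count) == NULL)
		return -rte_errno;
	return 0;
}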


-- 
David Marchand


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-05 10:04         ` David Marchand
  2021-10-05 10:43           ` Ferruh Yigit
  2021-10-05 13:24         ` Thomas Monjalon
  1 sibling, 1 reply; 112+ messages in thread
From: David Marchand @ 2021-10-05 10:04 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, Xiaoyun Li, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, humin (Q),
	Yisen Zhuang, oulijun, Beilei Xing, Jingjing Wu, Qiming Yang,
	Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
	heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
	Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
	Chenbo, Thomas Monjalon, Yigit, Ferruh, Ray Kinsella,
	Jayatheerthan, Jay

On Mon, Oct 4, 2021 at 3:59 PM Konstantin Ananyev
<konstantin.ananyev@intel.com> wrote:
>
> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> data into private header (ethdev_driver.h).
> Few minor changes to keep DPDK building after that.

This change is going to hurt a lot of people :-).
But this is a necessary move.

$ git grep-all -lw rte_eth_devices |grep -v \\.patch$
ANS/ans/ans_main.c
BESS/core/drivers/pmd.cc
dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/qdma_xdebug.c
dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/rte_pmd_qdma.c
dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/pcierw.c
dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/testapp.c
FD.io-VPP/src/plugins/dpdk/device/format.c
lagopus/src/dataplane/dpdk/dpdk_io.c
OVS/lib/netdev-dpdk.c
packet-journey/app/kni.c
pktgen-dpdk/app/pktgen-port-cfg.c
pktgen-dpdk/app/pktgen-port-cfg.h
pktgen-dpdk/app/pktgen-stats.c
Trex/src/dpdk_funcs.c
Trex/src/drivers/trex_i40e_fdir.c
Trex/src/drivers/trex_ixgbe_fdir.c
TungstenFabric-vRouter/gdb/vr_dpdk.gdb


I did not check all projects for their uses of rte_eth_devices, but I
did the job for OVS.
If you have cycles to review...
https://patchwork.ozlabs.org/project/openvswitch/patch/20210907082343.16370-1-david.marchand@redhat.com/

One nit:

>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  doc/guides/rel_notes/release_21_11.rst        |   6 +
>  drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
>  drivers/net/cxgbe/base/adapter.h              |   2 +-
>  drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
>  drivers/net/netvsc/hn_var.h                   |   1 +
>  lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
>  lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
>  lib/ethdev/version.map                        |   2 +-
>  lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
>  lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
>  lib/eventdev/rte_eventdev.c                   |   2 +-
>  lib/metrics/rte_metrics_telemetry.c           |   2 +-
>  13 files changed, 165 insertions(+), 152 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 6055551443..2944149943 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -228,6 +228,12 @@ ABI Changes
>    to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
>    is used by  public inline function ``rte_eth_rx_queue_count``.
>
> +* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
> +  private data structures. ``rte_eth_devices[]`` can't be accessible directly

accessed*

> +  by user any more. While it is an ABI breakage, this change is intended
> +  to be transparent for both users (no changes in user app is required) and
> +  PMD developers (no changes in PMD is required).
> +
>
>  Known Issues
>  ------------



-- 
David Marchand


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array
  2021-10-05  9:54         ` David Marchand
@ 2021-10-05 10:13           ` Ananyev, Konstantin
  0 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-05 10:13 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Li, Xiaoyun, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
	Daley, John, Hyong Youb Kim, Zhang, Qi Z, Wang, Xiao W, humin (Q),
	Yisen Zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
	heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
	Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
	Chenbo, Thomas Monjalon, Yigit, Ferruh, Ray Kinsella,
	Jayatheerthan, Jay



> >
> > Rework 'fast' burst functions to use rte_eth_fp_ops[].
> > While it is an API/ABI breakage, this change is intended to be
> > transparent for both users (no changes in user app is required) and
> > PMD developers (no changes in PMD is required).
> > One extra thing to note - RX/TX callback invocation will cause extra
> > function call with these changes. That might cause some insignificant
> > slowdown for code-path where RX/TX callbacks are heavily involved.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >  lib/ethdev/ethdev_private.c |  31 +++++
> >  lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
> >  lib/ethdev/version.map      |   5 +
> >  3 files changed, 210 insertions(+), 68 deletions(-)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 3eeda6e9f9..27d29b2ac6 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> >         fpo->txq.data = dev->data->tx_queues;
> >         fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> >  }
> > +
> > +uint16_t
> > +__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
> > +       struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> > +       void *opaque)
> > +{
> > +       const struct rte_eth_rxtx_callback *cb = opaque;
> > +
> > +       while (cb != NULL) {
> > +               nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> > +                               nb_pkts, cb->param);
> > +               cb = cb->next;
> > +       }
> > +
> > +       return nb_rx;
> > +}
> 
> This helper name is ambiguous.
> Maybe the intent was to have a generic place holder for updates in
> future releases.

Yes, that was the intent.
We have an array of opaque pointers (one per queue).
So I thought a generic name would be better - who knows
how we might need to change this function and its parameters in the future.
 
> But in this series, __rte_eth_rx_epilog is invoked only if a rx
> callback is registered, under #ifdef RTE_ETHDEV_RXTX_CALLBACKS.

Hmm, yes, it implies that we'll invoke the callbacks underneath :)
 
> I'd prefer we call it a spade, i.e. rte_eth_call_rx_callbacks,

If there are no objections from other people - I am ok to rename it.

> and it
> does not need to be advertised as internal.

About internal vs public, I think Ferruh proposed the same.
I am not really fond of that, as:
if we declare it public, we will have an obligation to support it in future releases.
Plus it might encourage users to call it on their own, which I don't think is the right thing to do.

> 
> 
> > +
> > +uint16_t
> > +__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
> > +       struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> > +{
> > +       const struct rte_eth_rxtx_callback *cb = opaque;
> > +
> > +       while (cb != NULL) {
> > +               nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> > +                               cb->param);
> > +               cb = cb->next;
> > +       }
> > +
> > +       return nb_pkts;
> > +}
> 
> Idem, rte_eth_call_tx_callbacks.
> 
> 
> --
> David Marchand


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-05 10:04         ` David Marchand
@ 2021-10-05 10:43           ` Ferruh Yigit
  2021-10-05 11:37             ` David Marchand
  0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-05 10:43 UTC (permalink / raw)
  To: David Marchand, Konstantin Ananyev, Thomas Monjalon
  Cc: dev, Xiaoyun Li, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, humin (Q),
	Yisen Zhuang, oulijun, Beilei Xing, Jingjing Wu, Qiming Yang,
	Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
	heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
	Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
	Chenbo, Thomas Monjalon, Ray Kinsella, Jayatheerthan, Jay

On 10/5/2021 11:04 AM, David Marchand wrote:
> On Mon, Oct 4, 2021 at 3:59 PM Konstantin Ananyev
> <konstantin.ananyev@intel.com> wrote:
>>
>> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>> data into private header (ethdev_driver.h).
>> Few minor changes to keep DPDK building after that.
> 
> This change is going to hurt a lot of people :-).
> But this is a necessary move.
> 

+1 that it is a necessary move, but I am surprised to see how much 'rte_eth_devices'
is accessed directly.

Do you have any idea/suggestion on how we can reduce the pain for them?

> $ git grep-all -lw rte_eth_devices |grep -v \\.patch$
> ANS/ans/ans_main.c
> BESS/core/drivers/pmd.cc
> dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/qdma_xdebug.c
> dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/rte_pmd_qdma.c
> dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/pcierw.c
> dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/testapp.c
> FD.io-VPP/src/plugins/dpdk/device/format.c
> lagopus/src/dataplane/dpdk/dpdk_io.c
> OVS/lib/netdev-dpdk.c
> packet-journey/app/kni.c
> pktgen-dpdk/app/pktgen-port-cfg.c
> pktgen-dpdk/app/pktgen-port-cfg.h
> pktgen-dpdk/app/pktgen-stats.c
> Trex/src/dpdk_funcs.c
> Trex/src/drivers/trex_i40e_fdir.c
> Trex/src/drivers/trex_ixgbe_fdir.c
> TungstenFabric-vRouter/gdb/vr_dpdk.gdb
> 
> 
> I did not check all projects for their uses of rte_eth_devices, but I
> did the job for OVS.
> If you have cycles to review...
> https://patchwork.ozlabs.org/project/openvswitch/patch/20210907082343.16370-1-david.marchand@redhat.com/
> 
> One nit:
> 
>>
>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>   doc/guides/rel_notes/release_21_11.rst        |   6 +
>>   drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
>>   drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
>>   drivers/net/cxgbe/base/adapter.h              |   2 +-
>>   drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
>>   drivers/net/netvsc/hn_var.h                   |   1 +
>>   lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
>>   lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
>>   lib/ethdev/version.map                        |   2 +-
>>   lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
>>   lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
>>   lib/eventdev/rte_eventdev.c                   |   2 +-
>>   lib/metrics/rte_metrics_telemetry.c           |   2 +-
>>   13 files changed, 165 insertions(+), 152 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
>> index 6055551443..2944149943 100644
>> --- a/doc/guides/rel_notes/release_21_11.rst
>> +++ b/doc/guides/rel_notes/release_21_11.rst
>> @@ -228,6 +228,12 @@ ABI Changes
>>     to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
>>     is used by  public inline function ``rte_eth_rx_queue_count``.
>>
>> +* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
>> +  private data structures. ``rte_eth_devices[]`` can't be accessible directly
> 
> accessed*
> 
>> +  by user any more. While it is an ABI breakage, this change is intended
>> +  to be transparent for both users (no changes in user app is required) and
>> +  PMD developers (no changes in PMD is required).
>> +
>>
>>   Known Issues
>>   ------------
> 
> 
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-05 10:43           ` Ferruh Yigit
@ 2021-10-05 11:37             ` David Marchand
  2021-10-05 15:57               ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: David Marchand @ 2021-10-05 11:37 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Konstantin Ananyev, Thomas Monjalon, dev, Xiaoyun Li,
	Anoob Joseph, Jerin Jacob Kollanukkaran, Nithin Dabilpuram,
	Ankur Dwivedi, Shepard Siegel, Ed Czeck, John Miller,
	Igor Russkikh, Ajit Khaparde, Somnath Kotur, Rahul Lakkireddy,
	Hemant Agrawal, Sachin Saxena, Wang, Haiyue, John Daley,
	Hyong Youb Kim, Qi Zhang, Xiao Wang, humin (Q),
	Yisen Zhuang, oulijun, Beilei Xing, Jingjing Wu, Qiming Yang,
	Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
	heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
	Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
	Chenbo, Ray Kinsella, Jayatheerthan, Jay

On Tue, Oct 5, 2021 at 12:43 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> > This change is going to hurt a lot of people :-).
> > But this is a necessary move.
> >
>
> +1 that it is a necessary move, but I am surprised to see how much 'rte_eth_devices'
> is accessed directly.
>
> Do you have any idea/suggestion on how we can reduce the pain for them?

From what I see, ethdev iterators are probably something that is not
known enough.
rte_eth_dev_info_get() also fills some other spots.
I don't have a magic answer, people need to look at existing API.
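
A minimal sketch of what such a conversion could look like, using only
existing public API from <rte_ethdev.h> (RTE_ETH_FOREACH_DEV and
rte_eth_dev_info_get()); the loop body is purely illustrative:

    uint16_t port_id;

    /* Enumerate ports via the public iterator instead of walking
     * rte_eth_devices[] directly. */
    RTE_ETH_FOREACH_DEV(port_id) {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            continue;
        /* Fields such as dev_info.driver_name or dev_info.max_rx_queues
         * cover many of the direct dereferences seen in the list below. */
    }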


But I just scratched the surface, looking at rte_eth_devices[] accesses.
There might be other rte_eth_dev object dereferences (you get one
point calling rte_eth_dev_info_get) that my grep did not catch.


Details:

>
> > $ git grep-all -lw rte_eth_devices |grep -v \\.patch$
> > ANS/ans/ans_main.c

I think this code is just lagging behind what ethdev currently
provides/does wrt default offload config.
This code probably does not need to dereference rte_eth_devices[] to
query offloads configuration.


> > BESS/core/drivers/pmd.cc

ethdev iterators can replace those accesses.


> > dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/qdma_xdebug.c
> > dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/rte_pmd_qdma.c
> > dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/pcierw.c
> > dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/testapp.c

This is a DPDK clone, with an additional driver, so irrelevant.


> > FD.io-VPP/src/plugins/dpdk/device/format.c

rte_eth_rx_burst_mode_get() and rte_eth_tx_burst_mode_get() should do the job.
I wonder if those APIs were introduced in DPDK for VPP.. ?
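
For reference, a minimal sketch of how such a formatter could use this
API instead of reading the internal function pointers (port_id and
queue_id are assumed to refer to a configured queue; error handling
trimmed):

    struct rte_eth_burst_mode mode;

    /* Ask the PMD to describe its selected Rx burst function. */
    if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) == 0)
        printf("port %u rxq %u: %s\n", port_id, queue_id, mode.info);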


> > lagopus/src/dataplane/dpdk/dpdk_io.c

Idem, ethdev iterators and rte_eth_dev_info_get() instead of direct
access for dev_flags.


> > OVS/lib/netdev-dpdk.c

For OVS, it was ethdev iterators + rte_eth_dev_info_get() where necessary.


> > packet-journey/app/kni.c

There might be something missing in current ethdev API.
This app wants to know if device is started... but on the other hand,
that's probably something the app tracks itself.


> > pktgen-dpdk/app/pktgen-port-cfg.c
> > pktgen-dpdk/app/pktgen-port-cfg.h
> > pktgen-dpdk/app/pktgen-stats.c

Accesses to offload configuration which I think are unneeded (like ANS).
Direct access for dev_flags, can be replaced with rte_eth_dev_info_get.
Direct access for name, can be replaced with rte_eth_dev_info_get.
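
For the device name specifically, rte_eth_dev_get_name_by_port() is
another existing option (just a sketch, not necessarily what is meant
above):

    char name[RTE_ETH_NAME_MAX_LEN];

    /* Retrieve the port name through public API instead of reading
     * dev->data->name from the internal structure. */
    if (rte_eth_dev_get_name_by_port(port_id, name) == 0)
        printf("port %u: %s\n", port_id, name);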


> > Trex/src/dpdk_funcs.c
> > Trex/src/drivers/trex_i40e_fdir.c
> > Trex/src/drivers/trex_ixgbe_fdir.c

Here, there is some horror.
Directly casting and accessing hardware:
    struct rte_eth_dev *dev = &rte_eth_devices[repid];
        struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);

        I40E_WRITE_REG(hw, I40E_GLQF_ORT(12), 0x00000062);
        I40E_WRITE_REG(hw, I40E_GLQF_PIT(2), 0x000024A0);
        I40E_WRITE_REG(hw,
I40E_PRTQF_FD_INSET(I40E_FILTER_PCTYPE_NONF_IPV4_UDP, 0), 0);
etc...

This code probably bypasses too much of dpdk API, I stopped at this.


> > TungstenFabric-vRouter/gdb/vr_dpdk.gdb

Mm, interesting, this part displays DPDK internals from gdb.
That's something I have in my todolist for a long time, providing some
common gdb scripts in DPDK...


-- 
David Marchand


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array
  2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
@ 2021-10-05 12:09         ` Thomas Monjalon
  2021-10-05 16:45           ` Ananyev, Konstantin
  2021-10-05 12:21         ` Thomas Monjalon
  1 sibling, 1 reply; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 12:09 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, ferruh.yigit, mdr, jay.jayatheerthan

04/10/2021 15:55, Konstantin Ananyev:
> At queue configure stage always allocate space for maximum possible
> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> pointer to internal queue data without extra checking of current number
> of configured queues.

What is the memory usage overhead per port?
We should consider cases with thousands of virtual ports.




^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array
  2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
  2021-10-05 12:09         ` Thomas Monjalon
@ 2021-10-05 12:21         ` Thomas Monjalon
  1 sibling, 0 replies; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 12:21 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, ferruh.yigit, mdr, jay.jayatheerthan

04/10/2021 15:55, Konstantin Ananyev:
> At queue configure stage always allocate space for maximum possible
> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.

The problem with this max is that it cannot be changed dynamically.
That could be another patch, but I would like to see this number become
a default max which can be changed with an init function called
before any other configuration.




^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
@ 2021-10-05 13:09         ` Thomas Monjalon
  2021-10-05 16:41           ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 13:09 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, ferruh.yigit, mdr, jay.jayatheerthan

04/10/2021 15:55, Konstantin Ananyev:
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a
> separate flat array. That array will remain in a public header.
> The intention here is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> The plan is to keep minimal part of data from rte_eth_dev public,
> so we still can use inline functions for 'fast' calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.

I don't understand why 'fast' is quoted.
It looks strange.


> +/* reset eth 'fast' API to dummy values */
> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> +
> +/* setup eth 'fast' API to ethdev values */
> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> +		const struct rte_eth_dev *dev);

I assume "fp" stands for fast path.
Please write "fast path" completely in the comments.

> +	/* expose selection of PMD rx/tx function */
> +	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
[...]
> +	/* point rx/tx functions to dummy ones */
> +	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);

Nit: Rx/Tx
or could be "fast path", to be consistent.

> +	/*
> +	 * for secondary process, at that point we expect device
> +	 * to be already 'usable', so shared data and all function pointers
> +	 * for 'fast' devops have to be setup properly inside rte_eth_dev.
> +	 */
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> +		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> +
>  	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
>  
>  	dev->state = RTE_ETH_DEV_ATTACHED;
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 948c0b71c1..fe47a660c7 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>  /**< @internal Check the status of a Tx descriptor */
>  
> +/**
> + * @internal
> + * Structure used to hold opaque pointernals to internal ethdev RX/TXi

typos in above line

> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for 'fast' ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> +	void **data;
> +	/**< points to array of internal queue data pointers */
> +	void **clbk;
> +	/**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> + * 'fast' ethdev funcions and related data are hold in a flat array.
> + * one entry per ethdev.
> + */
> +struct rte_eth_fp_ops {
> +
> +	/** first 64B line */
> +	eth_rx_burst_t rx_pkt_burst;
> +	/**< PMD receive function. */
> +	eth_tx_burst_t tx_pkt_burst;
> +	/**< PMD transmit function. */
> +	eth_tx_prep_t tx_pkt_prepare;
> +	/**< PMD transmit prepare function. */
> +	eth_rx_queue_count_t rx_queue_count;
> +	/**< Get the number of used RX descriptors. */
> +	eth_rx_descriptor_status_t rx_descriptor_status;
> +	/**< Check the status of a Rx descriptor. */
> +	eth_tx_descriptor_status_t tx_descriptor_status;
> +	/**< Check the status of a Tx descriptor. */
> +	uintptr_t reserved[2];

uintptr_t size is not fixed.
I think you mean uint64_t.

> +
> +	/** second 64B line */
> +	struct rte_ethdev_qdata rxq;
> +	struct rte_ethdev_qdata txq;
> +	uintptr_t reserved2[4];
> +
> +} __rte_cache_aligned;
> +
> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];




^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
@ 2021-10-05 13:13         ` Thomas Monjalon
  2021-10-05 16:35           ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 13:13 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, ferruh.yigit, mdr, jay.jayatheerthan

04/10/2021 15:56, Konstantin Ananyev:
> Introduce rte_eth_macaddrs_get() to allow user to retrieve all ethernet
> addresses assigned to given port.

We already have functions to get MAC addresses.
Please explain the difference.

[...]
> +* **Add new function into ethdev lib.**
> +
> +  * Added ``rte_eth_macaddrs_get`` to allow user to retrieve all Ethernet
> +    addresses aasigned to given ethernet port.

typo above

> +/**
> + * Retrieve the Ethernet addresses of an Ethernet device.
> + *
> + * @param port_id
> + *   The port identifier of the Ethernet device.
> + * @param ma
> + *   A pointer to an array of structures of type *ether_addr* to be filled with
> + *   the Ethernet addresses of the Ethernet device.
> + * @param num
> + *   Number of elements in the *ma* array.
> + * @return
> + *   - number of retrieved addresses if successful
> + *   - (-ENODEV) if *port_id* invalid.
> + *   - (-EINVAL) if bad parameter.

Which error if the array is too small?

> + */
> +__rte_experimental
> +int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[],

Please don't use array syntax in parameters, it should be a pointer.
How do we get the number of returned addresses?

> +	uint32_t num);

Another approach would be to get addresses one by one by passing an index.




^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 6/7] ethdev: remove legacy Rx descriptor done API
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 6/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
@ 2021-10-05 13:14         ` Thomas Monjalon
  2021-10-05 16:21           ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 13:14 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, ferruh.yigit, mdr, jay.jayatheerthan

04/10/2021 15:56, Konstantin Ananyev:
> rte_eth_rx_descriptor_status() should be used as a replacement.
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

It should be the first patch, or even a standalone patch to apply quickly.




^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
  2021-10-05 10:04         ` David Marchand
@ 2021-10-05 13:24         ` Thomas Monjalon
  2021-10-05 16:19           ` Ananyev, Konstantin
  1 sibling, 1 reply; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 13:24 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, ferruh.yigit, mdr, jay.jayatheerthan

04/10/2021 15:56, Konstantin Ananyev:
> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> data into private header (ethdev_driver.h).
[...]
> +/**
> + * @internal
> + * Structure used to hold information about the callbacks to be called for a
> + * queue on RX and TX.
> + */
> +struct rte_eth_rxtx_callback {
> +	struct rte_eth_rxtx_callback *next;
> +	union{
> +		rte_rx_callback_fn rx;
> +		rte_tx_callback_fn tx;
> +	} fn;
> +	void *param;
> +};
> +
> +/**
> + * @internal
> + * The generic data structure associated with each ethernet device.
> + *
> + * Pointers to burst-oriented packet receive and transmit functions are
> + * located at the beginning of the structure, along with the pointer to
> + * where all the data elements for the particular device are stored in shared
> + * memory. This split allows the function pointer and driver data to be per-
> + * process, while the actual configuration data for the device is shared.
> + */
> +struct rte_eth_dev {
> +	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
> +	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
> +	eth_tx_prep_t tx_pkt_prepare;
> +	/**< Pointer to PMD transmit prepare function. */
> +	eth_rx_queue_count_t rx_queue_count;
> +	/**< Get the number of used RX descriptors. */
> +	eth_rx_descriptor_status_t rx_descriptor_status;
> +	/**< Check the status of a Rx descriptor. */
> +	eth_tx_descriptor_status_t tx_descriptor_status;
> +	/**< Check the status of a Tx descriptor. */

Why not using the new struct rte_eth_fp_ops?

> +
> +	/**
> +	 * Next two fields are per-device data but *data is shared between
> +	 * primary and secondary processes and *process_private is per-process
> +	 * private. The second one is managed by PMDs if necessary.
> +	 */
> +	struct rte_eth_dev_data *data;  /**< Pointer to device data. */

We should mention that "data" is shared between processes.

> +	void *process_private; /**< Pointer to per-process device data. */
> +	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
> +	struct rte_device *device; /**< Backing device */
> +	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
> +	/** User application callbacks for NIC interrupts */
> +	struct rte_eth_dev_cb_list link_intr_cbs;
> +	/**
> +	 * User-supplied functions called from rx_burst to post-process
> +	 * received packets before passing them to the user
> +	 */
> +	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> +	/**
> +	 * User-supplied functions called from tx_burst to pre-process
> +	 * received packets before passing them to the driver for transmission.
> +	 */
> +	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> +	enum rte_eth_dev_state state; /**< Flag indicating the port state */
> +	void *security_ctx; /**< Context for security ops */
> +
> +	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> +	void *reserved_ptrs[4];   /**< Reserved for future fields */
> +} __rte_cache_aligned;
> +
> +struct rte_eth_dev_sriov;
> +struct rte_eth_dev_owner;
> +
> +/**
> + * @internal
> + * The data part, with no function pointers, associated with each ethernet
> + * device. This structure is safe to place in shared memory to be common
> + * among different processes in a multi-process configuration.
> + */
> +struct rte_eth_dev_data {
> +	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
> +
> +	void **rx_queues; /**< Array of pointers to RX queues. */
> +	void **tx_queues; /**< Array of pointers to TX queues. */
> +	uint16_t nb_rx_queues; /**< Number of RX queues. */
> +	uint16_t nb_tx_queues; /**< Number of TX queues. */
> +
> +	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
> +
> +	void *dev_private;
> +			/**< PMD-specific private data.
> +			 *   @see rte_eth_dev_release_port()
> +			 */
> +
> +	struct rte_eth_link dev_link;   /**< Link-level information & status. */
> +	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
> +	uint16_t mtu;                   /**< Maximum Transmission Unit. */
> +	uint32_t min_rx_buf_size;
> +			/**< Common RX buffer size handled by all queues. */
> +
> +	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
> +	struct rte_ether_addr *mac_addrs;
> +			/**< Device Ethernet link address.
> +			 *   @see rte_eth_dev_release_port()
> +			 */
> +	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
> +			/**< Bitmap associating MAC addresses to pools. */
> +	struct rte_ether_addr *hash_mac_addrs;
> +			/**< Device Ethernet MAC addresses of hash filtering.
> +			 *   @see rte_eth_dev_release_port()
> +			 */
> +	uint16_t port_id;           /**< Device [external] port identifier. */
> +
> +	__extension__
> +	uint8_t promiscuous   : 1,
> +		/**< RX promiscuous mode ON(1) / OFF(0). */
> +		scattered_rx : 1,
> +		/**< RX of scattered packets is ON(1) / OFF(0) */
> +		all_multicast : 1,
> +		/**< RX all multicast mode ON(1) / OFF(0). */
> +		dev_started : 1,
> +		/**< Device state: STARTED(1) / STOPPED(0). */
> +		lro         : 1,
> +		/**< RX LRO is ON(1) / OFF(0) */
> +		dev_configured : 1;
> +		/**< Indicates whether the device is configured.
> +		 *   CONFIGURED(1) / NOT CONFIGURED(0).
> +		 */
> +	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
> +		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
> +	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
> +		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
> +	uint32_t dev_flags;             /**< Capabilities. */
> +	int numa_node;                  /**< NUMA node connection. */
> +	struct rte_vlan_filter_conf vlan_filter_conf;
> +			/**< VLAN filter configuration. */
> +	struct rte_eth_dev_owner owner; /**< The port owner. */
> +	uint16_t representor_id;
> +			/**< Switch-specific identifier.
> +			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> +			 */
> +
> +	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
> +	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> +	void *reserved_ptrs[4];   /**< Reserved for future fields */
> +} __rte_cache_aligned;
> +
> +/**
> + * @internal
> + * The pool of *rte_eth_dev* structures. The size of the pool
> + * is configured at compile-time in the <rte_ethdev.c> file.
> + */
> +extern struct rte_eth_dev rte_eth_devices[];

Later we should add a function to configure the size of this array dynamically
in the early DPDK init stage.



^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-05 11:37             ` David Marchand
@ 2021-10-05 15:57               ` Ananyev, Konstantin
  0 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-05 15:57 UTC (permalink / raw)
  To: David Marchand, Yigit, Ferruh
  Cc: Thomas Monjalon, dev, Li, Xiaoyun, Anoob Joseph,
	Jerin Jacob Kollanukkaran, Nithin Dabilpuram, Ankur Dwivedi,
	Shepard Siegel, Ed Czeck, John Miller, Igor Russkikh,
	Ajit Khaparde, Somnath Kotur, Rahul Lakkireddy, Hemant Agrawal,
	Sachin Saxena, Wang, Haiyue, Daley, John, Hyong Youb Kim, Zhang,
	Qi Z, Wang, Xiao W, humin (Q),
	Yisen Zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
	heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
	Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
	Chenbo, Ray Kinsella, Jayatheerthan, Jay



> 
> On Tue, Oct 5, 2021 at 12:43 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> > > This change is going to hurt a lot of people :-).
> > > But this is a necessary move.
> > >
> >
> > +1 that it is a necessary move, but I am surprised to see how much 'rte_eth_devices'
> > is accessed directly.
> >
> > Do you have any idea/suggestion on how we can reduce the pain for them?
> 
> From what I see, ethdev iterators are probably something that is not
> known enough.
> rte_eth_dev_info_get() also fills some other spots.
> I don't have a magic answer, people need to look at existing API.
> 
> 
> But I just scratched the surface, looking at rte_eth_devices[] accesses.
> There might be other rte_eth_dev object dereferences (you get one
> point calling rte_eth_dev_info_get) that my grep did not catch.
> 

Indeed, that's quite a lot...
Thanks a lot for such a detailed list, very interesting.
I wonder whether we should give these projects a heads-up about the coming changes...
Or maybe the interested people are already aware (by reading dev@dpdk.org or so).

> Details:
> 
> >
> > > $ git grep-all -lw rte_eth_devices |grep -v \\.patch$
> > > ANS/ans/ans_main.c
> 
> I think this code is just lagging behind what ethdev currently
> provides/does wrt default offload config.
> This code probably does not need to dereference rte_eth_devices[] to
> query offloads configuration.
> 
> 
> > > BESS/core/drivers/pmd.cc
> 
> ethdev iterators can replace those accesses.
> 
> 
> > > dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/qdma_xdebug.c
> > > dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/rte_pmd_qdma.c
> > > dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/pcierw.c
> > > dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/testapp.c
> 
> This is a DPDK clone, with an additional driver, so irrelevant.
> 
> 
> > > FD.io-VPP/src/plugins/dpdk/device/format.c
> 
> rte_eth_rx_burst_mode_get() and rte_eth_tx_burst_mode_get() should do the job.
> I wonder if those APIs were introduced in DPDK for VPP.. ?
> 
> 
> > > lagopus/src/dataplane/dpdk/dpdk_io.c
> 
> Idem, ethdev iterators and rte_eth_dev_info_get() instead of direct
> access for dev_flags.
> 
> 
> > > OVS/lib/netdev-dpdk.c
> 
> For OVS, it was ethdev iterators + rte_eth_dev_info_get() where necessary.
> 
> 
> > > packet-journey/app/kni.c
> 
> There might be something missing in current ethdev API.
> This app wants to know if device is started... but on the other hand,
> that's probably something the app tracks itself.
> 
> 
> > > pktgen-dpdk/app/pktgen-port-cfg.c
> > > pktgen-dpdk/app/pktgen-port-cfg.h
> > > pktgen-dpdk/app/pktgen-stats.c
> 
> Accesses to offload configuration which I think are unneeded (like ANS).
> Direct access for dev_flags, can be replaced with rte_eth_dev_info_get.
> Direct access for name, can be replaced with rte_eth_dev_info_get.
> 
> 
> > > Trex/src/dpdk_funcs.c
> > > Trex/src/drivers/trex_i40e_fdir.c
> > > Trex/src/drivers/trex_ixgbe_fdir.c
> 
> Here, there is some horror.
> Directly casting and accessing hardware:
>     struct rte_eth_dev *dev = &rte_eth_devices[repid];
>         struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> 
>         I40E_WRITE_REG(hw, I40E_GLQF_ORT(12), 0x00000062);
>         I40E_WRITE_REG(hw, I40E_GLQF_PIT(2), 0x000024A0);
>         I40E_WRITE_REG(hw,
> I40E_PRTQF_FD_INSET(I40E_FILTER_PCTYPE_NONF_IPV4_UDP, 0), 0);
> etc...
> 
> This code probably bypasses too much of dpdk API, I stopped at this.
> 
> 
> > > TungstenFabric-vRouter/gdb/vr_dpdk.gdb
> 
> Mm, interesting, this part displays DPDK internals from gdb.
> That's something I have in my todolist for a long time, providing some
> common gdb scripts in DPDK...
> 
> 
> --
> David Marchand


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-05 13:24         ` Thomas Monjalon
@ 2021-10-05 16:19           ` Ananyev, Konstantin
  2021-10-05 16:25             ` Thomas Monjalon
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-05 16:19 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay



 
> 04/10/2021 15:56, Konstantin Ananyev:
> > Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> > data into private header (ethdev_driver.h).
> [...]
> > +/**
> > + * @internal
> > + * Structure used to hold information about the callbacks to be called for a
> > + * queue on RX and TX.
> > + */
> > +struct rte_eth_rxtx_callback {
> > +	struct rte_eth_rxtx_callback *next;
> > +	union{
> > +		rte_rx_callback_fn rx;
> > +		rte_tx_callback_fn tx;
> > +	} fn;
> > +	void *param;
> > +};
> > +
> > +/**
> > + * @internal
> > + * The generic data structure associated with each ethernet device.
> > + *
> > + * Pointers to burst-oriented packet receive and transmit functions are
> > + * located at the beginning of the structure, along with the pointer to
> > + * where all the data elements for the particular device are stored in shared
> > + * memory. This split allows the function pointer and driver data to be per-
> > + * process, while the actual configuration data for the device is shared.
> > + */
> > +struct rte_eth_dev {
> > +	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
> > +	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
> > +	eth_tx_prep_t tx_pkt_prepare;
> > +	/**< Pointer to PMD transmit prepare function. */
> > +	eth_rx_queue_count_t rx_queue_count;
> > +	/**< Get the number of used RX descriptors. */
> > +	eth_rx_descriptor_status_t rx_descriptor_status;
> > +	/**< Check the status of a Rx descriptor. */
> > +	eth_tx_descriptor_status_t tx_descriptor_status;
> > +	/**< Check the status of a Tx descriptor. */
> 
> Why not using the new struct rte_eth_fp_ops?

We don't want to change each and every driver for this change.
The idea behind it (a rough sketch follows the list below):
1. PMDs keep setting up fast-path function pointers and related data
    inside the rte_eth_dev struct in the same way they did before.
2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
   (for secondary process) we call eth_dev_fp_ops_setup, which
   copies these function and data pointers into rte_eth_fp_ops[port_id].
3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
    into some dummy values.
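
A rough sketch of what the copy in step 2 could look like, put together
only from the structures quoted in this thread (the actual helper in
the patch may differ in details):

void
eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo, const struct rte_eth_dev *dev)
{
    /* mirror the per-device fast-path pointers into the flat array */
    fpo->rx_pkt_burst = dev->rx_pkt_burst;
    fpo->tx_pkt_burst = dev->tx_pkt_burst;
    fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
    fpo->rx_queue_count = dev->rx_queue_count;
    fpo->rx_descriptor_status = dev->rx_descriptor_status;
    fpo->tx_descriptor_status = dev->tx_descriptor_status;

    /* expose queue data and per-queue callback arrays */
    fpo->rxq.data = dev->data->rx_queues;
    fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
    fpo->txq.data = dev->data->tx_queues;
    fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
}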

> 
> > +
> > +	/**
> > +	 * Next two fields are per-device data but *data is shared between
> > +	 * primary and secondary processes and *process_private is per-process
> > +	 * private. The second one is managed by PMDs if necessary.
> > +	 */
> > +	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
> 
> We should mention that "data" is shared between processes.

I think the comment above states exactly that.
In fact, it is just cut and paste from lib/ethdev/rte_ethdev_core.h to 
lib/ethdev/ethdev_driver.h.

> 
> > +	void *process_private; /**< Pointer to per-process device data. */
> > +	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
> > +	struct rte_device *device; /**< Backing device */
> > +	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
> > +	/** User application callbacks for NIC interrupts */
> > +	struct rte_eth_dev_cb_list link_intr_cbs;
> > +	/**
> > +	 * User-supplied functions called from rx_burst to post-process
> > +	 * received packets before passing them to the user
> > +	 */
> > +	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> > +	/**
> > +	 * User-supplied functions called from tx_burst to pre-process
> > +	 * received packets before passing them to the driver for transmission.
> > +	 */
> > +	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> > +	enum rte_eth_dev_state state; /**< Flag indicating the port state */
> > +	void *security_ctx; /**< Context for security ops */
> > +
> > +	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> > +	void *reserved_ptrs[4];   /**< Reserved for future fields */
> > +} __rte_cache_aligned;
> > +
> > +struct rte_eth_dev_sriov;
> > +struct rte_eth_dev_owner;
> > +
> > +/**
> > + * @internal
> > + * The data part, with no function pointers, associated with each ethernet
> > + * device. This structure is safe to place in shared memory to be common
> > + * among different processes in a multi-process configuration.
> > + */
> > +struct rte_eth_dev_data {
> > +	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
> > +
> > +	void **rx_queues; /**< Array of pointers to RX queues. */
> > +	void **tx_queues; /**< Array of pointers to TX queues. */
> > +	uint16_t nb_rx_queues; /**< Number of RX queues. */
> > +	uint16_t nb_tx_queues; /**< Number of TX queues. */
> > +
> > +	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
> > +
> > +	void *dev_private;
> > +			/**< PMD-specific private data.
> > +			 *   @see rte_eth_dev_release_port()
> > +			 */
> > +
> > +	struct rte_eth_link dev_link;   /**< Link-level information & status. */
> > +	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
> > +	uint16_t mtu;                   /**< Maximum Transmission Unit. */
> > +	uint32_t min_rx_buf_size;
> > +			/**< Common RX buffer size handled by all queues. */
> > +
> > +	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
> > +	struct rte_ether_addr *mac_addrs;
> > +			/**< Device Ethernet link address.
> > +			 *   @see rte_eth_dev_release_port()
> > +			 */
> > +	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
> > +			/**< Bitmap associating MAC addresses to pools. */
> > +	struct rte_ether_addr *hash_mac_addrs;
> > +			/**< Device Ethernet MAC addresses of hash filtering.
> > +			 *   @see rte_eth_dev_release_port()
> > +			 */
> > +	uint16_t port_id;           /**< Device [external] port identifier. */
> > +
> > +	__extension__
> > +	uint8_t promiscuous   : 1,
> > +		/**< RX promiscuous mode ON(1) / OFF(0). */
> > +		scattered_rx : 1,
> > +		/**< RX of scattered packets is ON(1) / OFF(0) */
> > +		all_multicast : 1,
> > +		/**< RX all multicast mode ON(1) / OFF(0). */
> > +		dev_started : 1,
> > +		/**< Device state: STARTED(1) / STOPPED(0). */
> > +		lro         : 1,
> > +		/**< RX LRO is ON(1) / OFF(0) */
> > +		dev_configured : 1;
> > +		/**< Indicates whether the device is configured.
> > +		 *   CONFIGURED(1) / NOT CONFIGURED(0).
> > +		 */
> > +	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
> > +		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
> > +	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
> > +		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
> > +	uint32_t dev_flags;             /**< Capabilities. */
> > +	int numa_node;                  /**< NUMA node connection. */
> > +	struct rte_vlan_filter_conf vlan_filter_conf;
> > +			/**< VLAN filter configuration. */
> > +	struct rte_eth_dev_owner owner; /**< The port owner. */
> > +	uint16_t representor_id;
> > +			/**< Switch-specific identifier.
> > +			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> > +			 */
> > +
> > +	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
> > +	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> > +	void *reserved_ptrs[4];   /**< Reserved for future fields */
> > +} __rte_cache_aligned;
> > +
> > +/**
> > + * @internal
> > + * The pool of *rte_eth_dev* structures. The size of the pool
> > + * is configured at compile-time in the <rte_ethdev.c> file.
> > + */
> > +extern struct rte_eth_dev rte_eth_devices[];
> 
> Later we should add a function to configure the size of this array dynamically
> in the early DPDK init stage.

Once we hide rte_eth_devices[] and friends, we should be able to do
whatever we want with them.
But I suppose that should be the subject of a separate patch/discussion,
probably not in the 21.11 timeframe.




^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 6/7] ethdev: remove legacy Rx descriptor done API
  2021-10-05 13:14         ` Thomas Monjalon
@ 2021-10-05 16:21           ` Ananyev, Konstantin
  0 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-05 16:21 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay


> 
> 04/10/2021 15:56, Konstantin Ananyev:
> > rte_eth_rx_descriptor_status() should be used as a replacement.
> >
> > Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
> It should be the first patch, or even a standalone patch to apply quickly.

I am fine either way.
If you apply it before I prepare v5, I can remove it from the series;
otherwise I will move it to the top.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-05 16:19           ` Ananyev, Konstantin
@ 2021-10-05 16:25             ` Thomas Monjalon
  0 siblings, 0 replies; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 16:25 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay

05/10/2021 18:19, Ananyev, Konstantin:
> > 04/10/2021 15:56, Konstantin Ananyev:
> > > Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> > > data into private header (ethdev_driver.h).
> > [...]
> > > +/**
> > > + * @internal
> > > + * Structure used to hold information about the callbacks to be called for a
> > > + * queue on RX and TX.
> > > + */
> > > +struct rte_eth_rxtx_callback {
> > > +	struct rte_eth_rxtx_callback *next;
> > > +	union{
> > > +		rte_rx_callback_fn rx;
> > > +		rte_tx_callback_fn tx;
> > > +	} fn;
> > > +	void *param;
> > > +};
> > > +
> > > +/**
> > > + * @internal
> > > + * The generic data structure associated with each ethernet device.
> > > + *
> > > + * Pointers to burst-oriented packet receive and transmit functions are
> > > + * located at the beginning of the structure, along with the pointer to
> > > + * where all the data elements for the particular device are stored in shared
> > > + * memory. This split allows the function pointer and driver data to be per-
> > > + * process, while the actual configuration data for the device is shared.
> > > + */
> > > +struct rte_eth_dev {
> > > +	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
> > > +	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
> > > +	eth_tx_prep_t tx_pkt_prepare;
> > > +	/**< Pointer to PMD transmit prepare function. */
> > > +	eth_rx_queue_count_t rx_queue_count;
> > > +	/**< Get the number of used RX descriptors. */
> > > +	eth_rx_descriptor_status_t rx_descriptor_status;
> > > +	/**< Check the status of a Rx descriptor. */
> > > +	eth_tx_descriptor_status_t tx_descriptor_status;
> > > +	/**< Check the status of a Tx descriptor. */
> > 
> > Why not using the new struct rte_eth_fp_ops?
> 
> We don't want to change each and every driver for this change.
> The idea behind it:
> 1. PMDs keep setting up fast-path function pointers and related data
>     inside the rte_eth_dev struct in the same way they did before.
> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>    (for secondary process) we call eth_dev_fp_ops_setup, which
>    copies these function and data pointers into rte_eth_fp_ops[port_id].
> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>     we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>     into some dummy values.

OK please add this explanation in the commit log.

> > > +
> > > +	/**
> > > +	 * Next two fields are per-device data but *data is shared between
> > > +	 * primary and secondary processes and *process_private is per-process
> > > +	 * private. The second one is managed by PMDs if necessary.
> > > +	 */
> > > +	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
> > 
> > We should mention that "data" is shared between processes.
> 
> I think the comment above states exactly that.
> In fact, it is just cut and paste from lib/ethdev/rte_ethdev_core.h to 
> lib/ethdev/ethdev_driver.h.

True, but it is confusing, and we cannot have 2 comments for the same field.
The sentence "Next two fields are per-device data" is useless.
Let's comment each field separately.

> > > +	void *process_private; /**< Pointer to per-process device data. */
> > > +	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
> > > +	struct rte_device *device; /**< Backing device */
> > > +	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
> > > +	/** User application callbacks for NIC interrupts */
> > > +	struct rte_eth_dev_cb_list link_intr_cbs;
> > > +	/**
> > > +	 * User-supplied functions called from rx_burst to post-process
> > > +	 * received packets before passing them to the user
> > > +	 */
> > > +	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> > > +	/**
> > > +	 * User-supplied functions called from tx_burst to pre-process
> > > +	 * received packets before passing them to the driver for transmission.
> > > +	 */
> > > +	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> > > +	enum rte_eth_dev_state state; /**< Flag indicating the port state */
> > > +	void *security_ctx; /**< Context for security ops */
> > > +
> > > +	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> > > +	void *reserved_ptrs[4];   /**< Reserved for future fields */
> > > +} __rte_cache_aligned;
[...]
> > > +extern struct rte_eth_dev rte_eth_devices[];
> > 
> > Later we should add a function to configure the size of this array dynamically
> > in the early DPDK init stage.
> 
> Once we hide rte_eth_devices[] and friends, we should be able to do
> whatever we want with them.
> But I suppose that should be the subject of a separate patch/discussion,
> probably not in the 21.11 timeframe.

Yes



^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses
  2021-10-05 13:13         ` Thomas Monjalon
@ 2021-10-05 16:35           ` Ananyev, Konstantin
  2021-10-05 16:45             ` Thomas Monjalon
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-05 16:35 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay



> 
> 04/10/2021 15:56, Konstantin Ananyev:
> > Introduce rte_eth_macaddrs_get() to allow user to retrieve all ethernet
> > addresses assigned to given port.
> 
> We already have functions to get MAC addresses.
> Please explain the difference.

rte_eth_macaddr_get() returns just the first (primary) MAC address
assigned to the port.
The new one allows the user to retrieve all addresses assigned to the port.
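
For illustration, a usage sketch based on the prototype proposed in
this patch (the 64-entry array size is an arbitrary choice for the
example):

    struct rte_ether_addr addrs[64];
    int i, n;

    n = rte_eth_macaddrs_get(port_id, addrs, RTE_DIM(addrs));
    for (i = 0; i < n; i++) {
        char buf[RTE_ETHER_ADDR_FMT_SIZE];

        rte_ether_format_addr(buf, sizeof(buf), &addrs[i]);
        printf("port %u MAC #%d: %s\n", port_id, i, buf);
    }
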
> 
> [...]
> > +* **Add new function into ethdev lib.**
> > +
> > +  * Added ``rte_eth_macaddrs_get`` to allow user to retrieve all Ethernet
> > +    addresses aasigned to given ethernet port.
> 
> typo above
> 
> > +/**
> > + * Retrieve the Ethernet addresses of an Ethernet device.
> > + *
> > + * @param port_id
> > + *   The port identifier of the Ethernet device.
> > + * @param ma
> > + *   A pointer to an array of structures of type *ether_addr* to be filled with
> > + *   the Ethernet addresses of the Ethernet device.
> > + * @param num
> > + *   Number of elements in the *ma* array.
> > + * @return
> > + *   - number of retrieved addresses if successful
> > + *   - (-ENODEV) if *port_id* invalid.
> > + *   - (-EINVAL) if bad parameter.
> 
> Which error if the array is too small?

None, we just return up to *num* addresses, that's it.
 
> > + */
> > +__rte_experimental
> > +int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[],
> 
> Please don't use array syntax in parameters, it should be a pointer.
> How do we get the number of returned addresses?
> 
> > +	uint32_t num);

From above:
 * @return
 *   - number of retrieved addresses if successful
 
> Another approach would be to get addresses one by one by passing an index.
> 

Yes, it is another possible way.
Though the current one seems a bit more convenient to me.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-05 13:09         ` Thomas Monjalon
@ 2021-10-05 16:41           ` Ananyev, Konstantin
  2021-10-05 16:48             ` Thomas Monjalon
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-05 16:41 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay

> 04/10/2021 15:55, Konstantin Ananyev:
> > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > pointers to internal data from rte_eth_dev structure into a
> > separate flat array. That array will remain in a public header.
> > The intention here is to make rte_eth_dev and related structures internal.
> > That should allow future possible changes to core eth_dev structures
> > to be transparent to the user and help to avoid ABI/API breakages.
> > The plan is to keep minimal part of data from rte_eth_dev public,
> > so we still can use inline functions for 'fast' calls
> > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> 
> I don't understand why 'fast' is quoted.
> It looks strange.
> 
> 
> > +/* reset eth 'fast' API to dummy values */
> > +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> > +
> > +/* setup eth 'fast' API to ethdev values */
> > +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > +		const struct rte_eth_dev *dev);
> 
> I assume "fp" stands for fast path.

Yes.

> Please write "fast path" completely in the comments.

Ok.

> > +	/* expose selection of PMD rx/tx function */
> > +	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> [...]
> > +	/* point rx/tx functions to dummy ones */
> > +	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
> 
> Nit: Rx/Tx
> or could be "fast path", to be consistent.
> 
> > +	/*
> > +	 * for secondary process, at that point we expect device
> > +	 * to be already 'usable', so shared data and all function pointers
> > +	 * for 'fast' devops have to be setup properly inside rte_eth_dev.
> > +	 */
> > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > +		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> > +
> >  	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
> >
> >  	dev->state = RTE_ETH_DEV_ATTACHED;
> > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index 948c0b71c1..fe47a660c7 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> >  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> >  /**< @internal Check the status of a Tx descriptor */
> >
> > +/**
> > + * @internal
> > + * Structure used to hold opaque pointernals to internal ethdev RX/TXi
> 
> typos in above line
> 
> > + * queues data.
> > + * The main purpose to expose these pointers at all - allow compiler
> > + * to fetch this data for 'fast' ethdev inline functions in advance.
> > + */
> > +struct rte_ethdev_qdata {
> > +	void **data;
> > +	/**< points to array of internal queue data pointers */
> > +	void **clbk;
> > +	/**< points to array of queue callback data pointers */
> > +};
> > +
> > +/**
> > + * @internal
> > + * 'fast' ethdev funcions and related data are hold in a flat array.
> > + * one entry per ethdev.
> > + */
> > +struct rte_eth_fp_ops {
> > +
> > +	/** first 64B line */
> > +	eth_rx_burst_t rx_pkt_burst;
> > +	/**< PMD receive function. */
> > +	eth_tx_burst_t tx_pkt_burst;
> > +	/**< PMD transmit function. */
> > +	eth_tx_prep_t tx_pkt_prepare;
> > +	/**< PMD transmit prepare function. */
> > +	eth_rx_queue_count_t rx_queue_count;
> > +	/**< Get the number of used RX descriptors. */
> > +	eth_rx_descriptor_status_t rx_descriptor_status;
> > +	/**< Check the status of a Rx descriptor. */
> > +	eth_tx_descriptor_status_t tx_descriptor_status;
> > +	/**< Check the status of a Tx descriptor. */
> > +	uintptr_t reserved[2];
> 
> > uintptr_t size is not fixed.
> I think you mean uint64_t.

Nope, I meant 'uintptr_t' here.
That way it fits really nicely on both 64-bit and 32-bit systems.
For 64-bit systems we have all function pointers on the first 64B line,
and all data pointers on the second 64B line.
For 32-bit systems we have all fields within the first 64B line.
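
FWIW, that intent could even be checked at build time; a purely
illustrative sketch (not part of the patch):

    #include <stddef.h> /* offsetof */

    /* With 8-byte pointers, the six function pointers plus reserved[2]
     * fill the first 64B line, so the queue data starts on the second. */
    _Static_assert(sizeof(uintptr_t) != 8 ||
        offsetof(struct rte_eth_fp_ops, rxq) == 64,
        "unexpected rte_eth_fp_ops layout");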

> > +
> > +	/** second 64B line */
> > +	struct rte_ethdev_qdata rxq;
> > +	struct rte_ethdev_qdata txq;
> > +	uintptr_t reserved2[4];
> > +
> > +} __rte_cache_aligned;
> > +
> > +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> 
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses
  2021-10-05 16:35           ` Ananyev, Konstantin
@ 2021-10-05 16:45             ` Thomas Monjalon
  2021-10-05 17:12               ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 16:45 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay

05/10/2021 18:35, Ananyev, Konstantin:
> > 04/10/2021 15:56, Konstantin Ananyev:
> > > Introduce rte_eth_macaddrs_get() to allow user to retrieve all ethernet
> > > addresses assigned to given port.
> > 
> > We already have functions to get MAC addresses.
> > Please explain the difference.
> 
> rte_eth_macaddr_get() returns just the first (primary) MAC address
> assigned to the port.
> The new one allows the user to retrieve all addresses assigned to the port.

I was sure we had another function.
Anyway, it would be good to reference rte_eth_macaddr_get in the commit message.


> > > +/**
> > > + * Retrieve the Ethernet addresses of an Ethernet device.
> > > + *
> > > + * @param port_id
> > > + *   The port identifier of the Ethernet device.
> > > + * @param ma
> > > + *   A pointer to an array of structures of type *ether_addr* to be filled with
> > > + *   the Ethernet addresses of the Ethernet device.
> > > + * @param num
> > > + *   Number of elements in the *ma* array.
> > > + * @return
> > > + *   - number of retrieved addresses if successful
> > > + *   - (-ENODEV) if *port_id* invalid.
> > > + *   - (-EINVAL) if bad parameter.
> > 
> > Which error if the array is too small?
> 
> None, we just return up to *num* addresses, that's it.

So we don't know whether there are more.

> > > + */
> > > +__rte_experimental
> > > +int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[],
> > 
> > Please don't use array syntax in parameters, it should be a pointer.
> > How do we get the number of returned addresses?
> > 
> > > +	uint32_t num);
> 
> From above:
>  * @return
>  *   - number of retrieved addresses if successful
>  
> > Another approach would be to get addresses one by one by passing an index.
> > 
> 
> Yes, it is another possible way.
> Though the current one seems a bit more convenient to me.

It is still missing a way to get the total number of addresses.



^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array
  2021-10-05 12:09         ` Thomas Monjalon
@ 2021-10-05 16:45           ` Ananyev, Konstantin
  2021-10-05 16:49             ` Thomas Monjalon
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-05 16:45 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay

> > At queue configure stage always allocate space for maximum possible
> > number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> > That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> > pointer to internal queue data without extra checking of current number
> > of configured queues.
> 
> What is the memory usage overhead per port?

(2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT
With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port. 

> We should consider cases with thousands of virtual ports.

For 1K ports (with 1K queues each) it will be 16MB.


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-05 16:41           ` Ananyev, Konstantin
@ 2021-10-05 16:48             ` Thomas Monjalon
  2021-10-05 17:04               ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 16:48 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay

05/10/2021 18:41, Ananyev, Konstantin:
> > > +struct rte_eth_fp_ops {
> > > +
> > > +	/** first 64B line */
> > > +	eth_rx_burst_t rx_pkt_burst;
> > > +	/**< PMD receive function. */
> > > +	eth_tx_burst_t tx_pkt_burst;
> > > +	/**< PMD transmit function. */
> > > +	eth_tx_prep_t tx_pkt_prepare;
> > > +	/**< PMD transmit prepare function. */
> > > +	eth_rx_queue_count_t rx_queue_count;
> > > +	/**< Get the number of used RX descriptors. */
> > > +	eth_rx_descriptor_status_t rx_descriptor_status;
> > > +	/**< Check the status of a Rx descriptor. */
> > > +	eth_tx_descriptor_status_t tx_descriptor_status;
> > > +	/**< Check the status of a Tx descriptor. */
> > > +	uintptr_t reserved[2];
> > 
> > uintptr_t size is not fixed.
> > I think you mean uint64_t.
> 
> Nope, I meant 'uintptr_t' here.
> That way it fits really nicely on both 64-bit and 32-bit systems.
> For 64-bit systems we have all function pointers on the first 64B line,
> and all data pointers on the second 64B line.
> For 32-bit systems we have all fields within the first 64B line.

OK but then the next comment is partially wrong:

> > > +
> > > +	/** second 64B line */
> > > +	struct rte_ethdev_qdata rxq;
> > > +	struct rte_ethdev_qdata txq;
> > > +	uintptr_t reserved2[4];
> > > +
> > > +} __rte_cache_aligned;
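
To make the 64B-line argument above concrete, a small self-contained sketch
(not the real DPDK header) that checks the claimed split at build time. The
field order mirrors the v4 struct quoted above; 'qdata_mock' is only an
assumption standing in for struct rte_ethdev_qdata, taken to hold two opaque
pointers.

#include <stdint.h>

typedef uint16_t (*burst_fn_t)(void *queue, void **pkts, uint16_t nb);

struct qdata_mock {		/* stand-in for struct rte_ethdev_qdata */
	void **data;		/* per-queue data pointers */
	void **clbk;		/* per-queue callback pointers */
};

struct fp_ops_mock {
	/* six function pointers + 2 * uintptr_t: 64B on 64-bit, 32B on 32-bit */
	burst_fn_t rx_pkt_burst;
	burst_fn_t tx_pkt_burst;
	burst_fn_t tx_pkt_prepare;
	burst_fn_t rx_queue_count;
	burst_fn_t rx_descriptor_status;
	burst_fn_t tx_descriptor_status;
	uintptr_t reserved[2];
	/* second 64B line on 64-bit, rest of the single line on 32-bit */
	struct qdata_mock rxq;
	struct qdata_mock txq;
	uintptr_t reserved2[4];
} __attribute__((aligned(64)));	/* mimics __rte_cache_aligned */

/* two 64B lines on 64-bit, one 64B line on 32-bit */
_Static_assert(sizeof(struct fp_ops_mock) ==
	       (sizeof(uintptr_t) == 8 ? 128 : 64),
	       "unexpected fast-path ops layout");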




^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array
  2021-10-05 16:45           ` Ananyev, Konstantin
@ 2021-10-05 16:49             ` Thomas Monjalon
  0 siblings, 0 replies; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 16:49 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay

05/10/2021 18:45, Ananyev, Konstantin:
> > > At queue configure stage always allocate space for the maximum possible
> > > number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> > > That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer to
> > > internal queue data pointers without extra checking of the current number
> > > of configured queues.
> > 
> > What is the memory usage overhead per port?
> 
> (2 * sizeof(uintptr_t)) * RTE_MAX_QUEUES_PER_PORT
> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port. 

Please add it in the commit log.

> > We should consider cases with thousands of virtual ports.
> 
> For 1K ports (with 1K queues each) it will be 16MB.

OK, it looks reasonable.



^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-05 16:48             ` Thomas Monjalon
@ 2021-10-05 17:04               ` Ananyev, Konstantin
  0 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-05 17:04 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay



> > > > +struct rte_eth_fp_ops {
> > > > +
> > > > +	/** first 64B line */
> > > > +	eth_rx_burst_t rx_pkt_burst;
> > > > +	/**< PMD receive function. */
> > > > +	eth_tx_burst_t tx_pkt_burst;
> > > > +	/**< PMD transmit function. */
> > > > +	eth_tx_prep_t tx_pkt_prepare;
> > > > +	/**< PMD transmit prepare function. */
> > > > +	eth_rx_queue_count_t rx_queue_count;
> > > > +	/**< Get the number of used RX descriptors. */
> > > > +	eth_rx_descriptor_status_t rx_descriptor_status;
> > > > +	/**< Check the status of a Rx descriptor. */
> > > > +	eth_tx_descriptor_status_t tx_descriptor_status;
> > > > +	/**< Check the status of a Tx descriptor. */
> > > > +	uintptr_t reserved[2];
> > >
> > > uintptr_t size is not fixed.
> > > I think you mean uint64_t.
> >
> > Nope, I meant 'uintptr_t' here.
> > That way it fits really nicely on both 64-bit and 32-bit systems.
> > For 64-bit systems we have all function pointers on the first 64B line,
> > and all data pointers on the second 64B line.
> > For 32-bit systems we have all fields within the first 64B line.
> 
> OK but then the next comment is partially wrong:

True.
In fact, after I replied to you, I thought it might be better
to have the first 64B line for RX functions and data, and the
second 64B line for TX functions and data.
Will probably give it a try.
Anyway, will update the comment.

> 
> > > > +
> > > > +	/** second 64B line */
> > > > +	struct rte_ethdev_qdata rxq;
> > > > +	struct rte_ethdev_qdata txq;
> > > > +	uintptr_t reserved2[4];
> > > > +
> > > > +} __rte_cache_aligned;
> 
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses
  2021-10-05 16:45             ` Thomas Monjalon
@ 2021-10-05 17:12               ` Ananyev, Konstantin
  2021-10-05 17:41                 ` Thomas Monjalon
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-05 17:12 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay



 
> 05/10/2021 18:35, Ananyev, Konstantin:
> > > 04/10/2021 15:56, Konstantin Ananyev:
> > > > Introduce rte_eth_macaddrs_get() to allow the user to retrieve all Ethernet
> > > > addresses assigned to a given port.
> > >
> > > We already have functions to get MAC addresses.
> > > Please explain the difference.
> >
> > rte_eth_macaddr_get() returns just the first (primary) MAC address
> > assigned to the port.
> > The new one allows the user to retrieve all addresses assigned to the port.
> 
> I was sure we had other function.

I didn't find one.
If we do have one, I am OK to drop that change and rework the testpmd code
to use the existing one instead.

> Anyway, would be good to reference rte_eth_macaddr_get in the commit.
> 
> 
> > > > +/**
> > > > + * Retrieve the Ethernet addresses of an Ethernet device.
> > > > + *
> > > > + * @param port_id
> > > > + *   The port identifier of the Ethernet device.
> > > > + * @param ma
> > > > + *   A pointer to an array of structures of type *ether_addr* to be filled with
> > > > + *   the Ethernet addresses of the Ethernet device.
> > > > + * @param num
> > > > + *   Number of elements in the *ma* array.
> > > > + * @return
> > > > + *   - number of retrieved addresses if successful
> > > > + *   - (-ENODEV) if *port_id* invalid.
> > > > + *   - (-EINVAL) if bad parameter.
> > >
> > > Which error if the array is too small?
> >
> > None, we just return up to *num* addresses, that's it.
> 
> So we don't know whether there are more.

rte_eth_dev_info.max_mac_addrs tells the max number of MACs for this port.

> 
> > > > + */
> > > > +__rte_experimental
> > > > +int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[],
> > >
> > > Please don't use array syntax in parameters, it should be a pointer.
> > > How do we get the number of returned addresses?
> > >
> > > > +	uint32_t num);
> >
> > From above:
> >  * @return
> >  *   - number of retrieved addresses if successful
> >
> > > Another approach would be to get addresses one by one by passing an index.
> > >
> >
> > Yes, it is another possible way.
> > Though the current one seems a bit more convenient to me.
> 
> It misses a way to get the number of addresses.
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses
  2021-10-05 17:12               ` Ananyev, Konstantin
@ 2021-10-05 17:41                 ` Thomas Monjalon
  0 siblings, 0 replies; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-05 17:41 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay

05/10/2021 19:12, Ananyev, Konstantin:
> > 05/10/2021 18:35, Ananyev, Konstantin:
> > > > 04/10/2021 15:56, Konstantin Ananyev:
> > > > > Introduce rte_eth_macaddrs_get() to allow the user to retrieve all Ethernet
> > > > > addresses assigned to a given port.
> > > >
> > > > We already have functions to get MAC addresses.
> > > > Please explain the difference.
> > >
> > > rte_eth_macaddr_get() returns just the first (primary) MAC address
> > > assigned to the port.
> > > The new one allows the user to retrieve all addresses assigned to the port.
> > 
> > I was sure we had other function.
> 
> I didn't find one.
> If we do have one, I am OK to drop that change and rework the testpmd code
> to use the existing one instead.

No, there is no alternative, so your patch is welcome.

> > Anyway, would be good to reference rte_eth_macaddr_get in the commit.
> > 
> > 
> > > > > +/**
> > > > > + * Retrieve the Ethernet addresses of an Ethernet device.
> > > > > + *
> > > > > + * @param port_id
> > > > > + *   The port identifier of the Ethernet device.
> > > > > + * @param ma
> > > > > + *   A pointer to an array of structures of type *ether_addr* to be filled with
> > > > > + *   the Ethernet addresses of the Ethernet device.
> > > > > + * @param num
> > > > > + *   Number of elements in the *ma* array.
> > > > > + * @return
> > > > > + *   - number of retrieved addresses if successful
> > > > > + *   - (-ENODEV) if *port_id* invalid.
> > > > > + *   - (-EINVAL) if bad parameter.
> > > >
> > > > Which error if the array is too small?
> > >
> > > None, we just return up to *num* addresses, that's it.
> > 
> > So we don't know whether there are more.
> 
> rte_eth_dev_info.max_mac_addrs tells the max number of MACs for this port.

Right. Please mention it in the comment of this function.
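
As an illustration of that pattern, a minimal usage sketch, assuming the
proposed rte_eth_macaddrs_get() lands with the signature quoted above;
rte_eth_dev_info_get(), max_mac_addrs and the rte_ether helpers are existing
ethdev APIs, and error handling is trimmed:

#include <stdio.h>
#include <stdlib.h>

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
dump_port_macs(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	struct rte_ether_addr *macs;
	int n, i;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return -1;

	/* max_mac_addrs bounds how many addresses the port can hold */
	macs = calloc(info.max_mac_addrs, sizeof(*macs));
	if (macs == NULL)
		return -1;

	/* proposed API: returns the number of addresses written, or < 0 */
	n = rte_eth_macaddrs_get(port_id, macs, info.max_mac_addrs);
	for (i = 0; i < n; i++) {
		char buf[RTE_ETHER_ADDR_FMT_SIZE];

		rte_ether_format_addr(buf, sizeof(buf), &macs[i]);
		printf("port %u mac %d: %s\n", (unsigned int)port_id, i, buf);
	}

	free(macs);
	return n < 0 ? -1 : 0;
}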




^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
                         ` (6 preceding siblings ...)
  2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-06 16:42       ` Ali Alnubani
  2021-10-06 17:26         ` Ali Alnubani
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
  8 siblings, 1 reply; 112+ messages in thread
From: Ali Alnubani @ 2021-10-06 16:42 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	Matan Azrad, Slava Ovsiienko, sthemmin, NBU-Contact-longli,
	heinrich.kuhn, kirankumark, andrew.rybchenko, mczekaj, jiawenwu,
	jianwang, maxime.coquelin, chenbo.xia,
	NBU-Contact-Thomas Monjalon, ferruh.yigit, mdr,
	jay.jayatheerthan

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Konstantin Ananyev
> Sent: Monday, October 4, 2021 4:56 PM
> To: dev@dpdk.org
> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com;
> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com;
> jianwang@trustnetic.com; maxime.coquelin@redhat.com;
> chenbo.xia@intel.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com; mdr@ashroe.eu;
> jay.jayatheerthan@intel.com; Konstantin Ananyev
> <konstantin.ananyev@intel.com>
> Subject: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
> 
> v4 changes:
>  - Fix secondary process attach (Pavan)
>  - Fix build failure (Ferruh)
>  - Update lib/ethdev/version.map (Ferruh)
>    Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
>    section makes checkpatch.sh complain.
> 
> v3 changes:
>  - Changes in public struct naming (Jerin/Haiyue)
>  - Split patches
>  - Update docs
>  - Shamelessly included Andrew's patch:
>    https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-
> andrew.rybchenko@oktetlabs.ru/
>    into these series.
>    I have to do a similar thing here, so decided to avoid duplicated effort.
> 
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> DPDK and not visible to the user.
> That should allow future possible changes to core ethdev related structures
> to be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
> 
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
>    related data pointer from rte_eth_dev into a separate flat array.
>    We keep it public to still be able to use inline functions for these
>    'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>    Note that apart from the function pointers themselves, each element of this
>    flat array also contains two opaque pointers for each ethdev:
>    1) a pointer to an array of internal queue data pointers
>    2) a pointer to an array of queue callback data pointers.
>    Note that exposing this extra information allows us to avoid extra
>    changes inside PMD level, plus should help to avoid possible
>    performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
>    (rte_eth_rx_burst(), etc.) to use new public flat array.
>    While it is an ABI breakage, this change is intended to be transparent
>    for both users (no changes in the user app are required) and PMD developers
>    (no changes in the PMD are required).
>    One extra note - with the new implementation, RX/TX callback invocation
>    will cost one extra function call. That might cause
>    some slowdown for code-paths with RX/TX callbacks heavily involved.
>    Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>    things into internal header: <ethdev_driver.h>.
> 
> That approach was selected to:
>   - Avoid(/minimize) possible performance losses.
>   - Minimize required changes inside PMDs.
> 
> Performance testing results (ICX 2.0GHz, E810 (ice)):
>  - testpmd macswap fwd mode, plus
>    a) no RX/TX callbacks:
>       no actual slowdown observed
>    b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
>       ~2% slowdown
>  - l3fwd: no actual slowdown observed
> 
> Would like to thank everyone who already reviewed and tested previous
> versions of these series. All other interested parties please don't be shy and
> provide your feedback.
> 
> Konstantin Ananyev (7):
>   ethdev: allocate max space for internal queue array
>   ethdev: change input parameters for rx_queue_count
>   ethdev: copy ethdev 'fast' API into separate structure
>   ethdev: make burst functions to use new flat array
>   ethdev: add API to retrieve multiple ethernet addresses
>   ethdev: remove legacy Rx descriptor done API
>   ethdev: hide eth dev related structures
> 

Tested single and multi-core packet forwarding performance with testpmd on both ConnectX-5 and ConnectX-6 Dx.

Tested-by: Ali Alnubani <alialnu@nvidia.com>

Thanks,
Ali

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
  2021-10-06 16:42       ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
@ 2021-10-06 17:26         ` Ali Alnubani
  0 siblings, 0 replies; 112+ messages in thread
From: Ali Alnubani @ 2021-10-06 17:26 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	Matan Azrad, Slava Ovsiienko, sthemmin, NBU-Contact-longli,
	heinrich.kuhn, kirankumark, andrew.rybchenko, mczekaj, jiawenwu,
	jianwang, maxime.coquelin, chenbo.xia,
	NBU-Contact-Thomas Monjalon, ferruh.yigit, mdr,
	jay.jayatheerthan

> -----Original Message-----
> From: Ali Alnubani
> Sent: Wednesday, October 6, 2021 7:43 PM
> To: Konstantin Ananyev <konstantin.ananyev@intel.com>; dev@dpdk.org
> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com;
> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com;
> jianwang@trustnetic.com; maxime.coquelin@redhat.com;
> chenbo.xia@intel.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com; mdr@ashroe.eu;
> jay.jayatheerthan@intel.com
> Subject: RE: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
> 
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Konstantin Ananyev
> > Sent: Monday, October 4, 2021 4:56 PM
> > To: dev@dpdk.org
> > Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> > ndabilpuram@marvell.com; adwivedi@marvell.com;
> > shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> > john.miller@atomicrules.com; irusskikh@marvell.com;
> > ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> > rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> > sachin.saxena@oss.nxp.com; haiyue.wang@intel.com;
> johndale@cisco.com;
> > hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> > humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> > beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> > Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> > <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> > kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> > mczekaj@marvell.com; jiawenwu@trustnetic.com;
> jianwang@trustnetic.com;
> > maxime.coquelin@redhat.com; chenbo.xia@intel.com; NBU-Contact-
> Thomas
> > Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com;
> mdr@ashroe.eu;
> > jay.jayatheerthan@intel.com; Konstantin Ananyev
> > <konstantin.ananyev@intel.com>
> > Subject: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
> >
> > v4 changes:
> >  - Fix secondary process attach (Pavan)
> >  - Fix build failure (Ferruh)
> >  - Update lib/ethdev/version.map (Ferruh)
> >    Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
> >    section makes checkpatch.sh complain.
> >
> > v3 changes:
> >  - Changes in public struct naming (Jerin/Haiyue)
> >  - Split patches
> >  - Update docs
> >  - Shamelessly included Andrew's patch:
> >
> > https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-
> > andrew.rybchenko@oktetlabs.ru/
> >    into these series.
> >    I have to do a similar thing here, so decided to avoid duplicated effort.
> >
> > The aim of these patch series is to make rte_ethdev core data
> > structures (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback,
> > etc.) internal to DPDK and not visible to the user.
> > That should allow future possible changes to core ethdev related
> > structures to be transparent to the user and help to improve ABI/API
> stability.
> > Note that current ethdev API is preserved, but it is a formal ABI break.
> >
> > The work is based on previous discussions at:
> > https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> > https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> > and consists of the following main points:
> > 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
> >    related data pointer from rte_eth_dev into a separate flat array.
> >    We keep it public to still be able to use inline functions for these
> >    'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> >    Note that apart from the function pointers themselves, each element of this
> >    flat array also contains two opaque pointers for each ethdev:
> >    1) a pointer to an array of internal queue data pointers
> >    2) a pointer to an array of queue callback data pointers.
> >    Note that exposing this extra information allows us to avoid extra
> >    changes inside PMD level, plus should help to avoid possible
> >    performance degradation.
> > 2. Change implementation of 'fast' inline ethdev functions
> >    (rte_eth_rx_burst(), etc.) to use new public flat array.
> >    While it is an ABI breakage, this change is intended to be transparent
> >    for both users (no changes in the user app are required) and PMD developers
> >    (no changes in the PMD are required).
> >    One extra note - with the new implementation, RX/TX callback invocation
> >    will cost one extra function call. That might cause
> >    some slowdown for code-paths with RX/TX callbacks heavily involved.
> >    Hope such trade-off is acceptable for the community.
> > 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and
> related
> >    things into internal header: <ethdev_driver.h>.
> >
> > That approach was selected to:
> >   - Avoid(/minimize) possible performance losses.
> >   - Minimize required changes inside PMDs.
> >
> > Performance testing results (ICX 2.0GHz, E810 (ice)):
> >  - testpmd macswap fwd mode, plus
> >    a) no RX/TX callbacks:
> >       no actual slowdown observed
> >    b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
> >       ~2% slowdown
> >  - l3fwd: no actual slowdown observed
> >
> > Would like to thank everyone who already reviewed and tested previous
> > versions of these series. All other interested parties please don't be
> > shy and provide your feedback.
> >
> > Konstantin Ananyev (7):
> >   ethdev: allocate max space for internal queue array
> >   ethdev: change input parameters for rx_queue_count
> >   ethdev: copy ethdev 'fast' API into separate structure
> >   ethdev: make burst functions to use new flat array
> >   ethdev: add API to retrieve multiple ethernet addresses
> >   ethdev: remove legacy Rx descriptor done API
> >   ethdev: hide eth dev related structures
> >
> 
> Tested single and multi-core packet forwarding performance with testpmd
> on both ConnectX-5 and ConnectX-6 Dx.
> 

I should have mentioned that I didn't see any noticeable regressions with the cases I mentioned.

> Tested-by: Ali Alnubani <alialnu@nvidia.com>
> 
> Thanks,
> Ali

^ permalink raw reply	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v5 0/7] hide eth dev related structures
  2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
                         ` (7 preceding siblings ...)
  2021-10-06 16:42       ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
@ 2021-10-07 11:27       ` Konstantin Ananyev
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 1/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
                           ` (8 more replies)
  8 siblings, 9 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

v5 changes:
- Fix spelling (Thomas/David)
- Rename internal helper functions (David)
- Reorder patches and update commit messages (Thomas)
- Update comments (Thomas)
- Changed layout in rte_eth_fp_ops to group functions and
   related data based on their functionality:
   first 64B line for Rx, second one for Tx.
   Didn't observe any real performance difference compared to the
   original layout, though decided to keep the new one, as it seems
   a bit more sensible.

v4 changes:
 - Fix secondary process attach (Pavan)
 - Fix build failure (Ferruh)
 - Update lib/ethdev/version.map (Ferruh)
   Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
   section makes checkpatch.sh complain.

v3 changes:
 - Changes in public struct naming (Jerin/Haiyue)
 - Split patches
 - Update docs
 - Shamelessly included Andrew's patch:
   https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
   into these series.
   I have to do a similar thing here, so decided to avoid duplicated effort.

The aim of these patch series is to make rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow future possible changes to core ethdev related structures
to be transparent to the user and help to improve ABI/API stability.
Note that current ethdev API is preserved, but it is a formal ABI break.

The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
   related data pointer from rte_eth_dev into a separate flat array.
   We keep it public to still be able to use inline functions for these
   'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
   Note that apart from the function pointers themselves, each element of this
   flat array also contains two opaque pointers for each ethdev:
   1) a pointer to an array of internal queue data pointers
   2) a pointer to an array of queue callback data pointers.
   Note that exposing this extra information allows us to avoid extra
   changes inside PMD level, plus should help to avoid possible
   performance degradation.
2. Change implementation of 'fast' inline ethdev functions
   (rte_eth_rx_burst(), etc.) to use new public flat array.
   While it is an ABI breakage, this change is intended to be transparent
   for both users (no changes in the user app are required) and PMD developers
   (no changes in the PMD are required).
   One extra note - with the new implementation, RX/TX callback invocation
   will cost one extra function call. That might cause
   some slowdown for code-paths with RX/TX callbacks heavily involved.
   Hope such trade-off is acceptable for the community.
   (A simplified sketch of the resulting inline fast-path call follows
   this list.)
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
   things into internal header: <ethdev_driver.h>.
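
As referenced in point 2 above, a simplified, hypothetical sketch of the
indexing pattern the flat array enables; this is not the exact code from the
patch, and the mock names (fp_ops_mock, MAX_PORTS_MOCK) are illustrative
stand-ins for rte_eth_fp_ops[] and the real port limit:

#include <stdint.h>

struct mbuf;					/* opaque packet buffer */
typedef uint16_t (*rx_burst_fn)(void *rxq, struct mbuf **pkts, uint16_t nb);

struct fp_ops_mock {
	rx_burst_fn rx_pkt_burst;	/* per-port fast-path Rx function */
	void **rxq_data;		/* array of internal Rx queue pointers */
};

#define MAX_PORTS_MOCK 32
/* the public flat array, indexed by port_id */
static struct fp_ops_mock fp_ops_mock[MAX_PORTS_MOCK];

/*
 * Roughly what the inline rte_eth_rx_burst() boils down to with the flat
 * array: index by port, fetch the internal queue pointer, and call through
 * a single function pointer (sanity checks and callback handling omitted).
 */
static inline uint16_t
rx_burst_sketch(uint16_t port_id, uint16_t queue_id,
		struct mbuf **pkts, uint16_t nb_pkts)
{
	const struct fp_ops_mock *p = &fp_ops_mock[port_id];
	void *qd = p->rxq_data[queue_id];

	return p->rx_pkt_burst(qd, pkts, nb_pkts);
}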

That approach was selected to:
  - Avoid(/minimize) possible performance losses.
  - Minimize required changes inside PMDs.

Performance testing results (ICX 2.0GHz, E810 (ice)):
 - testpmd macswap fwd mode, plus
   a) no RX/TX callbacks:
      no actual slowdown observed
   b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
      ~2% slowdown
 - l3fwd: no actual slowdown observed

Would like to thank everyone who already reviewed and tested previous
versions of these series. All other interested parties please don't be shy
and provide your feedback.

Andrew Rybchenko (1):
  ethdev: remove legacy Rx descriptor done API

Konstantin Ananyev (6):
  ethdev: allocate max space for internal queue array
  ethdev: change input parameters for rx_queue_count
  ethdev: copy fast-path API into separate structure
  ethdev: make fast-path functions to use new flat array
  ethdev: add API to retrieve multiple ethernet addresses
  ethdev: hide eth dev related structures

 app/test-pmd/config.c                         |  23 +-
 doc/guides/nics/features.rst                  |   6 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |  21 ++
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/ark/ark_ethdev_rx.c               |   4 +-
 drivers/net/ark/ark_ethdev_rx.h               |   3 +-
 drivers/net/atlantic/atl_ethdev.h             |   2 +-
 drivers/net/atlantic/atl_rxtx.c               |   9 +-
 drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                |   9 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   9 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/e1000/e1000_ethdev.h              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |   1 -
 drivers/net/e1000/em_rxtx.c                   |  21 +-
 drivers/net/e1000/igb_ethdev.c                |   2 -
 drivers/net/e1000/igb_rxtx.c                  |  21 +-
 drivers/net/enic/enic_ethdev.c                |  12 +-
 drivers/net/fm10k/fm10k.h                     |   5 +-
 drivers/net/fm10k/fm10k_ethdev.c              |   1 -
 drivers/net/fm10k/fm10k_rxtx.c                |  29 +-
 drivers/net/hns3/hns3_rxtx.c                  |   7 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |   1 -
 drivers/net/i40e/i40e_rxtx.c                  |  30 +-
 drivers/net/i40e/i40e_rxtx.h                  |   4 +-
 drivers/net/iavf/iavf_rxtx.c                  |   4 +-
 drivers/net/iavf/iavf_rxtx.h                  |   2 +-
 drivers/net/ice/ice_rxtx.c                    |   4 +-
 drivers/net/ice/ice_rxtx.h                    |   2 +-
 drivers/net/igc/igc_ethdev.c                  |   1 -
 drivers/net/igc/igc_txrx.c                    |  23 +-
 drivers/net/igc/igc_txrx.h                    |   5 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   2 -
 drivers/net/ixgbe/ixgbe_ethdev.h              |   5 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |  22 +-
 drivers/net/mlx5/mlx5_rx.c                    |  26 +-
 drivers/net/mlx5/mlx5_rx.h                    |   2 +-
 drivers/net/netvsc/hn_rxtx.c                  |   4 +-
 drivers/net/netvsc/hn_var.h                   |   3 +-
 drivers/net/nfp/nfp_rxtx.c                    |   4 +-
 drivers/net/nfp/nfp_rxtx.h                    |   3 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   1 -
 drivers/net/octeontx2/otx2_ethdev.h           |   3 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |  20 +-
 drivers/net/sfc/sfc_ethdev.c                  |  29 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   3 +-
 drivers/net/thunderx/nicvf_rxtx.c             |   4 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.h              |   3 +-
 drivers/net/txgbe/txgbe_rxtx.c                |   4 +-
 drivers/net/vhost/rte_eth_vhost.c             |   4 +-
 drivers/net/virtio/virtio_ethdev.c            |   1 -
 lib/ethdev/ethdev_driver.h                    | 148 +++++++++
 lib/ethdev/ethdev_private.c                   |  83 +++++
 lib/ethdev/ethdev_private.h                   |   7 +
 lib/ethdev/rte_ethdev.c                       |  89 ++++--
 lib/ethdev/rte_ethdev.h                       | 288 ++++++++++++------
 lib/ethdev/rte_ethdev_core.h                  | 171 +++--------
 lib/ethdev/version.map                        |   8 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 lib/metrics/rte_metrics_telemetry.c           |   2 +-
 67 files changed, 677 insertions(+), 564 deletions(-)

-- 
2.26.3


^ permalink raw reply	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v5 1/7] ethdev: remove legacy Rx descriptor done API
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
@ 2021-10-07 11:27         ` Konstantin Ananyev
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
                           ` (7 subsequent siblings)
  8 siblings, 0 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan

From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

rte_eth_rx_descriptor_status() should be used as a replacement.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/nics/features.rst            |  6 +-----
 doc/guides/rel_notes/deprecation.rst    |  5 -----
 doc/guides/rel_notes/release_21_11.rst  |  4 ++++
 drivers/net/e1000/e1000_ethdev.h        |  4 ----
 drivers/net/e1000/em_ethdev.c           |  1 -
 drivers/net/e1000/em_rxtx.c             | 17 ----------------
 drivers/net/e1000/igb_ethdev.c          |  2 --
 drivers/net/e1000/igb_rxtx.c            | 17 ----------------
 drivers/net/fm10k/fm10k.h               |  3 ---
 drivers/net/fm10k/fm10k_ethdev.c        |  1 -
 drivers/net/fm10k/fm10k_rxtx.c          | 25 ------------------------
 drivers/net/i40e/i40e_ethdev.c          |  1 -
 drivers/net/i40e/i40e_rxtx.c            | 26 -------------------------
 drivers/net/i40e/i40e_rxtx.h            |  1 -
 drivers/net/igc/igc_ethdev.c            |  1 -
 drivers/net/igc/igc_txrx.c              | 18 -----------------
 drivers/net/igc/igc_txrx.h              |  2 --
 drivers/net/ixgbe/ixgbe_ethdev.c        |  2 --
 drivers/net/ixgbe/ixgbe_ethdev.h        |  2 --
 drivers/net/ixgbe/ixgbe_rxtx.c          | 18 -----------------
 drivers/net/octeontx2/otx2_ethdev.c     |  1 -
 drivers/net/octeontx2/otx2_ethdev.h     |  1 -
 drivers/net/octeontx2/otx2_ethdev_ops.c | 12 ------------
 drivers/net/sfc/sfc_ethdev.c            | 17 ----------------
 drivers/net/virtio/virtio_ethdev.c      |  1 -
 lib/ethdev/rte_ethdev.c                 |  1 -
 lib/ethdev/rte_ethdev.h                 | 25 ------------------------
 lib/ethdev/rte_ethdev_core.h            |  4 ----
 28 files changed, 5 insertions(+), 213 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4fce8cd1c9..a02ef25409 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -662,14 +662,10 @@ Rx descriptor status
 --------------------
 
 Supports check the status of a Rx descriptor. When ``rx_descriptor_status`` is
-used, status can be "Available", "Done" or "Unavailable". When
-``rx_descriptor_done`` is used, status can be "DD bit is set" or "DD bit is
-not set".
+used, status can be "Available", "Done" or "Unavailable".
 
 * **[implements] rte_eth_dev**: ``rx_descriptor_status``.
 * **[related]    API**: ``rte_eth_rx_descriptor_status()``.
-* **[implements] rte_eth_dev**: ``rx_descriptor_done``.
-* **[related]    API**: ``rte_eth_rx_descriptor_done()``.
 
 
 .. _nic_features_tx_descriptor_status:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a2fe766d4b..3d1331bbcb 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -106,11 +106,6 @@ Deprecation Notices
   the device packet overhead can be calculated as:
   ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
 
-* ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
-  will be removed in 21.11.
-  Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
-  APIs can be used as replacement.
-
 * ethdev: The port mirroring API can be replaced with a more fine grain flow API.
   The structs ``rte_eth_mirror_conf``, ``rte_eth_vlan_mirror`` and the functions
   ``rte_eth_mirror_rule_set``, ``rte_eth_mirror_rule_reset`` will be marked
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dfc2cbdeed..436d29afda 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -154,6 +154,10 @@ Removed Items
   iavf already became the default VF driver for i40e devices,
   so there is no need to maintain i40evf.
 
+* ethdev: Removed ``rx_descriptor_done`` dev_ops and
+  ``rte_eth_rx_descriptor_done``.  Existing ``rte_eth_rx_descriptor_status``
+  APIs can be used as a replacement.
+
 
 API Changes
 -----------
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6..c01e3ee9c5 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -402,8 +402,6 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id);
 
-int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
@@ -479,8 +477,6 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id);
 
-int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b02..9e157e4ffe 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -247,7 +247,6 @@ eth_em_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &eth_em_ops;
 	eth_dev->rx_queue_count = eth_em_rx_queue_count;
-	eth_dev->rx_descriptor_done   = eth_em_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = eth_em_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = eth_em_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = (eth_rx_burst_t)&eth_em_recv_pkts;
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..048b9148ed 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1511,23 +1511,6 @@ eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	return desc;
 }
 
-int
-eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile struct e1000_rx_desc *rxdp;
-	struct em_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(offset >= rxq->nb_rx_desc))
-		return 0;
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->status & E1000_RXD_STAT_DD);
-}
-
 int
 eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index d80fad01e3..e1bc3852fc 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -726,7 +726,6 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &eth_igb_ops;
 	eth_dev->rx_queue_count = eth_igb_rx_queue_count;
-	eth_dev->rx_descriptor_done   = eth_igb_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = eth_igb_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = eth_igb_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &eth_igb_recv_pkts;
@@ -920,7 +919,6 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_FUNC_TRACE();
 
 	eth_dev->dev_ops = &igbvf_eth_dev_ops;
-	eth_dev->rx_descriptor_done   = eth_igb_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = eth_igb_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = eth_igb_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &eth_igb_recv_pkts;
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712..46a7789d90 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1791,23 +1791,6 @@ eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	return desc;
 }
 
-int
-eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union e1000_adv_rx_desc *rxdp;
-	struct igb_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(offset >= rxq->nb_rx_desc))
-		return 0;
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->wb.upper.status_error & E1000_RXD_STAT_DD);
-}
-
 int
 eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc..2e47ada829 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -326,9 +326,6 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 uint32_t
 fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 
-int
-fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int
 fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e40..f9c287a6ba 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -3062,7 +3062,6 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
 
 	dev->dev_ops = &fm10k_eth_dev_ops;
 	dev->rx_queue_count = fm10k_dev_rx_queue_count;
-	dev->rx_descriptor_done	= fm10k_dev_rx_descriptor_done;
 	dev->rx_descriptor_status = fm10k_dev_rx_descriptor_status;
 	dev->tx_descriptor_status = fm10k_dev_tx_descriptor_status;
 	dev->rx_pkt_burst = &fm10k_recv_pkts;
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..d9833505d1 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -393,31 +393,6 @@ fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	return desc;
 }
 
-int
-fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union fm10k_rx_desc *rxdp;
-	struct fm10k_rx_queue *rxq = rx_queue;
-	uint16_t desc;
-	int ret;
-
-	if (unlikely(offset >= rxq->nb_desc)) {
-		PMD_DRV_LOG(ERR, "Invalid RX descriptor offset %u", offset);
-		return 0;
-	}
-
-	desc = rxq->next_dd + offset;
-	if (desc >= rxq->nb_desc)
-		desc -= rxq->nb_desc;
-
-	rxdp = &rxq->hw_ring[desc];
-
-	ret = !!(rxdp->w.status &
-			rte_cpu_to_le_16(FM10K_RXD_STATUS_DD));
-
-	return ret;
-}
-
 int
 fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index bd97d93dd7..e5e26783bf 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1434,7 +1434,6 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 
 	dev->dev_ops = &i40e_eth_dev_ops;
 	dev->rx_queue_count = i40e_dev_rx_queue_count;
-	dev->rx_descriptor_done = i40e_dev_rx_descriptor_done;
 	dev->rx_descriptor_status = i40e_dev_rx_descriptor_status;
 	dev->tx_descriptor_status = i40e_dev_tx_descriptor_status;
 	dev->rx_pkt_burst = i40e_recv_pkts;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index d5847ac6b5..0fd9fef8e0 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2135,32 +2135,6 @@ i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	return desc;
 }
 
-int
-i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union i40e_rx_desc *rxdp;
-	struct i40e_rx_queue *rxq = rx_queue;
-	uint16_t desc;
-	int ret;
-
-	if (unlikely(offset >= rxq->nb_rx_desc)) {
-		PMD_DRV_LOG(ERR, "Invalid RX descriptor id %u", offset);
-		return 0;
-	}
-
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &(rxq->rx_ring[desc]);
-
-	ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
-		I40E_RXD_QW1_STATUS_MASK) >> I40E_RXD_QW1_STATUS_SHIFT) &
-				(1 << I40E_RX_DESC_STATUS_DD_SHIFT));
-
-	return ret;
-}
-
 int
 i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e8..842924bce5 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -227,7 +227,6 @@ void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
 uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
 				 uint16_t rx_queue_id);
-int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a095483..9ddfe68a78 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1227,7 +1227,6 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 	dev->dev_ops = &eth_igc_ops;
-	dev->rx_descriptor_done	= eth_igc_rx_descriptor_done;
 	dev->rx_queue_count = eth_igc_rx_queue_count;
 	dev->rx_descriptor_status = eth_igc_rx_descriptor_status;
 	dev->tx_descriptor_status = eth_igc_tx_descriptor_status;
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd2..3979dca660 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -757,24 +757,6 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
 	return desc;
 }
 
-int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union igc_adv_rx_desc *rxdp;
-	struct igc_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(!rxq || offset >= rxq->nb_rx_desc))
-		return 0;
-
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->wb.upper.status_error &
-			rte_cpu_to_le_32(IGC_RXD_STAT_DD));
-}
-
 int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
 	struct igc_rx_queue *rxq = rx_queue;
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index f2b2d75bbc..d6f3799639 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -25,8 +25,6 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id);
 
-int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset);
 
 int eth_igc_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 47693c0c47..78f61d3dac 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1057,7 +1057,6 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 
 	eth_dev->dev_ops = &ixgbe_eth_dev_ops;
 	eth_dev->rx_queue_count       = ixgbe_dev_rx_queue_count;
-	eth_dev->rx_descriptor_done   = ixgbe_dev_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = ixgbe_dev_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = ixgbe_dev_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
@@ -1546,7 +1545,6 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_FUNC_TRACE();
 
 	eth_dev->dev_ops = &ixgbevf_eth_dev_ops;
-	eth_dev->rx_descriptor_done   = ixgbe_dev_rx_descriptor_done;
 	eth_dev->rx_descriptor_status = ixgbe_dev_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = ixgbe_dev_tx_descriptor_status;
 	eth_dev->rx_pkt_burst = &ixgbe_recv_pkts;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca24..37976902a1 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -605,8 +605,6 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id);
 
-int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
-
 int ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755..0af9ce8aee 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3281,24 +3281,6 @@ ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	return desc;
 }
 
-int
-ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union ixgbe_adv_rx_desc *rxdp;
-	struct ixgbe_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(offset >= rxq->nb_rx_desc))
-		return 0;
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->wb.upper.status_error &
-			rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD));
-}
-
 int
 ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e..4b33056085 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2449,7 +2449,6 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	int rc, max_entries;
 
 	eth_dev->dev_ops = &otx2_eth_dev_ops;
-	eth_dev->rx_descriptor_done = otx2_nix_rx_descriptor_done;
 	eth_dev->rx_queue_count = otx2_nix_rx_queue_count;
 	eth_dev->rx_descriptor_status = otx2_nix_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = otx2_nix_tx_descriptor_status;
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30b..90bafcea8e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -433,7 +433,6 @@ int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
-int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d..5cb3905b64 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -365,18 +365,6 @@ nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
 	return 0;
 }
 
-int
-otx2_nix_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	struct otx2_eth_rxq *rxq = rx_queue;
-	uint32_t head, tail;
-
-	nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
-			     &head, &tail, rxq->rq);
-
-	return nix_offset_has_packet(head, tail, offset);
-}
-
 int
 otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..8debebc96e 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1296,21 +1296,6 @@ sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 	return sap->dp_rx->qdesc_npending(rxq_info->dp);
 }
 
-/*
- * The function is used by the secondary process as well. It must not
- * use any process-local pointers from the adapter data.
- */
-static int
-sfc_rx_descriptor_done(void *queue, uint16_t offset)
-{
-	struct sfc_dp_rxq *dp_rxq = queue;
-	const struct sfc_dp_rx *dp_rx;
-
-	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
-
-	return offset < dp_rx->qdesc_npending(dp_rxq);
-}
-
 /*
  * The function is used by the secondary process as well. It must not
  * use any process-local pointers from the adapter data.
@@ -2045,7 +2030,6 @@ sfc_eth_dev_set_ops(struct rte_eth_dev *dev)
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 
 	dev->rx_queue_count = sfc_rx_queue_count;
-	dev->rx_descriptor_done = sfc_rx_descriptor_done;
 	dev->rx_descriptor_status = sfc_rx_descriptor_status;
 	dev->tx_descriptor_status = sfc_tx_descriptor_status;
 	dev->dev_ops = &sfc_eth_dev_ops;
@@ -2153,7 +2137,6 @@ sfc_eth_dev_secondary_init(struct rte_eth_dev *dev, uint32_t logtype_main)
 	dev->tx_pkt_prepare = dp_tx->pkt_prepare;
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 	dev->rx_queue_count = sfc_rx_queue_count;
-	dev->rx_descriptor_done = sfc_rx_descriptor_done;
 	dev->rx_descriptor_status = sfc_rx_descriptor_status;
 	dev->tx_descriptor_status = sfc_tx_descriptor_status;
 	dev->dev_ops = &sfc_eth_dev_secondary_ops;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b60eeb24ab..7032189869 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1907,7 +1907,6 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 	}
 
 	eth_dev->dev_ops = &virtio_eth_dev_ops;
-	eth_dev->rx_descriptor_done = virtio_dev_rx_queue_done;
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		set_rxtx_funcs(eth_dev);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca9242..ed37f8871b 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -588,7 +588,6 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	eth_dev->tx_pkt_burst = NULL;
 	eth_dev->tx_pkt_prepare = NULL;
 	eth_dev->rx_queue_count = NULL;
-	eth_dev->rx_descriptor_done = NULL;
 	eth_dev->rx_descriptor_status = NULL;
 	eth_dev->tx_descriptor_status = NULL;
 	eth_dev->dev_ops = NULL;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674..0bff526819 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5063,31 +5063,6 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	return (int)(*dev->rx_queue_count)(dev, queue_id);
 }
 
-/**
- * Check if the DD bit of the specific RX descriptor in the queue has been set
- *
- * @param port_id
- *  The port identifier of the Ethernet device.
- * @param queue_id
- *  The queue id on the specific port.
- * @param offset
- *  The offset of the descriptor ID from tail.
- * @return
- *  - (1) if the specific DD bit is set.
- *  - (0) if the specific DD bit is not set.
- *  - (-ENODEV) if *port_id* invalid.
- *  - (-ENOTSUP) if the device does not support this function
- */
-__rte_deprecated
-static inline int
-rte_eth_rx_descriptor_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_done, -ENOTSUP);
-	return (*dev->rx_descriptor_done)(dev->data->rx_queues[queue_id], offset);
-}
-
 /**@{@name Rx hardware descriptor states
  * @see rte_eth_rx_descriptor_status
  */
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d2c9ec42c7..2296872888 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -45,9 +45,6 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
 					 uint16_t rx_queue_id);
 /**< @internal Get number of used descriptors on a receive queue. */
 
-typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
-/**< @internal Check DD bit of specific RX descriptor */
-
 typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 /**< @internal Check the status of a Rx descriptor */
 
@@ -85,7 +82,6 @@ struct rte_eth_dev {
 	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
 
 	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_done_t   rx_descriptor_done;   /**< Check rxd DD bit. */
 	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
 	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
 
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 1/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
@ 2021-10-07 11:27         ` Konstantin Ananyev
  2021-10-11  9:20           ` Andrew Rybchenko
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
                           ` (6 subsequent siblings)
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

At the queue configure stage, always allocate space for the maximum
possible number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
That allows the 'fast' inline functions (eth_rx_burst, etc.) to fetch
the pointer to the internal queue data without extra checks against the
current number of configured queues.
It will also help to hide rte_eth_dev and related structures in future.
It means that from now on each ethdev port will always consume:
((2 * sizeof(uintptr_t)) * RTE_MAX_QUEUES_PER_PORT)
bytes of memory for its queue pointers.
With RTE_MAX_QUEUES_PER_PORT == 1024 (the default value) that is 16KB
per port.
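
For illustration, a minimal sketch of the allocation pattern described
above; the helper name example_alloc_queue_array() is hypothetical and
not part of the patch:

#include <stdint.h>
#include <rte_config.h>
#include <rte_malloc.h>

/* Reserve the queue-pointer array once, at its maximum size, so that
 * fast-path code can index it without re-checking the currently
 * configured queue count.
 * Per port: Rx + Tx arrays = 2 * sizeof(void *) * RTE_MAX_QUEUES_PER_PORT,
 * i.e. 2 * 8 * 1024 = 16KB with the default config on 64-bit systems.
 */
static void **
example_alloc_queue_array(void)
{
	return rte_zmalloc("example_queues",
			sizeof(void *) * RTE_MAX_QUEUES_PER_PORT,
			RTE_CACHE_LINE_SIZE);
}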

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
 1 file changed, 9 insertions(+), 27 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index ed37f8871b..c8abda6dd7 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
-				sizeof(dev->data->rx_queues[0]) * nb_queues,
+				sizeof(dev->data->rx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
 				RTE_CACHE_LINE_SIZE);
 		if (dev->data->rx_queues == NULL) {
 			dev->data->nb_rx_queues = 0;
@@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		rxq = dev->data->rx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->rx_queue_release)(rxq[i]);
-		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE);
-		if (rxq == NULL)
-			return -(ENOMEM);
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(rxq + old_nb_queues, 0,
-				sizeof(rxq[0]) * new_qs);
+			rxq[i] = NULL;
 		}
 
-		dev->data->rx_queues = rxq;
-
 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
@@ -1137,8 +1128,9 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
-						   sizeof(dev->data->tx_queues[0]) * nb_queues,
-						   RTE_CACHE_LINE_SIZE);
+				sizeof(dev->data->tx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
+				RTE_CACHE_LINE_SIZE);
 		if (dev->data->tx_queues == NULL) {
 			dev->data->nb_tx_queues = 0;
 			return -(ENOMEM);
@@ -1148,21 +1140,11 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		txq = dev->data->tx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->tx_queue_release)(txq[i]);
-		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
-				  RTE_CACHE_LINE_SIZE);
-		if (txq == NULL)
-			return -ENOMEM;
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(txq + old_nb_queues, 0,
-			       sizeof(txq[0]) * new_qs);
+			txq[i] = NULL;
 		}
 
-		dev->data->tx_queues = txq;
-
 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
 
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 1/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
@ 2021-10-07 11:27         ` Konstantin Ananyev
  2021-10-11  8:06           ` Andrew Rybchenko
  2021-10-12 17:59           ` Hyong Youb Kim (hyonkim)
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
                           ` (5 subsequent siblings)
  8 siblings, 2 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Currently the majority of fast-path ethdev ops take a pointer to the
internal queue data structure as an input parameter, while
eth_rx_queue_count() takes a pointer to rte_eth_dev and a queue index.
For the future work to hide rte_eth_devices[] and friends, it is
desirable to unify the parameter list of all fast-path ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to the
internal queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the public ethdev inline
function rte_eth_rx_queue_count().
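
To make the shape of the change concrete, a hedged sketch of a converted
callback in a hypothetical PMD follows; the 'example_' struct and field
names are invented for illustration and do not belong to any real driver:

#include <stdint.h>

struct example_rx_queue {
	uint16_t prod;	/* producer index advanced by the HW/driver */
	uint16_t cons;	/* consumer index advanced by rx_burst */
};

/* Old shape: uint32_t count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 * New shape: the ethdev layer resolves the queue and passes it directly.
 */
static uint32_t
example_rx_queue_count(void *rx_queue)
{
	struct example_rx_queue *rxq = rx_queue;

	/* number of filled descriptors, modulo-2^16 arithmetic */
	return (uint16_t)(rxq->prod - rxq->cons);
}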

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst  |  6 ++++++
 drivers/net/ark/ark_ethdev_rx.c         |  4 ++--
 drivers/net/ark/ark_ethdev_rx.h         |  3 +--
 drivers/net/atlantic/atl_ethdev.h       |  2 +-
 drivers/net/atlantic/atl_rxtx.c         |  9 ++-------
 drivers/net/bnxt/bnxt_ethdev.c          |  8 +++++---
 drivers/net/dpaa/dpaa_ethdev.c          |  9 ++++-----
 drivers/net/dpaa2/dpaa2_ethdev.c        |  9 ++++-----
 drivers/net/e1000/e1000_ethdev.h        |  6 ++----
 drivers/net/e1000/em_rxtx.c             |  4 ++--
 drivers/net/e1000/igb_rxtx.c            |  4 ++--
 drivers/net/enic/enic_ethdev.c          | 12 ++++++------
 drivers/net/fm10k/fm10k.h               |  2 +-
 drivers/net/fm10k/fm10k_rxtx.c          |  4 ++--
 drivers/net/hns3/hns3_rxtx.c            |  7 +++++--
 drivers/net/hns3/hns3_rxtx.h            |  2 +-
 drivers/net/i40e/i40e_rxtx.c            |  4 ++--
 drivers/net/i40e/i40e_rxtx.h            |  3 +--
 drivers/net/iavf/iavf_rxtx.c            |  4 ++--
 drivers/net/iavf/iavf_rxtx.h            |  2 +-
 drivers/net/ice/ice_rxtx.c              |  4 ++--
 drivers/net/ice/ice_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c              |  5 ++---
 drivers/net/igc/igc_txrx.h              |  3 +--
 drivers/net/ixgbe/ixgbe_ethdev.h        |  3 +--
 drivers/net/ixgbe/ixgbe_rxtx.c          |  4 ++--
 drivers/net/mlx5/mlx5_rx.c              | 26 ++++++++++++-------------
 drivers/net/mlx5/mlx5_rx.h              |  2 +-
 drivers/net/netvsc/hn_rxtx.c            |  4 ++--
 drivers/net/netvsc/hn_var.h             |  2 +-
 drivers/net/nfp/nfp_rxtx.c              |  4 ++--
 drivers/net/nfp/nfp_rxtx.h              |  3 +--
 drivers/net/octeontx2/otx2_ethdev.h     |  2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c |  8 ++++----
 drivers/net/sfc/sfc_ethdev.c            | 12 ++++++------
 drivers/net/thunderx/nicvf_ethdev.c     |  3 +--
 drivers/net/thunderx/nicvf_rxtx.c       |  4 ++--
 drivers/net/thunderx/nicvf_rxtx.h       |  2 +-
 drivers/net/txgbe/txgbe_ethdev.h        |  3 +--
 drivers/net/txgbe/txgbe_rxtx.c          |  4 ++--
 drivers/net/vhost/rte_eth_vhost.c       |  4 ++--
 lib/ethdev/rte_ethdev.h                 |  2 +-
 lib/ethdev/rte_ethdev_core.h            |  3 +--
 43 files changed, 103 insertions(+), 110 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 436d29afda..ca5d169598 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -226,6 +226,12 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* ethdev: Input parameters for ``eth_rx_queue_count_t`` was changed.
+  Instead of pointer to ``rte_eth_dev`` and queue index, now it accepts pointer
+  to internal queue data as input parameter. While this change is transparent
+  to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
+  is used by  public inline function ``rte_eth_rx_queue_count``.
+
 
 Known Issues
 ------------
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
 }
 
 uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
 {
 	struct ark_rx_queue *queue;
 
-	queue = dev->data->rx_queues[queue_id];
+	queue = rx_queue;
 	return (queue->prod_index - queue->cons_index);	/* mod arith */
 }
 
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			       unsigned int socket_id,
 			       const struct rte_eth_rxconf *rx_conf,
 			       struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
-				    uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
 int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c..e808460520 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
 
 int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..35bb13044e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 /* Return Rx queue avail count */
 
 uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
 {
 	struct atl_rx_queue *rxq;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
-		return 0;
-	}
-
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 
 	if (rxq == NULL)
 		return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85..72c3d4f0fc 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3150,20 +3150,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
 }
 
 static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
 {
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt *bp;
 	struct bnxt_cp_ring_info *cpr;
 	uint32_t desc = 0, raw_cons, cp_ring_size;
 	struct bnxt_rx_queue *rxq;
 	struct rx_pkt_cmpl *rxcmp;
 	int rc;
 
+	rxq = rx_queue;
+	bp = rxq->bp;
+
 	rc = is_bnxt_in_error(bp);
 	if (rc)
 		return rc;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
 	cpr = rxq->cp_ring;
 	raw_cons = cpr->cp_raw_cons;
 	cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..b5589300c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 }
 
 static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-	struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+	struct qman_fq *rxq = rx_queue;
 	u32 frm_cnt = 0;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
-		DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
-			       rx_queue_id, frm_cnt);
+		DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+			       rx_queue, frm_cnt);
 	}
 	return frm_cnt;
 }
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 275656fbe4..43d46b595e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1013,10 +1013,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused)
 }
 
 static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
 {
 	int32_t ret;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct qbman_swp *swp;
 	struct qbman_fq_query_np_rslt state;
@@ -1033,12 +1032,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = rx_queue;
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
-				rx_queue_id, frame_cnt);
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+				rx_queue, frame_cnt);
 	}
 	return frame_cnt;
 }
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index c01e3ee9c5..fff52958df 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
 
 int eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset);
@@ -474,8 +473,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
 
 int eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 048b9148ed..13ea3a77f4 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
 {
 #define EM_RXQ_SCAN_INTERVAL 4
 	volatile struct e1000_rx_desc *rxdp;
 	struct em_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 46a7789d90..0ee1b8d48d 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
 {
 #define IGB_RXQ_SCAN_INTERVAL 4
 	volatile union e1000_adv_rx_desc *rxdp;
 	struct igb_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..5b2d60ad9c 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq)
 	enic_free_rq(rxq);
 }
 
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
-					   uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
 {
-	struct enic *enic = pmd_priv(dev);
+	struct enic *enic;
+	struct vnic_rq *sop_rq;
 	uint32_t queue_count = 0;
 	struct vnic_cq *cq;
 	uint32_t cq_tail;
 	uint16_t cq_idx;
-	int rq_num;
 
-	rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
-	cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+	sop_rq = rx_queue;
+	enic = vnic_dev_priv(sop_rq->vdev);
+	cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
 	cq_idx = cq->to_clean;
 
 	cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 2e47ada829..17c73c4dc5 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
 
 int
 fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index d9833505d1..b3515ae96a 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 }
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
 {
 #define FM10K_RXQ_SCAN_INTERVAL 4
 	volatile union fm10k_rx_desc *rxdp;
 	struct fm10k_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->hw_ring[rxq->next_dd];
 	while ((desc < rxq->nb_desc) &&
 		rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..04791ae7d0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 }
 
 uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
 {
 	/*
 	 * Number of BDs that have been processed by the driver
@@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	 */
 	uint32_t driver_hold_bd_num;
 	struct hns3_rx_queue *rxq;
+	const struct rte_eth_dev *dev;
 	uint32_t fbd_num;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
+	dev = &rte_eth_devices[rxq->port_id];
+
 	fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
 	if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
 	    dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..34a028701f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			struct rte_mempool *mp);
 int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
 int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 0fd9fef8e0..f5bebde302 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2107,14 +2107,14 @@ i40e_dev_rx_queue_release(void *rxq)
 }
 
 uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
 {
 #define I40E_RXQ_SCAN_INTERVAL 4
 	volatile union i40e_rx_desc *rxdp;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 	while ((desc < rxq->nb_rx_desc) &&
 		((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 842924bce5..d495a741b6 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
-				 uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 87afc0b4cb..3dc1f04380 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2799,14 +2799,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 
 /* Get the number of used descriptors of a rx queue */
 uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
 {
 #define IAVF_RXQ_SCAN_INTERVAL 4
 	volatile union iavf_rx_desc *rxdp;
 	struct iavf_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..2f7bec2b63 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_rxq_info *qinfo);
 void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
 int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 83fb788e69..c92fc5053b 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1444,14 +1444,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
 {
 #define ICE_RXQ_SCAN_INTERVAL 4
 	volatile union ice_rx_flex_desc *rxdp;
 	struct ice_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 	while ((desc < rxq->nb_rx_desc) &&
 	       rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index eef76ffdc5..8d078e0edc 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -226,7 +226,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 void ice_set_tx_function_flag(struct rte_eth_dev *dev,
 			      struct ice_tx_queue *txq);
 void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 3979dca660..2498cfd290 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq)
 		igc_rx_queue_release(rxq);
 }
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
 {
 	/**
 	 * Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
 	struct igc_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index d6f3799639..3b4c7450cd 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
 
 int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 37976902a1..6b7a4079db 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -602,8 +602,7 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
 
 int ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0af9ce8aee..8e056db761 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define IXGBE_RXQ_SCAN_INTERVAL 4
 	volatile union ixgbe_adv_rx_desc *rxdp;
 	struct ixgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 /**
  * DPDK callback to get the number of used descriptors in a RX queue.
  *
- * @param dev
- *   Pointer to the device structure.
- *
- * @param rx_queue_id
- *   The Rx queue.
+ * @param rx_queue
+ *   The Rx queue pointer.
  *
  * @return
  *   The number of used rx descriptor.
  *   -EINVAL if the queue is invalid
  */
 uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_data *rxq = rx_queue;
+	struct rte_eth_dev *dev;
+
+	if (!rxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	dev = &rte_eth_devices[rxq->port_id];
 
 	if (dev->rx_pkt_burst == NULL ||
 	    dev->rx_pkt_burst == removed_rx_burst) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	rxq = (*priv->rxqs)[rx_queue_id];
-	if (!rxq) {
-		rte_errno = EINVAL;
-		return -rte_errno;
-	}
+
 	return rx_queue_count(rxq);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..5e4ac7324d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
 uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
 			  uint16_t pkts_n);
 int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
 void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		       struct rte_eth_rxq_info *qinfo);
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..30aac371c8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg)
  * For this device that means how many packets are pending in the ring.
  */
 uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
 {
-	struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+	struct hn_rx_queue *rxq = rx_queue;
 
 	return rte_ring_count(rxq->rx_ring);
 }
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 43642408bc..2a2bac9338 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int	hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 void	hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_rxq_info *qinfo);
 void	hn_dev_rx_queue_release(void *arg);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
 int	hn_dev_rx_queue_status(void *rxq, uint16_t offset);
 void	hn_dev_free_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..4b2ac4cc43 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 }
 
 uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
 	uint32_t idx;
 	uint32_t count;
 
-	rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index b0a8bf81b0..0fd50a6c22 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
 } __rte_aligned(64);
 
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
-				       uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 void nfp_net_rx_queue_release(void *rxq);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 90bafcea8e..d28fcaa281 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5cb3905b64..3a763f691b 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
 }
 
 uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
 {
-	struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
-	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_eth_rxq *rxq = rx_queue;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
 	uint32_t head, tail;
 
-	nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+	nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
 	return (tail - head) % rxq->qlen;
 }
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 8debebc96e..c9b01480f8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
 {
-	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
-	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
-	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_dp_rxq *dp_rxq = rx_queue;
+	const struct sfc_dp_rx *dp_rx;
 	struct sfc_rxq_info *rxq_info;
 
-	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+	rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
 
-	return sap->dp_rx->qdesc_npending(rxq_info->dp);
+	return dp_rx->qdesc_npending(dp_rxq);
 }
 
 /*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..0e87620e42 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
 	if (dev->rx_pkt_burst == NULL)
 		return;
 
-	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
-				nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
 		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
 					NICVF_MAX_RX_FREE_THRESH);
 		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
 }
 
 uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
 {
 	struct nicvf_rxq *rxq;
 
-	rxq = dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
 
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
 	*(uint64_t *)(&pkt->rearm_data) = init.value;
 }
 
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
 uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..569cd6a48f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -446,8 +446,7 @@ int  txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
 
 int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..2a7cfdeedb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define TXGBE_RXQ_SCAN_INTERVAL 4
 	volatile struct txgbe_rx_desc *rxdp;
 	struct txgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..f2b3f142d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
 {
 	struct vhost_queue *vq;
 
-	vq = dev->data->rx_queues[rx_queue_id];
+	vq = rx_queue;
 	if (vq == NULL)
 		return 0;
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 0bff526819..cdd16d6e57 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5060,7 +5060,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	    dev->data->rx_queues[queue_id] == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev, queue_id);
+	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
 }
 
 /**@{@name Rx hardware descriptor states
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 2296872888..51cd68de94 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
 
 
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
-					 uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */
 
 typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
                           ` (2 preceding siblings ...)
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-07 11:27         ` Konstantin Ananyev
  2021-10-09 12:05           ` fengchengwen
  2021-10-11  8:25           ` Andrew Rybchenko
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
                           ` (4 subsequent siblings)
  8 siblings, 2 replies; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Copy the public function pointers (rx_pkt_burst(), etc.) and the related
pointers to internal data from the rte_eth_dev structure into a
separate flat array. That array will remain in a public header.
The intention here is to make rte_eth_dev and related structures internal.
That should allow possible future changes to the core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
The plan is to keep a minimal part of the rte_eth_dev data public,
so we can still use inline functions for fast-path calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
The whole idea behind this new scheme:
1. PMDs keep setting up fast-path function pointers and related data
   inside the rte_eth_dev struct in the same way they did before.
2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
   (for the secondary process) we call eth_dev_fp_ops_setup(), which
   copies these function and data pointers into rte_eth_fp_ops[port_id].
3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
   we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
   to dummy values.
4. The fast-path ethdev API (rte_eth_rx_burst(), etc.) uses that new
   flat array to call the PMD-specific functions.
That approach should allow us to make rte_eth_devices[] private without
introducing regressions and help to avoid changes in driver code.
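
A rough, simplified sketch of that scheme (the authoritative definitions
are in the rte_ethdev_core.h and ethdev_private.c hunks below; the
'example_' names here are invented for illustration only):

#include <rte_ethdev.h>

/* One cache-aligned entry per port holding the fast-path pointers. */
struct example_fp_ops {
	eth_rx_burst_t rx_pkt_burst;	/* PMD receive function */
	void **rxq_data;		/* per-queue opaque data pointers */
} __rte_cache_aligned;

static struct example_fp_ops example_fp_ops[RTE_MAX_ETHPORTS];

/* Step 2 above: publish the pointers the PMD installed in rte_eth_dev. */
static void
example_fp_ops_setup(uint16_t port_id, const struct rte_eth_dev *dev)
{
	example_fp_ops[port_id].rx_pkt_burst = dev->rx_pkt_burst;
	example_fp_ops[port_id].rxq_data = dev->data->rx_queues;
}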

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_private.h  |  7 +++++
 lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
 4 files changed, 141 insertions(+)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..3eeda6e9f9 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
 		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
 	return str == NULL ? -1 : 0;
 }
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+		__rte_unused struct rte_mbuf **rx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_eth_fp_ops dummy_ops = {
+		.rx_pkt_burst = dummy_eth_rx_burst,
+		.tx_pkt_burst = dummy_eth_tx_burst,
+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
+		.txq = {.data = dummy_data, .clbk = dummy_data,},
+	};
+
+	*fpo = dummy_ops;
+}
+
+void
+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev)
+{
+	fpo->rx_pkt_burst = dev->rx_pkt_burst;
+	fpo->tx_pkt_burst = dev->tx_pkt_burst;
+	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
+	fpo->rx_queue_count = dev->rx_queue_count;
+	fpo->rx_descriptor_status = dev->rx_descriptor_status;
+	fpo->tx_descriptor_status = dev->tx_descriptor_status;
+
+	fpo->rxq.data = dev->data->rx_queues;
+	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+	fpo->txq.data = dev->data->tx_queues;
+	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 3724429577..5721be7bdc 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
 /* Parse devargs value for representor parameter. */
 int rte_eth_devargs_parse_representor_ports(char *str, void *data);
 
+/* reset eth fast-path API to dummy values */
+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
+
+/* setup eth fast-path API to ethdev values */
+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev);
+
 #endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index c8abda6dd7..9f7a0cbb8c 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
+/* public fast-path API */
+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 /* spinlock for eth device callbacks */
 static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 		rte_eth_dev_callback_process(eth_dev,
 				RTE_ETH_EVENT_DESTROY, NULL);
 
+	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
+
 	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
 
 	eth_dev->state = RTE_ETH_DEV_UNUSED;
@@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 
+	/* expose selection of PMD fast-path functions */
+	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
+
 	rte_ethdev_trace_start(port_id);
 	return 0;
 }
@@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
 		return 0;
 	}
 
+	/* point fast-path functions to dummy ones */
+	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
+
 	dev->data->dev_started = 0;
 	ret = (*dev->dev_ops->dev_stop)(dev);
 	rte_ethdev_trace_stop(port_id, ret);
@@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
 	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
 }
 
+RTE_INIT(eth_dev_init_fp_ops)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
+		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
+}
+
 RTE_INIT(eth_dev_init_cb_lists)
 {
 	uint16_t i;
@@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return;
 
+	/*
+	 * for secondary process, at that point we expect device
+	 * to be already 'usable', so shared data and all function pointers
+	 * for fast-path devops have to be setup properly inside rte_eth_dev.
+	 */
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+
 	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
 
 	dev->state = RTE_ETH_DEV_ATTACHED;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 51cd68de94..d5853dff86 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 /**< @internal Check the status of a Tx descriptor */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev Rx/Tx
+ * queues data.
+ * The main purpose to expose these pointers at all - allow compiler
+ * to fetch this data for fast-path ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+	void **data;
+	/**< points to array of internal queue data pointers */
+	void **clbk;
+	/**< points to array of queue callback data pointers */
+};
+
+/**
+ * @internal
+ * fast-path ethdev functions and related data are hold in a flat array.
+ * One entry per ethdev.
+ * On 64-bit systems contents of this structure occupy exactly two 64B lines.
+ * On 32-bit systems contents of this structure fits into one 64B line.
+ */
+struct rte_eth_fp_ops {
+
+	/**
+	 * Rx fast-path functions and related data.
+	 * 64-bit systems: occupies first 64B line
+	 */
+	eth_rx_burst_t rx_pkt_burst;
+	/**< PMD receive function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	struct rte_ethdev_qdata rxq;
+	/**< Rx queues data. */
+	uintptr_t reserved1[3];
+
+	/**
+	 * Tx fast-path functions and related data.
+	 * 64-bit systems: occupies second 64B line
+	 */
+	eth_tx_burst_t tx_pkt_burst;
+	/**< PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< PMD transmit prepare function. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+	struct rte_ethdev_qdata txq;
+	/**< Tx queues data. */
+	uintptr_t reserved2[3];
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 
 /**
  * @internal
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
                           ` (3 preceding siblings ...)
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
@ 2021-10-07 11:27         ` Konstantin Ananyev
  2021-10-11  9:02           ` Andrew Rybchenko
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 6/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
                           ` (3 subsequent siblings)
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Rework the fast-path ethdev functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in the user app are required)
and PMD developers (no changes in the PMDs are required).
One extra thing to note: with these changes, RX/TX callback invocation
involves an extra function call. That might cause some insignificant
slowdown on code paths where RX/TX callbacks are heavily used.
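
Since the point is that applications need no changes, a minimal usage
sketch follows; it compiles and behaves identically before and after
this patch (the port/queue are assumed to be configured and started
elsewhere):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SZ 32

static void
example_poll_once(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb_rx, nb_tx;

	/* Internally this now goes through rte_eth_fp_ops[port_id],
	 * but the application-visible API is unchanged.
	 */
	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SZ);
	nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_rx);

	/* Free any packets the Tx queue could not accept. */
	while (nb_tx < nb_rx)
		rte_pktmbuf_free(pkts[nb_tx++]);
}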

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c |  31 +++++
 lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
 lib/ethdev/version.map      |   3 +
 3 files changed, 208 insertions(+), 68 deletions(-)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 3eeda6e9f9..1222c6f84e 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->txq.data = dev->data->tx_queues;
 	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
 }
+
+uint16_t
+rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+	void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+		cb = cb->next;
+	}
+
+	return nb_rx;
+}
+
+uint16_t
+rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+				cb->param);
+		cb = cb->next;
+	}
+
+	return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index cdd16d6e57..c0e1a40681 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
 
 #include <rte_ethdev_core.h>
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *   The address of an array of pointers to *rte_mbuf* structures that
+ *   have been retrieved from the device.
+ * @param nb_rx
+ *   The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *   The number of elements in *rx_pkts* array.
+ * @param opaque
+ *   Opaque pointer of RX queue callback related data.
+ *
+ * @return
+ *  The number of packets effectively supplied to the *rx_pkts* array.
+ */
+uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+		void *opaque);
+
 /**
  *
  * Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4995,23 +5022,37 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	uint16_t nb_rx;
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-				     rx_pkts, nb_pkts);
+
+	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-						nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, rx_pkts,
+				nb_rx, nb_pkts, cb);
 #endif
 
 	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
@@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-	    dev->data->rx_queues[queue_id] == NULL)
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+	if (qd == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+	return (int)(*p->rx_queue_count)(qd);
 }
 
 /**@{@name Rx hardware descriptor states
@@ -5108,21 +5154,30 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
-
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+	return (*p->rx_descriptor_status)(qd, offset);
 }
 
 /**@{@name Tx hardware descriptor states
@@ -5169,23 +5224,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
 	uint16_t queue_id, uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
-
-	return (*dev->tx_descriptor_status)(txq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+	return (*p->tx_descriptor_status)(qd, offset);
 }
 
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called before entry to PMD's rte_eth_tx_burst implementation.
+ * Does necessary pre-processing - invokes TX callbacks if any, etc.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue through which output packets must be
+ *   sent.
+ * @param tx_pkts
+ *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *   which contain the output packets.
+ * @param nb_pkts
+ *   The maximum number of packets to transmit.
+ * @return
+ *   The number of output packets to transmit.
+ */
+uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
 /**
  * Send a burst of output packets on a transmit queue of an Ethernet device.
  *
@@ -5256,20 +5342,34 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5277,21 +5377,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, tx_pkts,
+				nb_pkts, cb);
 #endif
 
-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+	return nb_pkts;
 }
 
 /**
@@ -5354,31 +5449,42 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
 		rte_errno = ENODEV;
 		return 0;
 	}
 #endif
 
-	dev = &rte_eth_devices[port_id];
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 #endif
 
-	if (!dev->tx_pkt_prepare)
+	if (!p->tx_pkt_prepare)
 		return nb_pkts;
 
-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
-			tx_pkts, nb_pkts);
+	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
 }
 
 #else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..79e62dcf61 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -7,6 +7,8 @@ DPDK_22 {
 	rte_eth_allmulticast_disable;
 	rte_eth_allmulticast_enable;
 	rte_eth_allmulticast_get;
+	rte_eth_call_rx_callbacks;
+	rte_eth_call_tx_callbacks;
 	rte_eth_dev_adjust_nb_rx_tx_desc;
 	rte_eth_dev_callback_register;
 	rte_eth_dev_callback_unregister;
@@ -76,6 +78,7 @@ DPDK_22 {
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
 	rte_eth_find_next_sibling;
+	rte_eth_fp_ops;
 	rte_eth_iterator_cleanup;
 	rte_eth_iterator_init;
 	rte_eth_iterator_next;
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v5 6/7] ethdev: add API to retrieve multiple ethernet addresses
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
                           ` (4 preceding siblings ...)
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
@ 2021-10-07 11:27         ` Konstantin Ananyev
  2021-10-11  9:09           ` Andrew Rybchenko
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
                           ` (2 subsequent siblings)
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Introduce rte_eth_macaddrs_get() to allow the user to retrieve all Ethernet
addresses assigned to a given port.
Change testpmd to use this new function and avoid referencing
rte_eth_devices[] directly.
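
A minimal usage sketch of the new call (it mirrors the testpmd change below;
assumes <rte_ethdev.h>, <rte_ether.h> and <stdio.h>, a valid port_id, and an
illustrative function name):

	static int
	dump_port_macs(uint16_t port_id)
	{
		struct rte_eth_dev_info dev_info;
		char buf[RTE_ETHER_ADDR_FMT_SIZE];
		int i, rc;

		rc = rte_eth_dev_info_get(port_id, &dev_info);
		if (rc != 0)
			return rc;

		struct rte_ether_addr addrs[dev_info.max_mac_addrs];

		rc = rte_eth_macaddrs_get(port_id, addrs, dev_info.max_mac_addrs);
		if (rc < 0)
			return rc;

		/* rc holds the number of addresses actually retrieved */
		for (i = 0; i < rc; i++) {
			rte_ether_format_addr(buf, sizeof(buf), &addrs[i]);
			printf("%s\n", buf);
		}
		return 0;
	}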

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test-pmd/config.c                  | 23 +++++++++++------------
 doc/guides/rel_notes/release_21_11.rst |  5 +++++
 lib/ethdev/rte_ethdev.c                | 25 +++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.h                | 21 +++++++++++++++++++++
 lib/ethdev/version.map                 |  3 +++
 5 files changed, 65 insertions(+), 12 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9c66329e96..7221644230 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5215,20 +5215,20 @@ show_macs(portid_t port_id)
 {
 	char buf[RTE_ETHER_ADDR_FMT_SIZE];
 	struct rte_eth_dev_info dev_info;
-	struct rte_ether_addr *addr;
-	uint32_t i, num_macs = 0;
-	struct rte_eth_dev *dev;
-
-	dev = &rte_eth_devices[port_id];
+	int32_t i, rc, num_macs = 0;
 
 	if (eth_dev_info_get_print_err(port_id, &dev_info))
 		return;
 
-	for (i = 0; i < dev_info.max_mac_addrs; i++) {
-		addr = &dev->data->mac_addrs[i];
+	struct rte_ether_addr addr[dev_info.max_mac_addrs];
+	rc = rte_eth_macaddrs_get(port_id, addr, dev_info.max_mac_addrs);
+	if (rc < 0)
+		return;
+
+	for (i = 0; i < rc; i++) {
 
 		/* skip zero address */
-		if (rte_is_zero_ether_addr(addr))
+		if (rte_is_zero_ether_addr(&addr[i]))
 			continue;
 
 		num_macs++;
@@ -5236,14 +5236,13 @@ show_macs(portid_t port_id)
 
 	printf("Number of MAC address added: %d\n", num_macs);
 
-	for (i = 0; i < dev_info.max_mac_addrs; i++) {
-		addr = &dev->data->mac_addrs[i];
+	for (i = 0; i < rc; i++) {
 
 		/* skip zero address */
-		if (rte_is_zero_ether_addr(addr))
+		if (rte_is_zero_ether_addr(&addr[i]))
 			continue;
 
-		rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, addr);
+		rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, &addr[i]);
 		printf("  %s\n", buf);
 	}
 }
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index ca5d169598..0874108b1d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -130,6 +130,11 @@ New Features
   * Added tests to validate packets hard expiry.
   * Added tests to verify tunnel header verification in IPsec inbound.
 
+* **Add new function into ethdev lib.**
+
+  * Added ``rte_eth_macaddrs_get`` to allow the user to retrieve all Ethernet
+    addresses assigned to a given Ethernet port.
+
 
 Removed Items
 -------------
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9f7a0cbb8c..83b57a7524 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3573,6 +3573,31 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	return ret;
 }
 
+int
+rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[], uint32_t num)
+{
+	int32_t ret;
+	struct rte_eth_dev *dev;
+	struct rte_eth_dev_info dev_info;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
+	if (ma == NULL) {
+		RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__);
+		return -EINVAL;
+	}
+
+	num = RTE_MIN(dev_info.max_mac_addrs, num);
+	memcpy(ma, dev->data->mac_addrs, num * sizeof(ma[0]));
+
+	return num;
+}
+
 int
 rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
 {
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index c0e1a40681..77314ecf24 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3037,6 +3037,27 @@ int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
  */
 int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
 
+/**
+ * Retrieve the Ethernet addresses of an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param ma
+ *   A pointer to an array of structures of type *ether_addr* to be filled with
+ *   the Ethernet addresses of the Ethernet device.
+ * @param num
+ *   Number of elements in the *ma* array.
+ *   Note that  rte_eth_dev_info::max_mac_addrs can be used to retrieve
+ *   max number of Ethernet addresses for given port.
+ * @return
+ *   - number of retrieved addresses if successful
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-EINVAL) if bad parameter.
+ */
+__rte_experimental
+int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma,
+	uint32_t num);
+
 /**
  * Retrieve the contextual information of an Ethernet device.
  *
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 79e62dcf61..2bad712958 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -250,6 +250,9 @@ EXPERIMENTAL {
 	rte_mtr_meter_policy_delete;
 	rte_mtr_meter_policy_update;
 	rte_mtr_meter_policy_validate;
+
+	# added in 21.11
+	rte_eth_macaddrs_get;
 };
 
 INTERNAL {
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
                           ` (5 preceding siblings ...)
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 6/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
@ 2021-10-07 11:27         ` Konstantin Ananyev
  2021-10-11  9:20           ` Andrew Rybchenko
  2021-10-08 18:13         ` [dpdk-dev] [PATCH v5 0/7] " Slava Ovsiienko
  2021-10-11  9:22         ` Andrew Rybchenko
  8 siblings, 1 reply; 112+ messages in thread
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into the private header (ethdev_driver.h).
A few minor changes are needed to keep DPDK building after that.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst        |   6 +
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/netvsc/hn_var.h                   |   1 +
 lib/ethdev/ethdev_driver.h                    | 148 ++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
 lib/ethdev/version.map                        |   2 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 lib/metrics/rte_metrics_telemetry.c           |   2 +-
 13 files changed, 164 insertions(+), 152 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 0874108b1d..4c9751bb1d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -237,6 +237,12 @@ ABI Changes
   to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
   is used by  public inline function ``rte_eth_rx_queue_count``.
 
+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+  private data structures. ``rte_eth_devices[]`` can't be accessed directly
+  by the user any more. While it is an ABI breakage, this change is intended
+  to be transparent for both users (no changes in user apps are required) and
+  PMD developers (no changes in PMDs are required).
+
 
 Known Issues
 ------------
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
 
 #include <rte_atomic.h>
 #include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_spinlock.h>
 
 #include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..f0b72e05c2 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
 
 #include <cryptodev_pmd.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_event_crypto_adapter.h>
 
 #include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
 #include <rte_mbuf.h>
 #include <rte_io.h>
 #include <rte_rwlock.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "../cxgbe_compat.h"
 #include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
 #include <unistd.h>
 #include <stdarg.h>
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_eth_ctrl.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2a2bac9338..74e6e6010d 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
  */
 
 #include <rte_eal_paging.h>
+#include <ethdev_driver.h>
 
 /*
  * Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index cc2c75261c..a743553d81 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -17,6 +17,154 @@
 
 #include <rte_ethdev.h>
 
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+	struct rte_eth_rxtx_callback *next;
+	union{
+		rte_rx_callback_fn rx;
+		rte_tx_callback_fn tx;
+	} fn;
+	void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< Pointer to PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+
+	/**
+	 * points to device data that is shared between
+	 * primary and secondary processes.
+	 */
+	struct rte_eth_dev_data *data;
+	void *process_private; /**< Pointer to per-process device data. */
+	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	struct rte_device *device; /**< Backing device */
+	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+	/** User application callbacks for NIC interrupts */
+	struct rte_eth_dev_cb_list link_intr_cbs;
+	/**
+	 * User-supplied functions called from rx_burst to post-process
+	 * received packets before passing them to the user
+	 */
+	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	/**
+	 * User-supplied functions called from tx_burst to pre-process
+	 * received packets before passing them to the driver for transmission.
+	 */
+	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	enum rte_eth_dev_state state; /**< Flag indicating the port state */
+	void *security_ctx; /**< Context for security ops */
+
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+	void **rx_queues; /**< Array of pointers to RX queues. */
+	void **tx_queues; /**< Array of pointers to TX queues. */
+	uint16_t nb_rx_queues; /**< Number of RX queues. */
+	uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
+
+	void *dev_private;
+			/**< PMD-specific private data.
+			 *   @see rte_eth_dev_release_port()
+			 */
+
+	struct rte_eth_link dev_link;   /**< Link-level information & status. */
+	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
+	uint16_t mtu;                   /**< Maximum Transmission Unit. */
+	uint32_t min_rx_buf_size;
+			/**< Common RX buffer size handled by all queues. */
+
+	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+	struct rte_ether_addr *mac_addrs;
+			/**< Device Ethernet link address.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+			/**< Bitmap associating MAC addresses to pools. */
+	struct rte_ether_addr *hash_mac_addrs;
+			/**< Device Ethernet MAC addresses of hash filtering.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint16_t port_id;           /**< Device [external] port identifier. */
+
+	__extension__
+	uint8_t promiscuous   : 1,
+		/**< RX promiscuous mode ON(1) / OFF(0). */
+		scattered_rx : 1,
+		/**< RX of scattered packets is ON(1) / OFF(0) */
+		all_multicast : 1,
+		/**< RX all multicast mode ON(1) / OFF(0). */
+		dev_started : 1,
+		/**< Device state: STARTED(1) / STOPPED(0). */
+		lro         : 1,
+		/**< RX LRO is ON(1) / OFF(0) */
+		dev_configured : 1;
+		/**< Indicates whether the device is configured.
+		 *   CONFIGURED(1) / NOT CONFIGURED(0).
+		 */
+	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint32_t dev_flags;             /**< Capabilities. */
+	int numa_node;                  /**< NUMA node connection. */
+	struct rte_vlan_filter_conf vlan_filter_conf;
+			/**< VLAN filter configuration. */
+	struct rte_eth_dev_owner owner; /**< The port owner. */
+	uint16_t representor_id;
+			/**< Switch-specific identifier.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
+
+	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
 /**< @internal Declaration of the hairpin peer queue information structure. */
 struct rte_hairpin_peer_info;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d5853dff86..d0017bbe05 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -105,147 +105,4 @@ struct rte_eth_fp_ops {
 
 extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
 
-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
-	struct rte_eth_rxtx_callback *next;
-	union{
-		rte_rx_callback_fn rx;
-		rte_tx_callback_fn tx;
-	} fn;
-	void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
-	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
-	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
-	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
-	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
-	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
-	/**
-	 * Next two fields are per-device data but *data is shared between
-	 * primary and secondary processes and *process_private is per-process
-	 * private. The second one is managed by PMDs if necessary.
-	 */
-	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
-	void *process_private; /**< Pointer to per-process device data. */
-	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
-	struct rte_device *device; /**< Backing device */
-	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
-	/** User application callbacks for NIC interrupts */
-	struct rte_eth_dev_cb_list link_intr_cbs;
-	/**
-	 * User-supplied functions called from rx_burst to post-process
-	 * received packets before passing them to the user
-	 */
-	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	/**
-	 * User-supplied functions called from tx_burst to pre-process
-	 * received packets before passing them to the driver for transmission.
-	 */
-	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	enum rte_eth_dev_state state; /**< Flag indicating the port state */
-	void *security_ctx; /**< Context for security ops */
-
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
-	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
-	void **rx_queues; /**< Array of pointers to RX queues. */
-	void **tx_queues; /**< Array of pointers to TX queues. */
-	uint16_t nb_rx_queues; /**< Number of RX queues. */
-	uint16_t nb_tx_queues; /**< Number of TX queues. */
-
-	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
-
-	void *dev_private;
-			/**< PMD-specific private data.
-			 *   @see rte_eth_dev_release_port()
-			 */
-
-	struct rte_eth_link dev_link;   /**< Link-level information & status. */
-	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
-	uint16_t mtu;                   /**< Maximum Transmission Unit. */
-	uint32_t min_rx_buf_size;
-			/**< Common RX buffer size handled by all queues. */
-
-	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
-	struct rte_ether_addr *mac_addrs;
-			/**< Device Ethernet link address.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
-			/**< Bitmap associating MAC addresses to pools. */
-	struct rte_ether_addr *hash_mac_addrs;
-			/**< Device Ethernet MAC addresses of hash filtering.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint16_t port_id;           /**< Device [external] port identifier. */
-
-	__extension__
-	uint8_t promiscuous   : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
-		scattered_rx : 1,  /**< RX of scattered packets is ON(1) / OFF(0) */
-		all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
-		dev_started : 1,   /**< Device state: STARTED(1) / STOPPED(0). */
-		lro         : 1,   /**< RX LRO is ON(1) / OFF(0) */
-		dev_configured : 1;
-		/**< Indicates whether the device is configured.
-		 *   CONFIGURED(1) / NOT CONFIGURED(0).
-		 */
-	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint32_t dev_flags;             /**< Capabilities. */
-	int numa_node;                  /**< NUMA node connection. */
-	struct rte_vlan_filter_conf vlan_filter_conf;
-			/**< VLAN filter configuration. */
-	struct rte_eth_dev_owner owner; /**< The port owner. */
-	uint16_t representor_id;
-			/**< Switch-specific identifier.
-			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
-			 */
-
-	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
 #endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 2bad712958..cfe58e519d 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -73,7 +73,6 @@ DPDK_22 {
 	rte_eth_dev_udp_tunnel_port_add;
 	rte_eth_dev_udp_tunnel_port_delete;
 	rte_eth_dev_vlan_filter;
-	rte_eth_devices;
 	rte_eth_find_next;
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
@@ -270,6 +269,7 @@ INTERNAL {
 	rte_eth_dev_release_port;
 	rte_eth_dev_internal_reset;
 	rte_eth_devargs_parse;
+	rte_eth_devices;
 	rte_eth_dma_zone_free;
 	rte_eth_dma_zone_reserve;
 	rte_eth_hairpin_queue_peer_bind;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
 #include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
  */
 #include <rte_spinlock.h>
 #include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "eventdev_pmd.h"
 #include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
 #include <rte_telemetry.h>
diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef613..5be21b2e86 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_string_fns.h>
 #ifdef RTE_LIB_TELEMETRY
 #include <telemetry_internal.h>
-- 
2.26.3


^ permalink raw reply related	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 0/7] hide eth dev related structures
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
                           ` (6 preceding siblings ...)
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-08 18:13         ` Slava Ovsiienko
  2021-10-11  9:22         ` Andrew Rybchenko
  8 siblings, 0 replies; 112+ messages in thread
From: Slava Ovsiienko @ 2021-10-08 18:13 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	Matan Azrad, sthemmin, NBU-Contact-longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, chenbo.xia, NBU-Contact-Thomas Monjalon,
	ferruh.yigit, mdr, jay.jayatheerthan

Hi,

I've reviewed the series, and it looks good to me.
I see we did not introduce any new indirect referencing on the datapath
(the now-hidden rte_eth_devices[] was just replaced with the new rte_eth_fp_ops[]).
My only concern is that we'll get two places where pointers to the PMD routines
are stored, which means they could potentially get out of sync, but I do not see
an actual scenario where that would happen.
For example, the mlx5 PMD provides multiple tx/rx_burst routines, and the actual
routine selection happens in dev_start(), but it is then not supposed to change
until dev_stop().
Internal PMD checks like this:
"if (dev->rx_pkt_burst == mlx5_rx_burst)"
are supposed to continue working OK as well.
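
To make that concrete, a minimal sketch of the invariant (assuming the internal
<ethdev_driver.h> from patch 7/7 for struct rte_eth_dev, and mlx5_rx_burst being
the PMD-internal symbol from the example above; the function name is illustrative):

	static int
	mlx5_rx_burst_is_active(uint16_t port_id)
	{
		const struct rte_eth_dev *dev = &rte_eth_devices[port_id];
		const struct rte_eth_fp_ops *fpo = &rte_eth_fp_ops[port_id];

		/* rte_eth_fp_ops[] only holds a copy taken at dev_start() time,
		 * so the PMD-owned pointer in rte_eth_dev stays authoritative
		 * and the existing check keeps working.
		 */
		return dev->rx_pkt_burst == mlx5_rx_burst &&
			fpo->rx_pkt_burst == dev->rx_pkt_burst;
	}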

Hence, for series:
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

With best regards,
Slava

> -----Original Message-----
> From: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Sent: Thursday, October 7, 2021 14:28
> To: dev@dpdk.org
> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com;
> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com; Matan
> Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>;
> sthemmin@microsoft.com; NBU-Contact-longli <longli@microsoft.com>;
> heinrich.kuhn@corigine.com; kirankumark@marvell.com;
> andrew.rybchenko@oktetlabs.ru; mczekaj@marvell.com;
> jiawenwu@trustnetic.com; jianwang@trustnetic.com;
> maxime.coquelin@redhat.com; chenbo.xia@intel.com; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com; mdr@ashroe.eu;
> jay.jayatheerthan@intel.com; Konstantin Ananyev
> <konstantin.ananyev@intel.com>
> Subject: [PATCH v5 0/7] hide eth dev related structures
> 
> v5 changes:
> - Fix spelling (Thomas/David)
> - Rename internal helper functions (David)
> - Reorder patches and update commit messages (Thomas)
> - Update comments (Thomas)
> - Changed layout in rte_eth_fp_ops, to group functions and
>    related data based on their functionality:
>    first 64B line for Rx, second one for Tx.
>    Didn't observe any real performance difference comparing to
>    original layout. Though decided to keep a new one, as it seems
>    a bit more plausible.
> 
> v4 changes:
>  - Fix secondary process attach (Pavan)
>  - Fix build failure (Ferruh)
>  - Update lib/ethdev/verion.map (Ferruh)
>    Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
>    section makes checkpatch.sh to complain.
> 
> v3 changes:
>  - Changes in public struct naming (Jerin/Haiyue)
>  - Split patches
>  - Update docs
>  - Shamelessly included Andrew's patch:
>    https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-
> 1-andrew.rybchenko@oktetlabs.ru/
>    into these series.
>    I have to do similar thing here, so decided to avoid duplicated effort.
> 
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to DPDK
> and not visible to the user.
> That should allow future possible changes to core ethdev related structures to
> be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
> 
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
>    related data pointer from rte_eth_dev into a separate flat array.
>    We keep it public to still be able to use inline functions for these
>    'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>    Note that apart from function pointers itself, each element of this
>    flat array also contains two opaque pointers for each ethdev:
>    1) a pointer to an array of internal queue data pointers
>    2)  points to array of queue callback data pointers.
>    Note that exposing this extra information allows us to avoid extra
>    changes inside PMD level, plus should help to avoid possible
>    performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
>    (rte_eth_rx_burst(), etc.) to use new public flat array.
>    While it is an ABI breakage, this change is intended to be transparent
>    for both users (no changes in user app is required) and PMD developers
>    (no changes in PMD is required).
>    One extra note - with new implementation RX/TX callback invocation
>    will cost one extra function call with this changes. That might cause
>    some slowdown for code-path with RX/TX callbacks heavily involved.
>    Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>    things into internal header: <ethdev_driver.h>.
> 
> That approach was selected to:
>   - Avoid(/minimize) possible performance losses.
>   - Minimize required changes inside PMDs.
> 
> Performance testing results (ICX 2.0GHz, E810 (ice)):
>  - testpmd macswap fwd mode, plus
>    a) no RX/TX callbacks:
>       no actual slowdown observed
>    b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
>       ~2% slowdown
>  - l3fwd: no actual slowdown observed
> 
> Would like to thank everyone who already reviewed and tested previous
> versions of these series. All other interested parties please don't be shy and
> provide your feedback.
> 
> Andrew Rybchenko (1):
>   ethdev: remove legacy Rx descriptor done API
> 
> Konstantin Ananyev (6):
>   ethdev: allocate max space for internal queue array
>   ethdev: change input parameters for rx_queue_count
>   ethdev: copy fast-path API into separate structure
>   ethdev: make fast-path functions to use new flat array
>   ethdev: add API to retrieve multiple ethernet addresses
>   ethdev: hide eth dev related structures
> 
>  app/test-pmd/config.c                         |  23 +-
>  doc/guides/nics/features.rst                  |   6 +-
>  doc/guides/rel_notes/deprecation.rst          |   5 -
>  doc/guides/rel_notes/release_21_11.rst        |  21 ++
>  drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
>  drivers/net/ark/ark_ethdev_rx.c               |   4 +-
>  drivers/net/ark/ark_ethdev_rx.h               |   3 +-
>  drivers/net/atlantic/atl_ethdev.h             |   2 +-
>  drivers/net/atlantic/atl_rxtx.c               |   9 +-
>  drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
>  drivers/net/cxgbe/base/adapter.h              |   2 +-
>  drivers/net/dpaa/dpaa_ethdev.c                |   9 +-
>  drivers/net/dpaa2/dpaa2_ethdev.c              |   9 +-
>  drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
>  drivers/net/e1000/e1000_ethdev.h              |  10 +-
>  drivers/net/e1000/em_ethdev.c                 |   1 -
>  drivers/net/e1000/em_rxtx.c                   |  21 +-
>  drivers/net/e1000/igb_ethdev.c                |   2 -
>  drivers/net/e1000/igb_rxtx.c                  |  21 +-
>  drivers/net/enic/enic_ethdev.c                |  12 +-
>  drivers/net/fm10k/fm10k.h                     |   5 +-
>  drivers/net/fm10k/fm10k_ethdev.c              |   1 -
>  drivers/net/fm10k/fm10k_rxtx.c                |  29 +-
>  drivers/net/hns3/hns3_rxtx.c                  |   7 +-
>  drivers/net/hns3/hns3_rxtx.h                  |   2 +-
>  drivers/net/i40e/i40e_ethdev.c                |   1 -
>  drivers/net/i40e/i40e_rxtx.c                  |  30 +-
>  drivers/net/i40e/i40e_rxtx.h                  |   4 +-
>  drivers/net/iavf/iavf_rxtx.c                  |   4 +-
>  drivers/net/iavf/iavf_rxtx.h                  |   2 +-
>  drivers/net/ice/ice_rxtx.c                    |   4 +-
>  drivers/net/ice/ice_rxtx.h                    |   2 +-
>  drivers/net/igc/igc_ethdev.c                  |   1 -
>  drivers/net/igc/igc_txrx.c                    |  23 +-
>  drivers/net/igc/igc_txrx.h                    |   5 +-
>  drivers/net/ixgbe/ixgbe_ethdev.c              |   2 -
>  drivers/net/ixgbe/ixgbe_ethdev.h              |   5 +-
>  drivers/net/ixgbe/ixgbe_rxtx.c                |  22 +-
>  drivers/net/mlx5/mlx5_rx.c                    |  26 +-
>  drivers/net/mlx5/mlx5_rx.h                    |   2 +-
>  drivers/net/netvsc/hn_rxtx.c                  |   4 +-
>  drivers/net/netvsc/hn_var.h                   |   3 +-
>  drivers/net/nfp/nfp_rxtx.c                    |   4 +-
>  drivers/net/nfp/nfp_rxtx.h                    |   3 +-
>  drivers/net/octeontx2/otx2_ethdev.c           |   1 -
>  drivers/net/octeontx2/otx2_ethdev.h           |   3 +-
>  drivers/net/octeontx2/otx2_ethdev_ops.c       |  20 +-
>  drivers/net/sfc/sfc_ethdev.c                  |  29 +-
>  drivers/net/thunderx/nicvf_ethdev.c           |   3 +-
>  drivers/net/thunderx/nicvf_rxtx.c             |   4 +-
>  drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
>  drivers/net/txgbe/txgbe_ethdev.h              |   3 +-
>  drivers/net/txgbe/txgbe_rxtx.c                |   4 +-
>  drivers/net/vhost/rte_eth_vhost.c             |   4 +-
>  drivers/net/virtio/virtio_ethdev.c            |   1 -
>  lib/ethdev/ethdev_driver.h                    | 148 +++++++++
>  lib/ethdev/ethdev_private.c                   |  83 +++++
>  lib/ethdev/ethdev_private.h                   |   7 +
>  lib/ethdev/rte_ethdev.c                       |  89 ++++--
>  lib/ethdev/rte_ethdev.h                       | 288 ++++++++++++------
>  lib/ethdev/rte_ethdev_core.h                  | 171 +++--------
>  lib/ethdev/version.map                        |   8 +-
>  lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
>  lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
>  lib/eventdev/rte_eventdev.c                   |   2 +-
>  lib/metrics/rte_metrics_telemetry.c           |   2 +-
>  67 files changed, 677 insertions(+), 564 deletions(-)
> 
> --
> 2.26.3


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
@ 2021-10-09 12:05           ` fengchengwen
  2021-10-11  1:18             ` fengchengwen
                               ` (2 more replies)
  2021-10-11  8:25           ` Andrew Rybchenko
  1 sibling, 3 replies; 112+ messages in thread
From: fengchengwen @ 2021-10-09 12:05 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan

On 2021/10/7 19:27, Konstantin Ananyev wrote:
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a
> separate flat array. That array will remain in a public header.
> The intention here is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> The plan is to keep minimal part of data from rte_eth_dev public,
> so we still can use inline functions for fast-path calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> The whole idea beyond this new schema:
> 1. PMDs keep to setup fast-path function pointers and related data
>    inside rte_eth_dev struct in the same way they did it before.
> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>    (for secondary process) we call eth_dev_fp_ops_setup, which
>    copies these function and data pointers into rte_eth_fp_ops[port_id].
> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>    into some dummy values.
> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>    flat array to call PMD specific functions.
> That approach should allow us to make rte_eth_devices[] private
> without introducing regression and help to avoid changes in drivers code.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
>  lib/ethdev/ethdev_private.h  |  7 +++++
>  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
>  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>  4 files changed, 141 insertions(+)
> 
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 012cf73ca2..3eeda6e9f9 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>  	return str == NULL ? -1 : 0;
>  }
> +
> +static uint16_t
> +dummy_eth_rx_burst(__rte_unused void *rxq,
> +		__rte_unused struct rte_mbuf **rx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +static uint16_t
> +dummy_eth_tx_burst(__rte_unused void *txq,
> +		__rte_unused struct rte_mbuf **tx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +void
> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)

A port_id parameter would be preferable; that would keep rte_eth_fp_ops hidden as much as possible.

> +{
> +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> +	static const struct rte_eth_fp_ops dummy_ops = {
> +		.rx_pkt_burst = dummy_eth_rx_burst,
> +		.tx_pkt_burst = dummy_eth_tx_burst,
> +		.rxq = {.data = dummy_data, .clbk = dummy_data,},
> +		.txq = {.data = dummy_data, .clbk = dummy_data,},
> +	};
> +
> +	*fpo = dummy_ops;
> +}
> +
> +void
> +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> +		const struct rte_eth_dev *dev)

Because fp_ops and eth_dev have a one-to-one correspondence, it would be better
to use only a port_id parameter, e.g. as sketched below.
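
A sketch of the suggested alternative (not what the patch does; the body just
repeats the copies already made in eth_dev_fp_ops_setup() above):

	void
	eth_dev_fp_ops_setup(uint16_t port_id)
	{
		const struct rte_eth_dev *dev = &rte_eth_devices[port_id];
		struct rte_eth_fp_ops *fpo = &rte_eth_fp_ops[port_id];

		fpo->rx_pkt_burst = dev->rx_pkt_burst;
		fpo->tx_pkt_burst = dev->tx_pkt_burst;
		fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
		fpo->rx_queue_count = dev->rx_queue_count;
		fpo->rx_descriptor_status = dev->rx_descriptor_status;
		fpo->tx_descriptor_status = dev->tx_descriptor_status;

		fpo->rxq.data = dev->data->rx_queues;
		fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
		fpo->txq.data = dev->data->tx_queues;
		fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
	}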

> +{
> +	fpo->rx_pkt_burst = dev->rx_pkt_burst;
> +	fpo->tx_pkt_burst = dev->tx_pkt_burst;
> +	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> +	fpo->rx_queue_count = dev->rx_queue_count;
> +	fpo->rx_descriptor_status = dev->rx_descriptor_status;
> +	fpo->tx_descriptor_status = dev->tx_descriptor_status;
> +
> +	fpo->rxq.data = dev->data->rx_queues;
> +	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> +
> +	fpo->txq.data = dev->data->tx_queues;
> +	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> +}
> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> index 3724429577..5721be7bdc 100644
> --- a/lib/ethdev/ethdev_private.h
> +++ b/lib/ethdev/ethdev_private.h
> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>  /* Parse devargs value for representor parameter. */
>  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>  
> +/* reset eth fast-path API to dummy values */
> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> +
> +/* setup eth fast-path API to ethdev values */
> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> +		const struct rte_eth_dev *dev);

Some drivers change the transmit/receive functions during operation. E.g. in the
hns3 driver, when a reset is detected, the primary process points the rx/tx burst
functions at dummies, and once the reset has been processed it restores the correct
rx/tx burst functions. During this process the send and receive threads keep running,
but the burst functions they call change underneath them. So:
1. it is recommended that the trace/error log be dropped from the dummy functions.
2. make the eth_dev_fp_ops_reset/setup interfaces public for driver usage
   (a rough sketch follows below).
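
A rough sketch of how such a driver reset path could use the helpers if they were
exported to drivers (they are not in this patch); hns3_do_reset, hns3_recv_pkts and
hns3_xmit_pkts are placeholders for the driver's own routines:

	static void
	hns3_reset_service(struct rte_eth_dev *dev)
	{
		uint16_t port_id = dev->data->port_id;

		/* park the fast path on the dummy ops while resetting */
		eth_dev_fp_ops_reset(&rte_eth_fp_ops[port_id]);

		hns3_do_reset(dev);

		/* re-select the real burst functions and publish them again */
		dev->rx_pkt_burst = hns3_recv_pkts;
		dev->tx_pkt_burst = hns3_xmit_pkts;
		eth_dev_fp_ops_setup(&rte_eth_fp_ops[port_id], dev);
	}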

> +
>  #endif /* _ETH_PRIVATE_H_ */
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index c8abda6dd7..9f7a0cbb8c 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -44,6 +44,9 @@
>  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>  
> +/* public fast-path API */
> +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> +
>  /* spinlock for eth device callbacks */
>  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>  
> @@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
>  		rte_eth_dev_callback_process(eth_dev,
>  				RTE_ETH_EVENT_DESTROY, NULL);
>  
> +	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
> +
>  	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
>  
>  	eth_dev->state = RTE_ETH_DEV_UNUSED;
> @@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
>  		(*dev->dev_ops->link_update)(dev, 0);
>  	}
>  
> +	/* expose selection of PMD fast-path functions */
> +	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> +
>  	rte_ethdev_trace_start(port_id);
>  	return 0;
>  }
> @@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
>  		return 0;
>  	}
>  
> +	/* point fast-path functions to dummy ones */
> +	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
> +
>  	dev->data->dev_started = 0;
>  	ret = (*dev->dev_ops->dev_stop)(dev);
>  	rte_ethdev_trace_stop(port_id, ret);
> @@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
>  	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
>  }
>  
> +RTE_INIT(eth_dev_init_fp_ops)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
> +		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
> +}
> +
>  RTE_INIT(eth_dev_init_cb_lists)
>  {
>  	uint16_t i;
> @@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
>  	if (dev == NULL)
>  		return;
>  
> +	/*
> +	 * for secondary process, at that point we expect device
> +	 * to be already 'usable', so shared data and all function pointers
> +	 * for fast-path devops have to be setup properly inside rte_eth_dev.
> +	 */
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> +		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> +
>  	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
>  
>  	dev->state = RTE_ETH_DEV_ATTACHED;
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 51cd68de94..d5853dff86 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>  /**< @internal Check the status of a Tx descriptor */
>  
> +/**
> + * @internal
> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for fast-path ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> +	void **data;
> +	/**< points to array of internal queue data pointers */
> +	void **clbk;
> +	/**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> + * fast-path ethdev functions and related data are hold in a flat array.
> + * One entry per ethdev.
> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
> + * On 32-bit systems contents of this structure fits into one 64B line.
> + */
> +struct rte_eth_fp_ops {
> +
> +	/**
> +	 * Rx fast-path functions and related data.
> +	 * 64-bit systems: occupies first 64B line
> +	 */
> +	eth_rx_burst_t rx_pkt_burst;
> +	/**< PMD receive function. */
> +	eth_rx_queue_count_t rx_queue_count;
> +	/**< Get the number of used RX descriptors. */
> +	eth_rx_descriptor_status_t rx_descriptor_status;
> +	/**< Check the status of a Rx descriptor. */
> +	struct rte_ethdev_qdata rxq;
> +	/**< Rx queues data. */
> +	uintptr_t reserved1[3];
> +
> +	/**
> +	 * Tx fast-path functions and related data.
> +	 * 64-bit systems: occupies second 64B line
> +	 */
> +	eth_tx_burst_t tx_pkt_burst;

Why not place rx_pkt_burst/tx_pkt_burst/rxq/txq on the first cacheline?
The other functions, e.g. rx_queue_count/descriptor_status, are called at low frequency.

> +	/**< PMD transmit function. */
> +	eth_tx_prep_t tx_pkt_prepare;
> +	/**< PMD transmit prepare function. */
> +	eth_tx_descriptor_status_t tx_descriptor_status;
> +	/**< Check the status of a Tx descriptor. */
> +	struct rte_ethdev_qdata txq;
> +	/**< Tx queues data. */
> +	uintptr_t reserved2[3];
> +
> +} __rte_cache_aligned;
> +
> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> +
>  
>  /**
>   * @internal
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-09 12:05           ` fengchengwen
@ 2021-10-11  1:18             ` fengchengwen
  2021-10-11  8:39               ` Andrew Rybchenko
  2021-10-11 15:24               ` Ananyev, Konstantin
  2021-10-11  8:35             ` Andrew Rybchenko
  2021-10-11 15:15             ` Ananyev, Konstantin
  2 siblings, 2 replies; 112+ messages in thread
From: fengchengwen @ 2021-10-11  1:18 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan

Sorry to self-reply.

I think it would be better if 'struct rte_eth_dev' held a pointer to the
'struct rte_eth_fp_ops', e.g.

	struct rte_eth_dev {
		struct rte_eth_fp_ops *fp_ops;
		...  // other field
	}

The ethdev framework would set the pointer in rte_eth_dev_pci_allocate(), and the driver would fill in the
corresponding callbacks:
	dev->fp_ops->rx_pkt_burst = xxx_recv_pkts;
	dev->fp_ops->tx_pkt_burst = xxx_xmit_pkts;
	...

In this way, the behavior of the primary and secondary processes can be unified, and it stays
basically the same as the original flow.
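
A rough sketch of the suggested wiring (the 'fp_ops' member of rte_eth_dev and the xxx_* callbacks
are hypothetical here, not part of the posted patch):

	static int
	xxx_eth_dev_init(struct rte_eth_dev *dev)
	{
		/* assumed to be done by the ethdev layer at allocation time:
		 * point the per-device fp_ops at the public per-port entry
		 */
		dev->fp_ops = &rte_eth_fp_ops[dev->data->port_id];

		/* the PMD fills the callbacks once; identical code path for
		 * primary and secondary processes
		 */
		dev->fp_ops->rx_pkt_burst = xxx_recv_pkts;
		dev->fp_ops->tx_pkt_burst = xxx_xmit_pkts;
		dev->fp_ops->rxq.data = dev->data->rx_queues;
		dev->fp_ops->txq.data = dev->data->tx_queues;
		return 0;
	}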


On 2021/10/9 20:05, fengchengwen wrote:
> On 2021/10/7 19:27, Konstantin Ananyev wrote:
>> Copy public function pointers (rx_pkt_burst(), etc.) and related
>> pointers to internal data from rte_eth_dev structure into a
>> separate flat array. That array will remain in a public header.
>> The intention here is to make rte_eth_dev and related structures internal.
>> That should allow future possible changes to core eth_dev structures
>> to be transparent to the user and help to avoid ABI/API breakages.
>> The plan is to keep minimal part of data from rte_eth_dev public,
>> so we still can use inline functions for fast-path calls
>> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>> The whole idea beyond this new schema:
>> 1. PMDs keep to setup fast-path function pointers and related data
>>    inside rte_eth_dev struct in the same way they did it before.
>> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>>    (for secondary process) we call eth_dev_fp_ops_setup, which
>>    copies these function and data pointers into rte_eth_fp_ops[port_id].
>> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>>    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>>    into some dummy values.
>> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>>    flat array to call PMD specific functions.
>> That approach should allow us to make rte_eth_devices[] private
>> without introducing regression and help to avoid changes in drivers code.
>>
>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
>>  lib/ethdev/ethdev_private.h  |  7 +++++
>>  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
>>  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>>  4 files changed, 141 insertions(+)
>>
>> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
>> index 012cf73ca2..3eeda6e9f9 100644
>> --- a/lib/ethdev/ethdev_private.c
>> +++ b/lib/ethdev/ethdev_private.c
>> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>>  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>>  	return str == NULL ? -1 : 0;
>>  }
>> +
>> +static uint16_t
>> +dummy_eth_rx_burst(__rte_unused void *rxq,
>> +		__rte_unused struct rte_mbuf **rx_pkts,
>> +		__rte_unused uint16_t nb_pkts)
>> +{
>> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
>> +	rte_errno = ENOTSUP;
>> +	return 0;
>> +}
>> +
>> +static uint16_t
>> +dummy_eth_tx_burst(__rte_unused void *txq,
>> +		__rte_unused struct rte_mbuf **tx_pkts,
>> +		__rte_unused uint16_t nb_pkts)
>> +{
>> +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
>> +	rte_errno = ENOTSUP;
>> +	return 0;
>> +}
>> +
>> +void
>> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> 
> The port_id parameter is preferable, this will hide rte_eth_fp_ops as much as possible.
> 
>> +{
>> +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
>> +	static const struct rte_eth_fp_ops dummy_ops = {
>> +		.rx_pkt_burst = dummy_eth_rx_burst,
>> +		.tx_pkt_burst = dummy_eth_tx_burst,
>> +		.rxq = {.data = dummy_data, .clbk = dummy_data,},
>> +		.txq = {.data = dummy_data, .clbk = dummy_data,},
>> +	};
>> +
>> +	*fpo = dummy_ops;
>> +}
>> +
>> +void
>> +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> +		const struct rte_eth_dev *dev)
> 
> Because fp_ops and eth_dev is a one-to-one correspondence. It's better only use
> port_id parameter.
> 
>> +{
>> +	fpo->rx_pkt_burst = dev->rx_pkt_burst;
>> +	fpo->tx_pkt_burst = dev->tx_pkt_burst;
>> +	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
>> +	fpo->rx_queue_count = dev->rx_queue_count;
>> +	fpo->rx_descriptor_status = dev->rx_descriptor_status;
>> +	fpo->tx_descriptor_status = dev->tx_descriptor_status;
>> +
>> +	fpo->rxq.data = dev->data->rx_queues;
>> +	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
>> +
>> +	fpo->txq.data = dev->data->tx_queues;
>> +	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>> +}
>> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
>> index 3724429577..5721be7bdc 100644
>> --- a/lib/ethdev/ethdev_private.h
>> +++ b/lib/ethdev/ethdev_private.h
>> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>>  /* Parse devargs value for representor parameter. */
>>  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>>  
>> +/* reset eth fast-path API to dummy values */
>> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
>> +
>> +/* setup eth fast-path API to ethdev values */
>> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> +		const struct rte_eth_dev *dev);
> 
> Some drivers control the transmit/receive function during operation. E.g.
> for hns3 driver, when detect reset, primary process will set rx/tx burst to dummy, after
> process reset, primary process will set the correct rx/tx burst. During this process, the
> send and receive threads are still working, but the bursts they call are changed. So:
> 1. it is recommended that trace be deleted from the dummy function.
> 2. public the eth_dev_fp_ops_reset/setup interface for driver usage.
> 
>> +
>>  #endif /* _ETH_PRIVATE_H_ */
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index c8abda6dd7..9f7a0cbb8c 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -44,6 +44,9 @@
>>  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>>  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>>  
>> +/* public fast-path API */
>> +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> +
>>  /* spinlock for eth device callbacks */
>>  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>>  
>> @@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
>>  		rte_eth_dev_callback_process(eth_dev,
>>  				RTE_ETH_EVENT_DESTROY, NULL);
>>  
>> +	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
>> +
>>  	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
>>  
>>  	eth_dev->state = RTE_ETH_DEV_UNUSED;
>> @@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
>>  		(*dev->dev_ops->link_update)(dev, 0);
>>  	}
>>  
>> +	/* expose selection of PMD fast-path functions */
>> +	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
>> +
>>  	rte_ethdev_trace_start(port_id);
>>  	return 0;
>>  }
>> @@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
>>  		return 0;
>>  	}
>>  
>> +	/* point fast-path functions to dummy ones */
>> +	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
>> +
>>  	dev->data->dev_started = 0;
>>  	ret = (*dev->dev_ops->dev_stop)(dev);
>>  	rte_ethdev_trace_stop(port_id, ret);
>> @@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
>>  	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
>>  }
>>  
>> +RTE_INIT(eth_dev_init_fp_ops)
>> +{
>> +	uint32_t i;
>> +
>> +	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
>> +		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
>> +}
>> +
>>  RTE_INIT(eth_dev_init_cb_lists)
>>  {
>>  	uint16_t i;
>> @@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
>>  	if (dev == NULL)
>>  		return;
>>  
>> +	/*
>> +	 * for secondary process, at that point we expect device
>> +	 * to be already 'usable', so shared data and all function pointers
>> +	 * for fast-path devops have to be setup properly inside rte_eth_dev.
>> +	 */
>> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
>> +		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
>> +
>>  	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
>>  
>>  	dev->state = RTE_ETH_DEV_ATTACHED;
>> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
>> index 51cd68de94..d5853dff86 100644
>> --- a/lib/ethdev/rte_ethdev_core.h
>> +++ b/lib/ethdev/rte_ethdev_core.h
>> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>>  /**< @internal Check the status of a Tx descriptor */
>>  
>> +/**
>> + * @internal
>> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
>> + * queues data.
>> + * The main purpose to expose these pointers at all - allow compiler
>> + * to fetch this data for fast-path ethdev inline functions in advance.
>> + */
>> +struct rte_ethdev_qdata {
>> +	void **data;
>> +	/**< points to array of internal queue data pointers */
>> +	void **clbk;
>> +	/**< points to array of queue callback data pointers */
>> +};
>> +
>> +/**
>> + * @internal
>> + * fast-path ethdev functions and related data are hold in a flat array.
>> + * One entry per ethdev.
>> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
>> + * On 32-bit systems contents of this structure fits into one 64B line.
>> + */
>> +struct rte_eth_fp_ops {
>> +
>> +	/**
>> +	 * Rx fast-path functions and related data.
>> +	 * 64-bit systems: occupies first 64B line
>> +	 */
>> +	eth_rx_burst_t rx_pkt_burst;
>> +	/**< PMD receive function. */
>> +	eth_rx_queue_count_t rx_queue_count;
>> +	/**< Get the number of used RX descriptors. */
>> +	eth_rx_descriptor_status_t rx_descriptor_status;
>> +	/**< Check the status of a Rx descriptor. */
>> +	struct rte_ethdev_qdata rxq;
>> +	/**< Rx queues data. */
>> +	uintptr_t reserved1[3];
>> +
>> +	/**
>> +	 * Tx fast-path functions and related data.
>> +	 * 64-bit systems: occupies second 64B line
>> +	 */
>> +	eth_tx_burst_t tx_pkt_burst;
> 
> Why not place rx_pkt_burst/tx_pkt_burst/rxq /txq to the first cacheline ?
> Other function, e.g. rx_queue_count/descriptor_status are low frequency call functions.
> 
>> +	/**< PMD transmit function. */
>> +	eth_tx_prep_t tx_pkt_prepare;
>> +	/**< PMD transmit prepare function. */
>> +	eth_tx_descriptor_status_t tx_descriptor_status;
>> +	/**< Check the status of a Tx descriptor. */
>> +	struct rte_ethdev_qdata txq;
>> +	/**< Tx queues data. */
>> +	uintptr_t reserved2[3];
>> +
>> +} __rte_cache_aligned;
>> +
>> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> +
>>  
>>  /**
>>   * @internal
>>
> 
> 
> .
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-11  8:06           ` Andrew Rybchenko
  2021-10-12 17:59           ` Hyong Youb Kim (hyonkim)
  1 sibling, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11  8:06 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Currently majority of fast-path ethdev ops take pointers to internal
> queue data structures as an input parameter.
> While eth_rx_queue_count() takes a pointer to rte_eth_dev and queue
> index.
> For future work to hide rte_eth_devices[] and friends it would be
> plausible to unify parameters list of all fast-path ethdev ops.
> This patch changes eth_rx_queue_count() to accept pointer to internal
> queue data as input parameter.
> While this change is transparent to user, it still counts as an ABI change,
> as eth_rx_queue_count_t is used by ethdev public inline function
> rte_eth_rx_queue_count().
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

The patch introduces a number of usages of rte_eth_devices in
drivers. As I understand it, that is undesirable, but I don't
think it is a blocker for the patch series. It should be
addressed separately.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
  2021-10-09 12:05           ` fengchengwen
@ 2021-10-11  8:25           ` Andrew Rybchenko
  2021-10-11 16:52             ` Ananyev, Konstantin
  1 sibling, 1 reply; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11  8:25 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a
> separate flat array. That array will remain in a public header.
> The intention here is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> The plan is to keep minimal part of data from rte_eth_dev public,
> so we still can use inline functions for fast-path calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> The whole idea beyond this new schema:
> 1. PMDs keep to setup fast-path function pointers and related data
>    inside rte_eth_dev struct in the same way they did it before.
> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>    (for secondary process) we call eth_dev_fp_ops_setup, which
>    copies these function and data pointers into rte_eth_fp_ops[port_id].
> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>    into some dummy values.
> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>    flat array to call PMD specific functions.
> That approach should allow us to make rte_eth_devices[] private
> without introducing regression and help to avoid changes in drivers code.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Overall LGTM, few nits below.

> ---
>  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
>  lib/ethdev/ethdev_private.h  |  7 +++++
>  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
>  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>  4 files changed, 141 insertions(+)
> 
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 012cf73ca2..3eeda6e9f9 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>  	return str == NULL ? -1 : 0;
>  }
> +
> +static uint16_t
> +dummy_eth_rx_burst(__rte_unused void *rxq,
> +		__rte_unused struct rte_mbuf **rx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");

May be "unconfigured" -> "stopped" ? Or "non-started" ?

> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +static uint16_t
> +dummy_eth_tx_burst(__rte_unused void *txq,
> +		__rte_unused struct rte_mbuf **tx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");

May be "unconfigured" -> "stopped" ?

> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +void
> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> +{
> +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> +	static const struct rte_eth_fp_ops dummy_ops = {
> +		.rx_pkt_burst = dummy_eth_rx_burst,
> +		.tx_pkt_burst = dummy_eth_tx_burst,
> +		.rxq = {.data = dummy_data, .clbk = dummy_data,},
> +		.txq = {.data = dummy_data, .clbk = dummy_data,},
> +	};
> +
> +	*fpo = dummy_ops;
> +}
> +
> +void
> +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> +		const struct rte_eth_dev *dev)
> +{
> +	fpo->rx_pkt_burst = dev->rx_pkt_burst;
> +	fpo->tx_pkt_burst = dev->tx_pkt_burst;
> +	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> +	fpo->rx_queue_count = dev->rx_queue_count;
> +	fpo->rx_descriptor_status = dev->rx_descriptor_status;
> +	fpo->tx_descriptor_status = dev->tx_descriptor_status;
> +
> +	fpo->rxq.data = dev->data->rx_queues;
> +	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> +
> +	fpo->txq.data = dev->data->tx_queues;
> +	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> +}
> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> index 3724429577..5721be7bdc 100644
> --- a/lib/ethdev/ethdev_private.h
> +++ b/lib/ethdev/ethdev_private.h
> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>  /* Parse devargs value for representor parameter. */
>  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>  
> +/* reset eth fast-path API to dummy values */
> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> +
> +/* setup eth fast-path API to ethdev values */
> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> +		const struct rte_eth_dev *dev);
> +
>  #endif /* _ETH_PRIVATE_H_ */
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index c8abda6dd7..9f7a0cbb8c 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -44,6 +44,9 @@
>  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>  
> +/* public fast-path API */

Shouldn't it be a doxygen-style comment?

> +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> +
>  /* spinlock for eth device callbacks */
>  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>  

[snip]

> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 51cd68de94..d5853dff86 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>  /**< @internal Check the status of a Tx descriptor */
>  
> +/**
> + * @internal
> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for fast-path ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> +	void **data;
> +	/**< points to array of internal queue data pointers */

Please put the documentation on the same line, or just
put it before the documented member.

> +	void **clbk;
> +	/**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> + * fast-path ethdev functions and related data are hold in a flat array.
> + * One entry per ethdev.
> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
> + * On 32-bit systems contents of this structure fits into one 64B line.
> + */
> +struct rte_eth_fp_ops {
> +
> +	/**
> +	 * Rx fast-path functions and related data.
> +	 * 64-bit systems: occupies first 64B line
> +	 */

As I understand it, the above comment applies to a group of
fields below. If so, the Doxygen annotation for member groups
should be used.

> +	eth_rx_burst_t rx_pkt_burst;
> +	/**< PMD receive function. */

May I ask to avoid putting member documentation on a separate
line after the member? It makes sense if it is located on the
same line, but otherwise it should simply be put before the
documented member.

[snip]

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-09 12:05           ` fengchengwen
  2021-10-11  1:18             ` fengchengwen
@ 2021-10-11  8:35             ` Andrew Rybchenko
  2021-10-11 15:15             ` Ananyev, Konstantin
  2 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11  8:35 UTC (permalink / raw)
  To: fengchengwen, Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/9/21 3:05 PM, fengchengwen wrote:
> On 2021/10/7 19:27, Konstantin Ananyev wrote:
>> Copy public function pointers (rx_pkt_burst(), etc.) and related
>> pointers to internal data from rte_eth_dev structure into a
>> separate flat array. That array will remain in a public header.
>> The intention here is to make rte_eth_dev and related structures internal.
>> That should allow future possible changes to core eth_dev structures
>> to be transparent to the user and help to avoid ABI/API breakages.
>> The plan is to keep minimal part of data from rte_eth_dev public,
>> so we still can use inline functions for fast-path calls
>> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>> The whole idea beyond this new schema:
>> 1. PMDs keep to setup fast-path function pointers and related data
>>    inside rte_eth_dev struct in the same way they did it before.
>> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>>    (for secondary process) we call eth_dev_fp_ops_setup, which
>>    copies these function and data pointers into rte_eth_fp_ops[port_id].
>> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>>    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>>    into some dummy values.
>> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>>    flat array to call PMD specific functions.
>> That approach should allow us to make rte_eth_devices[] private
>> without introducing regression and help to avoid changes in drivers code.
>>
>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
>>  lib/ethdev/ethdev_private.h  |  7 +++++
>>  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
>>  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>>  4 files changed, 141 insertions(+)
>>
>> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
>> index 012cf73ca2..3eeda6e9f9 100644
>> --- a/lib/ethdev/ethdev_private.c
>> +++ b/lib/ethdev/ethdev_private.c
>> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>>  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>>  	return str == NULL ? -1 : 0;
>>  }
>> +
>> +static uint16_t
>> +dummy_eth_rx_burst(__rte_unused void *rxq,
>> +		__rte_unused struct rte_mbuf **rx_pkts,
>> +		__rte_unused uint16_t nb_pkts)
>> +{
>> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
>> +	rte_errno = ENOTSUP;
>> +	return 0;
>> +}
>> +
>> +static uint16_t
>> +dummy_eth_tx_burst(__rte_unused void *txq,
>> +		__rte_unused struct rte_mbuf **tx_pkts,
>> +		__rte_unused uint16_t nb_pkts)
>> +{
>> +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
>> +	rte_errno = ENOTSUP;
>> +	return 0;
>> +}
>> +
>> +void
>> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> 
> The port_id parameter is preferable, this will hide rte_eth_fp_ops as much as possible.

Sorry, but I see no point in hiding it inside ethdev.
Of course, the prototype should be reconsidered if we make
this ethdev-internal API available to drivers.
If so, I agree that the parameter should be port_id.
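
For reference, the driver-facing prototypes could then look along these lines (a sketch under the
assumption that the helpers get exported; names and annotations are not final):

	/* ethdev_driver.h - hypothetical exported variants keyed by port_id */
	__rte_internal
	void rte_eth_fp_ops_reset(uint16_t port_id);

	__rte_internal
	void rte_eth_fp_ops_setup(uint16_t port_id);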

[snip]

>> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
>> index 3724429577..5721be7bdc 100644
>> --- a/lib/ethdev/ethdev_private.h
>> +++ b/lib/ethdev/ethdev_private.h
>> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>>  /* Parse devargs value for representor parameter. */
>>  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>>  
>> +/* reset eth fast-path API to dummy values */
>> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
>> +
>> +/* setup eth fast-path API to ethdev values */
>> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> +		const struct rte_eth_dev *dev);
> 
> Some drivers control the transmit/receive function during operation. E.g.
> for hns3 driver, when detect reset, primary process will set rx/tx burst to dummy, after
> process reset, primary process will set the correct rx/tx burst. During this process, the
> send and receive threads are still working, but the bursts they call are changed. So:
> 1. it is recommended that trace be deleted from the dummy function.
> 2. public the eth_dev_fp_ops_reset/setup interface for driver usage.

Good point.

[snip]

>> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
>> index 51cd68de94..d5853dff86 100644
>> --- a/lib/ethdev/rte_ethdev_core.h
>> +++ b/lib/ethdev/rte_ethdev_core.h
>> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>>  /**< @internal Check the status of a Tx descriptor */
>>  
>> +/**
>> + * @internal
>> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
>> + * queues data.
>> + * The main purpose to expose these pointers at all - allow compiler
>> + * to fetch this data for fast-path ethdev inline functions in advance.
>> + */
>> +struct rte_ethdev_qdata {
>> +	void **data;
>> +	/**< points to array of internal queue data pointers */
>> +	void **clbk;
>> +	/**< points to array of queue callback data pointers */
>> +};
>> +
>> +/**
>> + * @internal
>> + * fast-path ethdev functions and related data are hold in a flat array.
>> + * One entry per ethdev.
>> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
>> + * On 32-bit systems contents of this structure fits into one 64B line.
>> + */
>> +struct rte_eth_fp_ops {
>> +
>> +	/**
>> +	 * Rx fast-path functions and related data.
>> +	 * 64-bit systems: occupies first 64B line
>> +	 */
>> +	eth_rx_burst_t rx_pkt_burst;
>> +	/**< PMD receive function. */
>> +	eth_rx_queue_count_t rx_queue_count;
>> +	/**< Get the number of used RX descriptors. */
>> +	eth_rx_descriptor_status_t rx_descriptor_status;
>> +	/**< Check the status of a Rx descriptor. */
>> +	struct rte_ethdev_qdata rxq;
>> +	/**< Rx queues data. */
>> +	uintptr_t reserved1[3];
>> +
>> +	/**
>> +	 * Tx fast-path functions and related data.
>> +	 * 64-bit systems: occupies second 64B line
>> +	 */
>> +	eth_tx_burst_t tx_pkt_burst;
> 
> Why not place rx_pkt_burst/tx_pkt_burst/rxq /txq to the first cacheline ?
> Other function, e.g. rx_queue_count/descriptor_status are low frequency call functions.

+1, very good question.
If so, tx_pkt_prepare should be on the first cache line
as well; a possible layout is sketched below.
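
One hypothetical layout along these lines (just an illustration, not what the posted patch
implements; on 64-bit it still fits the two 64B lines mentioned in the comment):

	struct rte_eth_fp_ops {
		/* first 64B line: members touched on every burst call */
		eth_rx_burst_t rx_pkt_burst;
		eth_tx_burst_t tx_pkt_burst;
		eth_tx_prep_t tx_pkt_prepare;
		struct rte_ethdev_qdata rxq;
		struct rte_ethdev_qdata txq;
		uintptr_t reserved1[1];

		/* second 64B line: infrequently called helpers */
		eth_rx_queue_count_t rx_queue_count;
		eth_rx_descriptor_status_t rx_descriptor_status;
		eth_tx_descriptor_status_t tx_descriptor_status;
		uintptr_t reserved2[5];
	} __rte_cache_aligned;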

>> +	/**< PMD transmit function. */
>> +	eth_tx_prep_t tx_pkt_prepare;
>> +	/**< PMD transmit prepare function. */
>> +	eth_tx_descriptor_status_t tx_descriptor_status;
>> +	/**< Check the status of a Tx descriptor. */
>> +	struct rte_ethdev_qdata txq;
>> +	/**< Tx queues data. */
>> +	uintptr_t reserved2[3];
>> +
>> +} __rte_cache_aligned;
>> +
>> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> +
>>  
>>  /**
>>   * @internal
>>


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-11  1:18             ` fengchengwen
@ 2021-10-11  8:39               ` Andrew Rybchenko
  2021-10-11 15:24               ` Ananyev, Konstantin
  1 sibling, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11  8:39 UTC (permalink / raw)
  To: fengchengwen, Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/11/21 4:18 AM, fengchengwen wrote:
> Sorry to self-reply.
> 
> I think it's better the 'struct rte_eth_dev *dev' hold a pointer to the
> 'struct rte_eth_fp_ops', e.g.
> 
> 	struct rte_eth_dev {
> 		struct rte_eth_fp_ops *fp_ops;
> 		...  // other field
> 	}
> 
> The eth framework set the pointer in the rte_eth_dev_pci_allocate(), and driver fill
> corresponding callback:
> 	dev->fp_ops->rx_pkt_burst = xxx_recv_pkts;
> 	dev->fp_ops->tx_pkt_burst = xxx_xmit_pkts;
> 	...
> 
> In this way, the behavior of the primary and secondary processes can be unified, which
> is basically the same as that of the original process.

Frankly speaking, I see huge value in the scheme suggested by
Konstantin, which takes care of the ops in the stopped state at the ethdev level.
So I like the approach in the patch more.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
@ 2021-10-11  9:02           ` Andrew Rybchenko
  2021-10-11 15:47             ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11  9:02 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Rework fast-path ethdev functions to use rte_eth_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
> One extra thing to note - RX/TX callback invocation will cause extra
> function call with these changes. That might cause some insignificant
> slowdown for code-path where RX/TX callbacks are heavily involved.

I'm sorry for nitpicking here and below:

RX -> Rx, TX -> Tx everywhere above.

> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/ethdev/ethdev_private.c |  31 +++++
>  lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
>  lib/ethdev/version.map      |   3 +
>  3 files changed, 208 insertions(+), 68 deletions(-)
> 
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 3eeda6e9f9..1222c6f84e 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>  	fpo->txq.data = dev->data->tx_queues;
>  	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>  }
> +
> +uint16_t
> +rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
> +	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> +	void *opaque)
> +{
> +	const struct rte_eth_rxtx_callback *cb = opaque;
> +
> +	while (cb != NULL) {
> +		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> +				nb_pkts, cb->param);
> +		cb = cb->next;
> +	}
> +
> +	return nb_rx;
> +}
> +
> +uint16_t
> +rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
> +	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> +{
> +	const struct rte_eth_rxtx_callback *cb = opaque;
> +
> +	while (cb != NULL) {
> +		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> +				cb->param);
> +		cb = cb->next;
> +	}
> +
> +	return nb_pkts;
> +}
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index cdd16d6e57..c0e1a40681 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
>  
>  #include <rte_ethdev_core.h>
>  
> +/**
> + * @internal
> + * Helper routine for eth driver rx_burst API.

rx -> Rx

> + * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
> + * Does necessary post-processing - invokes RX callbacks if any, etc.

RX -> Rx

> + *
> + * @param port_id
> + *  The port identifier of the Ethernet device.
> + * @param queue_id
> + *  The index of the receive queue from which to retrieve input packets.

Isn't it:
The index of the queue from which packets are received?

> + * @param rx_pkts
> + *   The address of an array of pointers to *rte_mbuf* structures that
> + *   have been retrieved from the device.
> + * @param nb_pkts

Should be @param nb_rx

> + *   The number of packets that were retrieved from the device.
> + * @param nb_pkts
> + *   The number of elements in *rx_pkts* array.

@p should be used to refer to a parameter.

The description does not help to understand why both nb_rx and
nb_pkts are necessary. Isn't nb_pkts >= nb_rx anyway, so that
nb_rx alone would be sufficient?

> + * @param opaque
> + *   Opaque pointer of RX queue callback related data.

RX -> Rx

> + *
> + * @return
> + *  The number of packets effectively supplied to the *rx_pkts* array.

@p should be used to refer to a parameter.

> + */
> +uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
> +		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> +		void *opaque);
> +
>  /**
>   *
>   * Retrieve a burst of input packets from a receive queue of an Ethernet
> @@ -4995,23 +5022,37 @@ static inline uint16_t
>  rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>  		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
>  {
> -	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>  	uint16_t nb_rx;
> +	struct rte_eth_fp_ops *p;

'p' is typically a very bad name in a function with
many pointer variables etc. Maybe "fpo" as in the previous
patch?

> +	void *cb, *qd;

Please avoid declaring multiple variables, especially pointers, on
one line.

I'd suggest using 'rxq' instead of 'qd'. The first parameter
of rx_pkt_burst is 'rxq'.

Also, 'cb' seems to be used under RTE_ETHDEV_RXTX_CALLBACKS
only. If so, it could trigger an unused-variable warning when
RTE_ETHDEV_RXTX_CALLBACKS is not defined.

> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return 0;
> +	}
> +#endif
> +
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->rxq.data[queue_id];
>  
>  #ifdef RTE_ETHDEV_DEBUG_RX
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
>  
> -	if (queue_id >= dev->data->nb_rx_queues) {
> -		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
> +	if (qd == NULL) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",

RX -> Rx

> +			queue_id, port_id);
>  		return 0;
>  	}
>  #endif
> -	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
> -				     rx_pkts, nb_pkts);
> +
> +	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
>  
>  #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> -	struct rte_eth_rxtx_callback *cb;
>  
>  	/* __ATOMIC_RELEASE memory order was used when the
>  	 * call back was inserted into the list.
> @@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>  	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>  	 * not required.
>  	 */
> -	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
> -				__ATOMIC_RELAXED);
> -
> -	if (unlikely(cb != NULL)) {
> -		do {
> -			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> -						nb_pkts, cb->param);
> -			cb = cb->next;
> -		} while (cb != NULL);
> -	}
> +	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
> +	if (unlikely(cb != NULL))
> +		nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, rx_pkts,
> +				nb_rx, nb_pkts, cb);
>  #endif
>  
>  	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
> @@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>  static inline int
>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>  {
> -	struct rte_eth_dev *dev;
> +	struct rte_eth_fp_ops *p;
> +	void *qd;

p -> fpo, qd -> rxq

> +
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return -EINVAL;
> +	}
> +
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->rxq.data[queue_id];
>  
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> -	dev = &rte_eth_devices[port_id];
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
> -	if (queue_id >= dev->data->nb_rx_queues ||
> -	    dev->data->rx_queues[queue_id] == NULL)
> +	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
> +	if (qd == NULL)
>  		return -EINVAL;
>  
> -	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
> +	return (int)(*p->rx_queue_count)(qd);
>  }
>  
>  /**@{@name Rx hardware descriptor states
> @@ -5108,21 +5154,30 @@ static inline int
>  rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
>  	uint16_t offset)
>  {
> -	struct rte_eth_dev *dev;
> -	void *rxq;
> +	struct rte_eth_fp_ops *p;
> +	void *qd;

p -> fpo, qd -> rxq

>  
>  #ifdef RTE_ETHDEV_DEBUG_RX
> -	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return -EINVAL;
> +	}
>  #endif
> -	dev = &rte_eth_devices[port_id];
> +
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->rxq.data[queue_id];
> +
>  #ifdef RTE_ETHDEV_DEBUG_RX
> -	if (queue_id >= dev->data->nb_rx_queues)
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	if (qd == NULL)
>  		return -ENODEV;
>  #endif
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
> -	rxq = dev->data->rx_queues[queue_id];
> -
> -	return (*dev->rx_descriptor_status)(rxq, offset);
> +	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
> +	return (*p->rx_descriptor_status)(qd, offset);
>  }
>  
>  /**@{@name Tx hardware descriptor states
> @@ -5169,23 +5224,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
>  static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
>  	uint16_t queue_id, uint16_t offset)
>  {
> -	struct rte_eth_dev *dev;
> -	void *txq;
> +	struct rte_eth_fp_ops *p;
> +	void *qd;

p -> fpo, qd -> txq

>  
>  #ifdef RTE_ETHDEV_DEBUG_TX
> -	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return -EINVAL;
> +	}
>  #endif
> -	dev = &rte_eth_devices[port_id];
> +
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->txq.data[queue_id];
> +
>  #ifdef RTE_ETHDEV_DEBUG_TX
> -	if (queue_id >= dev->data->nb_tx_queues)
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	if (qd == NULL)
>  		return -ENODEV;
>  #endif
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
> -	txq = dev->data->tx_queues[queue_id];
> -
> -	return (*dev->tx_descriptor_status)(txq, offset);
> +	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
> +	return (*p->tx_descriptor_status)(qd, offset);
>  }
>  
> +/**
> + * @internal
> + * Helper routine for eth driver tx_burst API.
> + * Should be called before entry PMD's rte_eth_tx_bulk implementation.
> + * Does necessary pre-processing - invokes TX callbacks if any, etc.

TX -> Tx

> + *
> + * @param port_id
> + *   The port identifier of the Ethernet device.
> + * @param queue_id
> + *   The index of the transmit queue through which output packets must be
> + *   sent.
> + * @param tx_pkts
> + *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
> + *   which contain the output packets.

*nb_pkts* -> @p nb_pkts

> + * @param nb_pkts
> + *   The maximum number of packets to transmit.
> + * @return
> + *   The number of output packets to transmit.
> + */
> +uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
> +	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
> +
>  /**
>   * Send a burst of output packets on a transmit queue of an Ethernet device.
>   *
> @@ -5256,20 +5342,34 @@ static inline uint16_t
>  rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
>  		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>  {
> -	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	struct rte_eth_fp_ops *p;
> +	void *cb, *qd;

Same as above

> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return 0;
> +	}
> +#endif
> +
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->txq.data[queue_id];
>  
>  #ifdef RTE_ETHDEV_DEBUG_TX
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
>  
> -	if (queue_id >= dev->data->nb_tx_queues) {
> -		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
> +	if (qd == NULL) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",

TX -> Tx

> +			queue_id, port_id);
>  		return 0;
>  	}
>  #endif
>  
>  #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> -	struct rte_eth_rxtx_callback *cb;
>  
>  	/* __ATOMIC_RELEASE memory order was used when the
>  	 * call back was inserted into the list.
> @@ -5277,21 +5377,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
>  	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>  	 * not required.
>  	 */
> -	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
> -				__ATOMIC_RELAXED);
> -
> -	if (unlikely(cb != NULL)) {
> -		do {
> -			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> -					cb->param);
> -			cb = cb->next;
> -		} while (cb != NULL);
> -	}
> +	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
> +	if (unlikely(cb != NULL))
> +		nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, tx_pkts,
> +				nb_pkts, cb);
>  #endif
>  
> -	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
> -		nb_pkts);
> -	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
> +	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
> +
> +	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
> +	return nb_pkts;
>  }
>  
>  /**
> @@ -5354,31 +5449,42 @@ static inline uint16_t
>  rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
>  		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>  {
> -	struct rte_eth_dev *dev;
> +	struct rte_eth_fp_ops *p;
> +	void *qd;

p -> fpo, qd -> txq

>  
>  #ifdef RTE_ETHDEV_DEBUG_TX
> -	if (!rte_eth_dev_is_valid_port(port_id)) {
> -		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
>  		rte_errno = ENODEV;
>  		return 0;
>  	}
>  #endif
>  
> -	dev = &rte_eth_devices[port_id];
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->txq.data[queue_id];
>  
>  #ifdef RTE_ETHDEV_DEBUG_TX
> -	if (queue_id >= dev->data->nb_tx_queues) {
> -		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
> +	if (!rte_eth_dev_is_valid_port(port_id)) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);

TX -> Tx

> +		rte_errno = ENODEV;
> +		return 0;
> +	}
> +	if (qd == NULL) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",

TX -> Tx

> +			queue_id, port_id);
>  		rte_errno = EINVAL;
>  		return 0;
>  	}
>  #endif
>  
> -	if (!dev->tx_pkt_prepare)
> +	if (!p->tx_pkt_prepare)

Please change it to compare against NULL since you touch the line anyway,
just to be consistent with the DPDK coding style and the lines above.

>  		return nb_pkts;
>  
> -	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
> -			tx_pkts, nb_pkts);
> +	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
>  }
>  
>  #else
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index 904bce6ea1..79e62dcf61 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -7,6 +7,8 @@ DPDK_22 {
>  	rte_eth_allmulticast_disable;
>  	rte_eth_allmulticast_enable;
>  	rte_eth_allmulticast_get;
> +	rte_eth_call_rx_callbacks;
> +	rte_eth_call_tx_callbacks;
>  	rte_eth_dev_adjust_nb_rx_tx_desc;
>  	rte_eth_dev_callback_register;
>  	rte_eth_dev_callback_unregister;
> @@ -76,6 +78,7 @@ DPDK_22 {
>  	rte_eth_find_next_of;
>  	rte_eth_find_next_owned_by;
>  	rte_eth_find_next_sibling;
> +	rte_eth_fp_ops;
>  	rte_eth_iterator_cleanup;
>  	rte_eth_iterator_init;
>  	rte_eth_iterator_next;
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 6/7] ethdev: add API to retrieve multiple ethernet addresses
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 6/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
@ 2021-10-11  9:09           ` Andrew Rybchenko
  0 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11  9:09 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Introduce rte_eth_macaddrs_get() to allow user to retrieve all ethernet
> addresses assigned to given port.
> Change testpmd to use this new function and avoid referencing directly
> rte_eth_devices[].
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

[snip]

> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 9f7a0cbb8c..83b57a7524 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -3573,6 +3573,31 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
>  	return ret;
>  }
>  
> +int
> +rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[], uint32_t num)

Do we really need a fixed-size type in the case of num?
Shouldn't it be just 'unsigned int' as the default choice for
a number of elements?

> +{
> +	int32_t ret;
> +	struct rte_eth_dev *dev;
> +	struct rte_eth_dev_info dev_info;
> +
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	dev = &rte_eth_devices[port_id];
> +
> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> +	if (ret != 0)
> +		return ret;

In theory we can rely on the port_id check done by
rte_eth_dev_info_get() to avoid duplicate checks on success.
I'd do it and just highlight it in a comment; see the sketch below.
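
A minimal sketch of that simplification (a hypothetical rearrangement, not the posted code):

	int
	rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[], uint32_t num)
	{
		int ret;
		struct rte_eth_dev_info dev_info;

		if (ma == NULL) {
			RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__);
			return -EINVAL;
		}

		/* rte_eth_dev_info_get() already validates port_id, so no
		 * extra RTE_ETH_VALID_PORTID_OR_ERR_RET() is needed here
		 */
		ret = rte_eth_dev_info_get(port_id, &dev_info);
		if (ret != 0)
			return ret;

		num = RTE_MIN(dev_info.max_mac_addrs, num);
		memcpy(ma, rte_eth_devices[port_id].data->mac_addrs,
		       num * sizeof(ma[0]));

		return num;
	}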

> +
> +	if (ma == NULL) {
> +		RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__);
> +		return -EINVAL;
> +	}
> +
> +	num = RTE_MIN(dev_info.max_mac_addrs, num);
> +	memcpy(ma, dev->data->mac_addrs, num * sizeof(ma[0]));
> +
> +	return num;
> +}
> +
>  int
>  rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
>  {
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index c0e1a40681..77314ecf24 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -3037,6 +3037,27 @@ int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id,
>   */
>  int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
>  
> +/**
> + * Retrieve the Ethernet addresses of an Ethernet device.
> + *
> + * @param port_id
> + *   The port identifier of the Ethernet device.
> + * @param ma
> + *   A pointer to an array of structures of type *ether_addr* to be filled with
> + *   the Ethernet addresses of the Ethernet device.
> + * @param num
> + *   Number of elements in the *ma* array.

*ma* -> @p ma

> + *   Note that  rte_eth_dev_info::max_mac_addrs can be used to retrieve
> + *   max number of Ethernet addresses for given port.
> + * @return
> + *   - number of retrieved addresses if successful
> + *   - (-ENODEV) if *port_id* invalid.
> + *   - (-EINVAL) if bad parameter.
> + */
> +__rte_experimental
> +int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma,
> +	uint32_t num);
> +
>  /**
>   * Retrieve the contextual information of an Ethernet device.
>   *
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index 79e62dcf61..2bad712958 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -250,6 +250,9 @@ EXPERIMENTAL {
>  	rte_mtr_meter_policy_delete;
>  	rte_mtr_meter_policy_update;
>  	rte_mtr_meter_policy_validate;
> +
> +	# added in 21.11
> +	rte_eth_macaddrs_get;
>  };
>  
>  INTERNAL {
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-11  9:20           ` Andrew Rybchenko
  2021-10-11 15:54             ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11  9:20 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> data into private header (ethdev_driver.h).
> Few minor changes to keep DPDK building after that.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

[snip]

I realize that many notes below correspond to code that is just
being moved, but I still think that it would be nice
to fix it anyway.

> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index cc2c75261c..a743553d81 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -17,6 +17,154 @@
>  
>  #include <rte_ethdev.h>
>  
> +/**
> + * @internal
> + * Structure used to hold information about the callbacks to be called for a
> + * queue on RX and TX.

RX -> Rx, TX -> Tx

> + */
> +struct rte_eth_rxtx_callback {
> +	struct rte_eth_rxtx_callback *next;
> +	union{

missing space before {

> +		rte_rx_callback_fn rx;
> +		rte_tx_callback_fn tx;
> +	} fn;
> +	void *param;
> +};
> +
> +/**
> + * @internal
> + * The generic data structure associated with each ethernet device.

ethernet -> Ethernet

> + *
> + * Pointers to burst-oriented packet receive and transmit functions are
> + * located at the beginning of the structure, along with the pointer to
> + * where all the data elements for the particular device are stored in shared
> + * memory. This split allows the function pointer and driver data to be per-
> + * process, while the actual configuration data for the device is shared.
> + */
> +struct rte_eth_dev {
> +	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
> +	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
> +	eth_tx_prep_t tx_pkt_prepare;
> +	/**< Pointer to PMD transmit prepare function. */
> +	eth_rx_queue_count_t rx_queue_count;
> +	/**< Get the number of used RX descriptors. */
> +	eth_rx_descriptor_status_t rx_descriptor_status;
> +	/**< Check the status of a Rx descriptor. */
> +	eth_tx_descriptor_status_t tx_descriptor_status;
> +	/**< Check the status of a Tx descriptor. */

Please move the comments to be before the documented structure
members.

> +
> +	/**
> +	 * points to device data that is shared between
> +	 * primary and secondary processes.
> +	 */
> +	struct rte_eth_dev_data *data;
> +	void *process_private; /**< Pointer to per-process device data. */
> +	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
> +	struct rte_device *device; /**< Backing device */
> +	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
> +	/** User application callbacks for NIC interrupts */
> +	struct rte_eth_dev_cb_list link_intr_cbs;
> +	/**
> +	 * User-supplied functions called from rx_burst to post-process
> +	 * received packets before passing them to the user
> +	 */
> +	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> +	/**
> +	 * User-supplied functions called from tx_burst to pre-process
> +	 * received packets before passing them to the driver for transmission.
> +	 */
> +	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> +	enum rte_eth_dev_state state; /**< Flag indicating the port state */
> +	void *security_ctx; /**< Context for security ops */
> +
> +	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> +	void *reserved_ptrs[4];   /**< Reserved for future fields */

Should we remove these reserved fields?

> +} __rte_cache_aligned;
> +
> +struct rte_eth_dev_sriov;
> +struct rte_eth_dev_owner;
> +
> +/**
> + * @internal
> + * The data part, with no function pointers, associated with each ethernet

ethernet -> Ethernet

> + * device. This structure is safe to place in shared memory to be common
> + * among different processes in a multi-process configuration.
> + */
> +struct rte_eth_dev_data {
> +	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
> +
> +	void **rx_queues; /**< Array of pointers to RX queues. */
> +	void **tx_queues; /**< Array of pointers to TX queues. */
> +	uint16_t nb_rx_queues; /**< Number of RX queues. */
> +	uint16_t nb_tx_queues; /**< Number of TX queues. */

RX -> Rx, TX -> Tx

> +
> +	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
> +
> +	void *dev_private;
> +			/**< PMD-specific private data.
> +			 *   @see rte_eth_dev_release_port()
> +			 */
> +
> +	struct rte_eth_link dev_link;   /**< Link-level information & status. */
> +	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
> +	uint16_t mtu;                   /**< Maximum Transmission Unit. */
> +	uint32_t min_rx_buf_size;
> +			/**< Common RX buffer size handled by all queues. */

RX -> Rx

> +
> +	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */

RX -> Rx

> +	struct rte_ether_addr *mac_addrs;
> +			/**< Device Ethernet link address.
> +			 *   @see rte_eth_dev_release_port()
> +			 */
> +	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
> +			/**< Bitmap associating MAC addresses to pools. */
> +	struct rte_ether_addr *hash_mac_addrs;
> +			/**< Device Ethernet MAC addresses of hash filtering.
> +			 *   @see rte_eth_dev_release_port()
> +			 */

Please, move the comments to be before the members. It looks like they
will fit on one line in many cases.
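
For illustration, with the comment moved in front of the member it would look
roughly like this (taking one of the members above, just as a sketch):

	/** Device Ethernet link address. @see rte_eth_dev_release_port() */
	struct rte_ether_addr *mac_addrs;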

> +	uint16_t port_id;           /**< Device [external] port identifier. */
> +
> +	__extension__
> +	uint8_t promiscuous   : 1,
> +		/**< RX promiscuous mode ON(1) / OFF(0). */
> +		scattered_rx : 1,
> +		/**< RX of scattered packets is ON(1) / OFF(0) */
> +		all_multicast : 1,
> +		/**< RX all multicast mode ON(1) / OFF(0). */
> +		dev_started : 1,
> +		/**< Device state: STARTED(1) / STOPPED(0). */
> +		lro         : 1,
> +		/**< RX LRO is ON(1) / OFF(0) */
> +		dev_configured : 1;
> +		/**< Indicates whether the device is configured.
> +		 *   CONFIGURED(1) / NOT CONFIGURED(0).
> +		 */
> +	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
> +		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
> +	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
> +		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */

Comments should be before the code.

> +	uint32_t dev_flags;             /**< Capabilities. */
> +	int numa_node;                  /**< NUMA node connection. */
> +	struct rte_vlan_filter_conf vlan_filter_conf;
> +			/**< VLAN filter configuration. */
> +	struct rte_eth_dev_owner owner; /**< The port owner. */
> +	uint16_t representor_id;
> +			/**< Switch-specific identifier.
> +			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> +			 */
> +
> +	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
> +	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> +	void *reserved_ptrs[4];   /**< Reserved for future fields */

Should we remove these reserved fields?

> +} __rte_cache_aligned;
> +
> +/**
> + * @internal
> + * The pool of *rte_eth_dev* structures. The size of the pool
> + * is configured at compile-time in the <rte_ethdev.c> file.
> + */
> +extern struct rte_eth_dev rte_eth_devices[];
> +
>  /**< @internal Declaration of the hairpin peer queue information structure. */
>  struct rte_hairpin_peer_info;

[snip]


* Re: [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
@ 2021-10-11  9:20           ` Andrew Rybchenko
  2021-10-11 16:25             ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11  9:20 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> At queue configure stage always allocate space for maximum possible
> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> pointer to internal queue data without extra checking of current number
> of configured queues.
> That would help in future to hide rte_eth_dev and related structures.
> It means that from now on, each ethdev port will always consume:
> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
> bytes of memory for its queue pointers.
> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
>  1 file changed, 9 insertions(+), 27 deletions(-)
> 
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index ed37f8871b..c8abda6dd7 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>  
>  	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
>  		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
> -				sizeof(dev->data->rx_queues[0]) * nb_queues,
> +				sizeof(dev->data->rx_queues[0]) *
> +				RTE_MAX_QUEUES_PER_PORT,
>  				RTE_CACHE_LINE_SIZE);

Looking at it, I have a few questions:
1. Why is nb_queues == 0 case kept as an exception? Yes,
   strictly speaking it is not the problem of the patch,
   DPDK will still segfault (non-debug build) if I
   allocate Tx queues only but call rte_eth_rx_burst().
   After reading the patch description I thought that
   we're trying to address it.
2. Why do we need to allocate memory dynamically?
   Can we just make rx_queues an array of appropriate size?
   Maybe wasting 512K unconditionally is too much.
3. If wasting 512K is too much, I'd consider moving
   the allocation to eth_dev_get(). If

>  		if (dev->data->rx_queues == NULL) {
>  			dev->data->nb_rx_queues = 0;
> @@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>  
>  		rxq = dev->data->rx_queues;
>  
> -		for (i = nb_queues; i < old_nb_queues; i++)
> +		for (i = nb_queues; i < old_nb_queues; i++) {
>  			(*dev->dev_ops->rx_queue_release)(rxq[i]);
> -		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
> -				RTE_CACHE_LINE_SIZE);
> -		if (rxq == NULL)
> -			return -(ENOMEM);
> -		if (nb_queues > old_nb_queues) {
> -			uint16_t new_qs = nb_queues - old_nb_queues;
> -
> -			memset(rxq + old_nb_queues, 0,
> -				sizeof(rxq[0]) * new_qs);
> +			rxq[i] = NULL;

It looks like the patch should be rebased on top of
next-net main because of queue release patches.

[snip]


* Re: [dpdk-dev] [PATCH v5 0/7] hide eth dev related structures
  2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
                           ` (7 preceding siblings ...)
  2021-10-08 18:13         ` [dpdk-dev] [PATCH v5 0/7] " Slava Ovsiienko
@ 2021-10-11  9:22         ` Andrew Rybchenko
  8 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11  9:22 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

Hi Konstantin,

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> v5 changes:
> - Fix spelling (Thomas/David)
> - Rename internal helper functions (David)
> - Reorder patches and update commit messages (Thomas)
> - Update comments (Thomas)
> - Changed layout in rte_eth_fp_ops, to group functions and
>    related data based on their functionality:
>    first 64B line for Rx, second one for Tx.
>    Didn't observe any real performance difference comparing to
>    original layout. Though decided to keep a new one, as it seems
>    a bit more plausible. 
> 
> v4 changes:
>  - Fix secondary process attach (Pavan)
>  - Fix build failure (Ferruh)
>  - Update lib/ethdev/verion.map (Ferruh)
>    Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
>    section makes checkpatch.sh to complain.
> 
> v3 changes:
>  - Changes in public struct naming (Jerin/Haiyue)
>  - Split patches
>  - Update docs
>  - Shamelessly included Andrew's patch:
>    https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
>    into these series.
>    I have to do similar thing here, so decided to avoid duplicated effort.
> 
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> DPDK and not visible to the user.
> That should allow future possible changes to core ethdev related structures
> to be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
> 
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
>    related data pointer from rte_eth_dev into a separate flat array.
>    We keep it public to still be able to use inline functions for these
>    'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>    Note that apart from function pointers itself, each element of this
>    flat array also contains two opaque pointers for each ethdev:
>    1) a pointer to an array of internal queue data pointers
>    2)  points to array of queue callback data pointers.
>    Note that exposing this extra information allows us to avoid extra
>    changes inside PMD level, plus should help to avoid possible
>    performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
>    (rte_eth_rx_burst(), etc.) to use new public flat array.
>    While it is an ABI breakage, this change is intended to be transparent
>    for both users (no changes in user app is required) and PMD developers
>    (no changes in PMD is required).
>    One extra note - with new implementation RX/TX callback invocation
>    will cost one extra function call with this changes. That might cause
>    some slowdown for code-path with RX/TX callbacks heavily involved.
>    Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>    things into internal header: <ethdev_driver.h>.
> 
> That approach was selected to:
>   - Avoid(/minimize) possible performance losses.
>   - Minimize required changes inside PMDs.
> 
> Performance testing results (ICX 2.0GHz, E810 (ice)):
>  - testpmd macswap fwd mode, plus
>    a) no RX/TX callbacks:
>       no actual slowdown observed
>    b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
>       ~2% slowdown
>  - l3fwd: no actual slowdown observed
> 
> Would like to thank everyone who already reviewed and tested previous
> versions of these series. All other interested parties please don't be shy
> and provide your feedback.

Many thanks for the very good patch series.
I hope we'll manage to get it included in 21.11.
If you need any help with the cosmetic fixes suggested
in my review, please let me know.

Andrew.


* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-09 12:05           ` fengchengwen
  2021-10-11  1:18             ` fengchengwen
  2021-10-11  8:35             ` Andrew Rybchenko
@ 2021-10-11 15:15             ` Ananyev, Konstantin
  2 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-11 15:15 UTC (permalink / raw)
  To: fengchengwen, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, Yigit, Ferruh, mdr,
	Jayatheerthan, Jay


> > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > pointers to internal data from rte_eth_dev structure into a
> > separate flat array. That array will remain in a public header.
> > The intention here is to make rte_eth_dev and related structures internal.
> > That should allow future possible changes to core eth_dev structures
> > to be transparent to the user and help to avoid ABI/API breakages.
> > The plan is to keep minimal part of data from rte_eth_dev public,
> > so we still can use inline functions for fast-path calls
> > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> > The whole idea beyond this new schema:
> > 1. PMDs keep to setup fast-path function pointers and related data
> >    inside rte_eth_dev struct in the same way they did it before.
> > 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
> >    (for secondary process) we call eth_dev_fp_ops_setup, which
> >    copies these function and data pointers into rte_eth_fp_ops[port_id].
> > 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
> >    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
> >    into some dummy values.
> > 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
> >    flat array to call PMD specific functions.
> > That approach should allow us to make rte_eth_devices[] private
> > without introducing regression and help to avoid changes in drivers code.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
> >  lib/ethdev/ethdev_private.h  |  7 +++++
> >  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
> >  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
> >  4 files changed, 141 insertions(+)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 012cf73ca2..3eeda6e9f9 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> >  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> >  	return str == NULL ? -1 : 0;
> >  }
> > +
> > +static uint16_t
> > +dummy_eth_rx_burst(__rte_unused void *rxq,
> > +		__rte_unused struct rte_mbuf **rx_pkts,
> > +		__rte_unused uint16_t nb_pkts)
> > +{
> > +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> > +	rte_errno = ENOTSUP;
> > +	return 0;
> > +}
> > +
> > +static uint16_t
> > +dummy_eth_tx_burst(__rte_unused void *txq,
> > +		__rte_unused struct rte_mbuf **tx_pkts,
> > +		__rte_unused uint16_t nb_pkts)
> > +{
> > +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> > +	rte_errno = ENOTSUP;
> > +	return 0;
> > +}
> > +
> > +void
> > +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> 
> The port_id parameter is preferable; this will hide rte_eth_fp_ops as much as possible.

Why do we need to hide it here?
rte_eth_fp_ops is a public structure, and eth_dev_fp_ops_reset() is a helper
function that just resets the fields of this structure to some predefined
dummy values.
Nice and simple, so I prefer to keep it like that.

> 
> > +{
> > +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> > +	static const struct rte_eth_fp_ops dummy_ops = {
> > +		.rx_pkt_burst = dummy_eth_rx_burst,
> > +		.tx_pkt_burst = dummy_eth_tx_burst,
> > +		.rxq = {.data = dummy_data, .clbk = dummy_data,},
> > +		.txq = {.data = dummy_data, .clbk = dummy_data,},
> > +	};
> > +
> > +	*fpo = dummy_ops;
> > +}
> > +
> > +void
> > +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > +		const struct rte_eth_dev *dev)
> 
> Because fp_ops and eth_dev have a one-to-one correspondence, it's better to use
> only the port_id parameter.

Same as above:
all this internal helper function does is copy some fields from one structure to another.
Both structures are visible to the ethdev layer.
There is no point in adding extra assumptions and complexity here.

> 
> > +{
> > +	fpo->rx_pkt_burst = dev->rx_pkt_burst;
> > +	fpo->tx_pkt_burst = dev->tx_pkt_burst;
> > +	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> > +	fpo->rx_queue_count = dev->rx_queue_count;
> > +	fpo->rx_descriptor_status = dev->rx_descriptor_status;
> > +	fpo->tx_descriptor_status = dev->tx_descriptor_status;
> > +
> > +	fpo->rxq.data = dev->data->rx_queues;
> > +	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> > +
> > +	fpo->txq.data = dev->data->tx_queues;
> > +	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> > +}
> > diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> > index 3724429577..5721be7bdc 100644
> > --- a/lib/ethdev/ethdev_private.h
> > +++ b/lib/ethdev/ethdev_private.h
> > @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
> >  /* Parse devargs value for representor parameter. */
> >  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> >
> > +/* reset eth fast-path API to dummy values */
> > +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> > +
> > +/* setup eth fast-path API to ethdev values */
> > +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > +		const struct rte_eth_dev *dev);
> 
> Some drivers control the transmit/receive functions during operation. E.g. for the
> hns3 driver, when a reset is detected, the primary process sets the rx/tx burst functions
> to dummy ones; after the reset is processed, it restores the correct rx/tx burst functions.
> During this process, the send and receive threads are still working, but the bursts they
> call change. So:

The text above is a bit too cryptic for me...
Are you saying that your driver changes rte_eth_dev.rx_pkt_burst(/tx_pkt_burst) on the fly
(after dev_start() and before dev_stop())?
If so, then generally speaking, it is a bad idea.
While it might work for some limited scenarios, right now it is not supported by the ethdev
framework, and might introduce a lot of problems.

> 1. it is recommended that trace be deleted from the dummy function.

You are talking about:
RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
right?
The dummy function is supposed to be set only when the device is not able to do RX/TX properly
(not attached, or attached but not configured, or attached and configured but not started).
Obviously, if the app calls rx/tx_burst for such a port it is a major issue that should be
flagged immediately.
So I believe having a log here makes perfect sense.

> 2. public the eth_dev_fp_ops_reset/setup interface for driver usage.

You mean move their declarations into ethdev_driver.h?
I suppose that could be done, but I still wonder why a driver would need to
call these functions directly.
 
> > +
> >  #endif /* _ETH_PRIVATE_H_ */
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index c8abda6dd7..9f7a0cbb8c 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -44,6 +44,9 @@
> >  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> >  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> >
> > +/* public fast-path API */
> > +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> > +
> >  /* spinlock for eth device callbacks */
> >  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> >
> > @@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
> >  		rte_eth_dev_callback_process(eth_dev,
> >  				RTE_ETH_EVENT_DESTROY, NULL);
> >
> > +	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
> > +
> >  	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
> >
> >  	eth_dev->state = RTE_ETH_DEV_UNUSED;
> > @@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
> >  		(*dev->dev_ops->link_update)(dev, 0);
> >  	}
> >
> > +	/* expose selection of PMD fast-path functions */
> > +	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> > +
> >  	rte_ethdev_trace_start(port_id);
> >  	return 0;
> >  }
> > @@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
> >  		return 0;
> >  	}
> >
> > +	/* point fast-path functions to dummy ones */
> > +	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
> > +
> >  	dev->data->dev_started = 0;
> >  	ret = (*dev->dev_ops->dev_stop)(dev);
> >  	rte_ethdev_trace_stop(port_id, ret);
> > @@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
> >  	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
> >  }
> >
> > +RTE_INIT(eth_dev_init_fp_ops)
> > +{
> > +	uint32_t i;
> > +
> > +	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
> > +		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
> > +}
> > +
> >  RTE_INIT(eth_dev_init_cb_lists)
> >  {
> >  	uint16_t i;
> > @@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
> >  	if (dev == NULL)
> >  		return;
> >
> > +	/*
> > +	 * for secondary process, at that point we expect device
> > +	 * to be already 'usable', so shared data and all function pointers
> > +	 * for fast-path devops have to be setup properly inside rte_eth_dev.
> > +	 */
> > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > +		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> > +
> >  	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
> >
> >  	dev->state = RTE_ETH_DEV_ATTACHED;
> > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index 51cd68de94..d5853dff86 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> >  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> >  /**< @internal Check the status of a Tx descriptor */
> >
> > +/**
> > + * @internal
> > + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> > + * queues data.
> > + * The main purpose to expose these pointers at all - allow compiler
> > + * to fetch this data for fast-path ethdev inline functions in advance.
> > + */
> > +struct rte_ethdev_qdata {
> > +	void **data;
> > +	/**< points to array of internal queue data pointers */
> > +	void **clbk;
> > +	/**< points to array of queue callback data pointers */
> > +};
> > +
> > +/**
> > + * @internal
> > + * fast-path ethdev functions and related data are hold in a flat array.
> > + * One entry per ethdev.
> > + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
> > + * On 32-bit systems contents of this structure fits into one 64B line.
> > + */
> > +struct rte_eth_fp_ops {
> > +
> > +	/**
> > +	 * Rx fast-path functions and related data.
> > +	 * 64-bit systems: occupies first 64B line
> > +	 */
> > +	eth_rx_burst_t rx_pkt_burst;
> > +	/**< PMD receive function. */
> > +	eth_rx_queue_count_t rx_queue_count;
> > +	/**< Get the number of used RX descriptors. */
> > +	eth_rx_descriptor_status_t rx_descriptor_status;
> > +	/**< Check the status of a Rx descriptor. */
> > +	struct rte_ethdev_qdata rxq;
> > +	/**< Rx queues data. */
> > +	uintptr_t reserved1[3];
> > +
> > +	/**
> > +	 * Tx fast-path functions and related data.
> > +	 * 64-bit systems: occupies second 64B line
> > +	 */
> > +	eth_tx_burst_t tx_pkt_burst;
> 
> Why not place rx_pkt_burst/tx_pkt_burst/rxq/txq in the first cache line?
> Other functions, e.g. rx_queue_count/descriptor_status, are only called at low frequency.

I suppose you are talking about layout like that:
struct rte_eth_fp_ops {
   /* first 64B line */
   rx_pkt_burst;
   tx_pkt_burst;
   tx_pkt_prepare;
   struct rte_ethdev_qdata rxq;
   struct rte_ethdev_qdata txq;
   reserved1[1];
   /* second 64B line */
  ...
};

I thought about such a layout, even tried it, but I didn't see any performance gain.
On the other hand, the current layout seems better to me from a structural point of view:
it is more uniform and easier to extend in the future (Rx and Tx data each occupy a
separate 64B line, and each has equal room for extension).
 
> > +	/**< PMD transmit function. */
> > +	eth_tx_prep_t tx_pkt_prepare;
> > +	/**< PMD transmit prepare function. */
> > +	eth_tx_descriptor_status_t tx_descriptor_status;
> > +	/**< Check the status of a Tx descriptor. */
> > +	struct rte_ethdev_qdata txq;
> > +	/**< Tx queues data. */
> > +	uintptr_t reserved2[3];
> > +
> > +} __rte_cache_aligned;
> > +
> > +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> > +
> >
> >  /**
> >   * @internal
> >



* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-11  1:18             ` fengchengwen
  2021-10-11  8:39               ` Andrew Rybchenko
@ 2021-10-11 15:24               ` Ananyev, Konstantin
  1 sibling, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-11 15:24 UTC (permalink / raw)
  To: fengchengwen, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, Yigit, Ferruh, mdr,
	Jayatheerthan, Jay


> 
> Sorry to self-reply.
> 
> I think it's better for the 'struct rte_eth_dev *dev' to hold a pointer to the
> 'struct rte_eth_fp_ops', e.g.
> 
> 	struct rte_eth_dev {
> 		struct rte_eth_fp_ops *fp_ops;
> 		...  // other field
> 	}
> 
> The eth framework sets the pointer in rte_eth_dev_pci_allocate(), and the driver fills in
> the corresponding callbacks:
> 	dev->fp_ops->rx_pkt_burst = xxx_recv_pkts;
> 	dev->fp_ops->tx_pkt_burst = xxx_xmit_pkts;
> 	...
> 
> In this way, the behavior of the primary and secondary processes can be unified, which
> is basically the same as that of the original process.

I don't think it is a good thing to do, as it nullifies one of the main points of this approach:
fp_ops[] points to actual rx/tx functions and data only when the PMD is really ready to do rx/tx
(the device is properly configured and started); in all other cases it points to dummy stubs.
As another drawback, it would mean code changes in each and every driver.



* Re: [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array
  2021-10-11  9:02           ` Andrew Rybchenko
@ 2021-10-11 15:47             ` Ananyev, Konstantin
  2021-10-11 17:03               ` Andrew Rybchenko
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-11 15:47 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
	Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay


> 
> On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> > Rework fast-path ethdev functions to use rte_eth_fp_ops[].
> > While it is an API/ABI breakage, this change is intended to be
> > transparent for both users (no changes in user app is required) and
> > PMD developers (no changes in PMD is required).
> > One extra thing to note - RX/TX callback invocation will cause extra
> > function call with these changes. That might cause some insignificant
> > slowdown for code-path where RX/TX callbacks are heavily involved.
> 
> I'm sorry for nit picking here and below:
> 
> RX -> Rx, TX -> Tx everywhere above.
> 
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >  lib/ethdev/ethdev_private.c |  31 +++++
> >  lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
> >  lib/ethdev/version.map      |   3 +
> >  3 files changed, 208 insertions(+), 68 deletions(-)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 3eeda6e9f9..1222c6f84e 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> >  	fpo->txq.data = dev->data->tx_queues;
> >  	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> >  }
> > +
> > +uint16_t
> > +rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
> > +	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> > +	void *opaque)
> > +{
> > +	const struct rte_eth_rxtx_callback *cb = opaque;
> > +
> > +	while (cb != NULL) {
> > +		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> > +				nb_pkts, cb->param);
> > +		cb = cb->next;
> > +	}
> > +
> > +	return nb_rx;
> > +}
> > +
> > +uint16_t
> > +rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
> > +	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> > +{
> > +	const struct rte_eth_rxtx_callback *cb = opaque;
> > +
> > +	while (cb != NULL) {
> > +		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> > +				cb->param);
> > +		cb = cb->next;
> > +	}
> > +
> > +	return nb_pkts;
> > +}
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index cdd16d6e57..c0e1a40681 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
> >
> >  #include <rte_ethdev_core.h>
> >
> > +/**
> > + * @internal
> > + * Helper routine for eth driver rx_burst API.
> 
> rx -> Rx
> 
> > + * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
> > + * Does necessary post-processing - invokes RX callbacks if any, etc.
> 
> RX -> Rx
> 
> > + *
> > + * @param port_id
> > + *  The port identifier of the Ethernet device.
> > + * @param queue_id
> > + *  The index of the receive queue from which to retrieve input packets.
> 
> Isn't:
> The index of the queue from which packets are received?

I copied it from the comments for rte_eth_rx_burst().
I suppose these are just two ways of saying the same thing.

> 
> > + * @param rx_pkts
> > + *   The address of an array of pointers to *rte_mbuf* structures that
> > + *   have been retrieved from the device.
> > + * @param nb_pkts
> 
> Should be @param nb_rx

Ack, will fix. 

> 
> > + *   The number of packets that were retrieved from the device.
> > + * @param nb_pkts
> > + *   The number of elements in *rx_pkts* array.
> 
> @p should be used to refer to a parameter.

To be more precise, you are talking about:
s/*rx_pkts*/@p rx_pkts/
?
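(I.e., assuming the usual Doxygen style, the line would then read something like
" *   The number of elements in the @p rx_pkts array.")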

> 
> The description does not help to understand why both nb_rx and
> nb_pkts are necessary. Isn't nb_pkts >= nb_rx and nb_rx
> sufficient?

Nope, that's needed for the callback invocation.
Will update the comment.
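
For context - as the quoted rte_eth_call_rx_callbacks() body above shows, each Rx
callback is invoked with both the number of packets actually received and the
original burst size requested by the application:

	nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx, nb_pkts, cb->param);

so the helper has to carry both values through.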
 
> > + * @param opaque
> > + *   Opaque pointer of RX queue callback related data.
> 
> RX -> Rx
> 
> > + *
> > + * @return
> > + *  The number of packets effectively supplied to the *rx_pkts* array.
> 
> @p should be used to refer to a parameter.
> 
> > + */
> > +uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
> > +		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> > +		void *opaque);
> > +
> >  /**
> >   *
> >   * Retrieve a burst of input packets from a receive queue of an Ethernet
> > @@ -4995,23 +5022,37 @@ static inline uint16_t
> >  rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> >  		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> >  {
> > -	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> >  	uint16_t nb_rx;
> > +	struct rte_eth_fp_ops *p;
> 
> p is typically a very bad name in a function with
> many pointer variables etc. Maybe "fpo" as in the previous
> patch?
> 
> > +	void *cb, *qd;
> 
> Please, avoid declaring variables, especially pointers, on
> one line.

Here and in other places, I think local variable names and placement
are just a matter of personal preference.

> 
> I'd suggest to use 'rxq' instead of 'qd'. The first paramter
> of the rx_pkt_burst is 'rxq'.
> 
> Also 'cb' seems to be used under RTE_ETHDEV_RXTX_CALLBACKS
> only. If so, it could be unused variable warning if
> RTE_ETHDEV_RXTX_CALLBACKS is not defined.

Good point, will move it back under #ifdef RTE_ETHDEV_RXTX_CALLBACKS.
 
> > +
> > +#ifdef RTE_ETHDEV_DEBUG_RX
> > +	if (port_id >= RTE_MAX_ETHPORTS ||
> > +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			"Invalid port_id=%u or queue_id=%u\n",
> > +			port_id, queue_id);
> > +		return 0;
> > +	}
> > +#endif
> > +
> > +	/* fetch pointer to queue data */
> > +	p = &rte_eth_fp_ops[port_id];
> > +	qd = p->rxq.data[queue_id];
> >
> >  #ifdef RTE_ETHDEV_DEBUG_RX
> >  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> > -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
> >
> > -	if (queue_id >= dev->data->nb_rx_queues) {
> > -		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
> > +	if (qd == NULL) {
> > +		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
> 
> RX -> Rx
> 
> > +			queue_id, port_id);
> >  		return 0;
> >  	}
> >  #endif
> > -	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
> > -				     rx_pkts, nb_pkts);
> > +
> > +	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
> >
> >  #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> > -	struct rte_eth_rxtx_callback *cb;
> >
> >  	/* __ATOMIC_RELEASE memory order was used when the
> >  	 * call back was inserted into the list.
> > @@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> >  	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
> >  	 * not required.
> >  	 */
> > -	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
> > -				__ATOMIC_RELAXED);
> > -
> > -	if (unlikely(cb != NULL)) {
> > -		do {
> > -			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> > -						nb_pkts, cb->param);
> > -			cb = cb->next;
> > -		} while (cb != NULL);
> > -	}
> > +	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
> > +	if (unlikely(cb != NULL))
> > +		nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, rx_pkts,
> > +				nb_rx, nb_pkts, cb);
> >  #endif
> >
> >  	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
> > @@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> >  static inline int
> >  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> >  {
> > -	struct rte_eth_dev *dev;
> > +	struct rte_eth_fp_ops *p;
> > +	void *qd;
> 
> p -> fpo, qd -> rxq
> 
> > +
> > +	if (port_id >= RTE_MAX_ETHPORTS ||
> > +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			"Invalid port_id=%u or queue_id=%u\n",
> > +			port_id, queue_id);
> > +		return -EINVAL;
> > +	}
> > +
> > +	/* fetch pointer to queue data */
> > +	p = &rte_eth_fp_ops[port_id];
> > +	qd = p->rxq.data[queue_id];
> >
> >  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > -	dev = &rte_eth_devices[port_id];
> > -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
> > -	if (queue_id >= dev->data->nb_rx_queues ||
> > -	    dev->data->rx_queues[queue_id] == NULL)
> > +	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
> > +	if (qd == NULL)
> >  		return -EINVAL;
> >
> > -	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
> > +	return (int)(*p->rx_queue_count)(qd);
> >  }
> >
> >  /**@{@name Rx hardware descriptor states
> > @@ -5108,21 +5154,30 @@ static inline int
> >  rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
> >  	uint16_t offset)
> >  {
> > -	struct rte_eth_dev *dev;
> > -	void *rxq;
> > +	struct rte_eth_fp_ops *p;
> > +	void *qd;
> 
> p -> fpo, qd -> rxq
> 
> >
> >  #ifdef RTE_ETHDEV_DEBUG_RX
> > -	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > +	if (port_id >= RTE_MAX_ETHPORTS ||
> > +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			"Invalid port_id=%u or queue_id=%u\n",
> > +			port_id, queue_id);
> > +		return -EINVAL;
> > +	}
> >  #endif
> > -	dev = &rte_eth_devices[port_id];
> > +
> > +	/* fetch pointer to queue data */
> > +	p = &rte_eth_fp_ops[port_id];
> > +	qd = p->rxq.data[queue_id];
> > +
> >  #ifdef RTE_ETHDEV_DEBUG_RX
> > -	if (queue_id >= dev->data->nb_rx_queues)
> > +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > +	if (qd == NULL)
> >  		return -ENODEV;
> >  #endif
> > -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
> > -	rxq = dev->data->rx_queues[queue_id];
> > -
> > -	return (*dev->rx_descriptor_status)(rxq, offset);
> > +	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
> > +	return (*p->rx_descriptor_status)(qd, offset);
> >  }
> >
> >  /**@{@name Tx hardware descriptor states
> > @@ -5169,23 +5224,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
> >  static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
> >  	uint16_t queue_id, uint16_t offset)
> >  {
> > -	struct rte_eth_dev *dev;
> > -	void *txq;
> > +	struct rte_eth_fp_ops *p;
> > +	void *qd;
> 
> p -> fpo, qd -> txq
> 
> >
> >  #ifdef RTE_ETHDEV_DEBUG_TX
> > -	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > +	if (port_id >= RTE_MAX_ETHPORTS ||
> > +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			"Invalid port_id=%u or queue_id=%u\n",
> > +			port_id, queue_id);
> > +		return -EINVAL;
> > +	}
> >  #endif
> > -	dev = &rte_eth_devices[port_id];
> > +
> > +	/* fetch pointer to queue data */
> > +	p = &rte_eth_fp_ops[port_id];
> > +	qd = p->txq.data[queue_id];
> > +
> >  #ifdef RTE_ETHDEV_DEBUG_TX
> > -	if (queue_id >= dev->data->nb_tx_queues)
> > +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > +	if (qd == NULL)
> >  		return -ENODEV;
> >  #endif
> > -	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
> > -	txq = dev->data->tx_queues[queue_id];
> > -
> > -	return (*dev->tx_descriptor_status)(txq, offset);
> > +	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
> > +	return (*p->tx_descriptor_status)(qd, offset);
> >  }
> >
> > +/**
> > + * @internal
> > + * Helper routine for eth driver tx_burst API.
> > + * Should be called before entry PMD's rte_eth_tx_bulk implementation.
> > + * Does necessary pre-processing - invokes TX callbacks if any, etc.
> 
> TX -> Tx
> 
> > + *
> > + * @param port_id
> > + *   The port identifier of the Ethernet device.
> > + * @param queue_id
> > + *   The index of the transmit queue through which output packets must be
> > + *   sent.
> > + * @param tx_pkts
> > + *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
> > + *   which contain the output packets.
> 
> *nb_pkts* -> @p nb_pkts
> 
> > + * @param nb_pkts
> > + *   The maximum number of packets to transmit.
> > + * @return
> > + *   The number of output packets to transmit.
> > + */
> > +uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
> > +	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
> > +
> >  /**
> >   * Send a burst of output packets on a transmit queue of an Ethernet device.
> >   *
> > @@ -5256,20 +5342,34 @@ static inline uint16_t
> >  rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
> >  		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> >  {
> > -	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	struct rte_eth_fp_ops *p;
> > +	void *cb, *qd;
> 
> Same as above
> 
> > +
> > +#ifdef RTE_ETHDEV_DEBUG_TX
> > +	if (port_id >= RTE_MAX_ETHPORTS ||
> > +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			"Invalid port_id=%u or queue_id=%u\n",
> > +			port_id, queue_id);
> > +		return 0;
> > +	}
> > +#endif
> > +
> > +	/* fetch pointer to queue data */
> > +	p = &rte_eth_fp_ops[port_id];
> > +	qd = p->txq.data[queue_id];
> >
> >  #ifdef RTE_ETHDEV_DEBUG_TX
> >  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> > -	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
> >
> > -	if (queue_id >= dev->data->nb_tx_queues) {
> > -		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
> > +	if (qd == NULL) {
> > +		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
> 
> TX -> Tx
> 
> > +			queue_id, port_id);
> >  		return 0;
> >  	}
> >  #endif
> >
> >  #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> > -	struct rte_eth_rxtx_callback *cb;
> >
> >  	/* __ATOMIC_RELEASE memory order was used when the
> >  	 * call back was inserted into the list.
> > @@ -5277,21 +5377,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
> >  	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
> >  	 * not required.
> >  	 */
> > -	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
> > -				__ATOMIC_RELAXED);
> > -
> > -	if (unlikely(cb != NULL)) {
> > -		do {
> > -			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> > -					cb->param);
> > -			cb = cb->next;
> > -		} while (cb != NULL);
> > -	}
> > +	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
> > +	if (unlikely(cb != NULL))
> > +		nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, tx_pkts,
> > +				nb_pkts, cb);
> >  #endif
> >
> > -	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
> > -		nb_pkts);
> > -	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
> > +	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
> > +
> > +	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
> > +	return nb_pkts;
> >  }
> >
> >  /**
> > @@ -5354,31 +5449,42 @@ static inline uint16_t
> >  rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
> >  		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> >  {
> > -	struct rte_eth_dev *dev;
> > +	struct rte_eth_fp_ops *p;
> > +	void *qd;
> 
> p->fpo, qd->txq
> 
> >
> >  #ifdef RTE_ETHDEV_DEBUG_TX
> > -	if (!rte_eth_dev_is_valid_port(port_id)) {
> > -		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
> > +	if (port_id >= RTE_MAX_ETHPORTS ||
> > +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			"Invalid port_id=%u or queue_id=%u\n",
> > +			port_id, queue_id);
> >  		rte_errno = ENODEV;
> >  		return 0;
> >  	}
> >  #endif
> >
> > -	dev = &rte_eth_devices[port_id];
> > +	/* fetch pointer to queue data */
> > +	p = &rte_eth_fp_ops[port_id];
> > +	qd = p->txq.data[queue_id];
> >
> >  #ifdef RTE_ETHDEV_DEBUG_TX
> > -	if (queue_id >= dev->data->nb_tx_queues) {
> > -		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
> > +	if (!rte_eth_dev_is_valid_port(port_id)) {
> > +		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
> 
> TX -> Tx
> 
> > +		rte_errno = ENODEV;
> > +		return 0;
> > +	}
> > +	if (qd == NULL) {
> > +		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
> 
> TX -> Tx
> 
> > +			queue_id, port_id);
> >  		rte_errno = EINVAL;
> >  		return 0;
> >  	}
> >  #endif
> >
> > -	if (!dev->tx_pkt_prepare)
> > +	if (!p->tx_pkt_prepare)
> 
> Please, change it to compare vs NULL since you touch the line.
> Just to be consistent with DPDK coding style and lines above.

Ok, I am also fond of explicit comparisons :) 

 
> >  		return nb_pkts;
> >
> > -	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
> > -			tx_pkts, nb_pkts);
> > +	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
> >  }
> >
> >  #else
> > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> > index 904bce6ea1..79e62dcf61 100644
> > --- a/lib/ethdev/version.map
> > +++ b/lib/ethdev/version.map
> > @@ -7,6 +7,8 @@ DPDK_22 {
> >  	rte_eth_allmulticast_disable;
> >  	rte_eth_allmulticast_enable;
> >  	rte_eth_allmulticast_get;
> > +	rte_eth_call_rx_callbacks;
> > +	rte_eth_call_tx_callbacks;
> >  	rte_eth_dev_adjust_nb_rx_tx_desc;
> >  	rte_eth_dev_callback_register;
> >  	rte_eth_dev_callback_unregister;
> > @@ -76,6 +78,7 @@ DPDK_22 {
> >  	rte_eth_find_next_of;
> >  	rte_eth_find_next_owned_by;
> >  	rte_eth_find_next_sibling;
> > +	rte_eth_fp_ops;
> >  	rte_eth_iterator_cleanup;
> >  	rte_eth_iterator_init;
> >  	rte_eth_iterator_next;
> >



* Re: [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures
  2021-10-11  9:20           ` Andrew Rybchenko
@ 2021-10-11 15:54             ` Ananyev, Konstantin
  2021-10-11 17:04               ` Andrew Rybchenko
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-11 15:54 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
	Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay



> 
> On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> > Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> > data into private header (ethdev_driver.h).
> > Few minor changes to keep DPDK building after that.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
> [snip]
> 
> I realize that many notes below correspond to code that is
> just moved, but I still think it would be nice
> to fix it anyway.

Yes, that's just cut and paste from rte_ethdev_core.h and ethdev_driver.h.
It is probably a good idea to clean up, and maybe re-layout, rte_eth_dev and friends,
but I don't think that has to be part of this patch-set.
After all, once it becomes an internal structure, that work could be done in any DPDK release.



* Re: [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
  2021-10-11  9:20           ` Andrew Rybchenko
@ 2021-10-11 16:25             ` Ananyev, Konstantin
  2021-10-11 17:15               ` Andrew Rybchenko
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-11 16:25 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
	Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay



> > At queue configure stage always allocate space for maximum possible
> > number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> > That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> > pointer to internal queue data without extra checking of current number
> > of configured queues.
> > That would help in future to hide rte_eth_dev and related structures.
> > It means that from now on, each ethdev port will always consume:
> > ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
> > bytes of memory for its queue pointers.
> > With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >  lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
> >  1 file changed, 9 insertions(+), 27 deletions(-)
> >
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index ed37f8871b..c8abda6dd7 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
> >
> >  	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
> >  		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
> > -				sizeof(dev->data->rx_queues[0]) * nb_queues,
> > +				sizeof(dev->data->rx_queues[0]) *
> > +				RTE_MAX_QUEUES_PER_PORT,
> >  				RTE_CACHE_LINE_SIZE);
> 
> Looking at it, I have a few questions:
> 1. Why is nb_queues == 0 case kept as an exception? Yes,
>    strictly speaking it is not the problem of the patch,
>    DPDK will still segfault (non-debug build) if I
>    allocate Tx queues only but call rte_eth_rx_burst().

eth_dev_rx_queue_config(.., nb_queues=0) is used in a few places to clean things up.

>    After reading the patch description I thought that
>    we're trying to address it.

We do, though I can't see how we can address it in this patch.
Though it is a good idea - I think I can add an extra check in eth_dev_fp_ops_setup()
(or around it) and set up the RX function pointers only when dev->data->rx_queues != NULL.
Same for TX.
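Something along these lines (just a sketch of the idea, not tested):

	/* inside eth_dev_fp_ops_setup(): keep the dummy Rx ops when no Rx queues exist */
	if (dev->data->rx_queues != NULL) {
		fpo->rx_pkt_burst = dev->rx_pkt_burst;
		fpo->rx_queue_count = dev->rx_queue_count;
		fpo->rx_descriptor_status = dev->rx_descriptor_status;
		fpo->rxq.data = dev->data->rx_queues;
		fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
	}
	/* ... and similarly for the Tx half */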

> 2. Why do we need to allocate memory dynamically?
>    Can we just make rx_queues an array of appropriate size?

Pavan already asked the same question.
My answer to him:
Yep, we can, and yes, it will simplify this piece of code.
The main reason I decided not to do this change now is that
it would change the layout of the rte_eth_dev_data structure.
In this series I tried to minimize(/avoid) changes in rte_eth_dev and rte_eth_dev_data
as much as possible, to avoid any unforeseen performance and functional impacts.
If we manage to make rte_eth_dev and rte_eth_dev_data private, we can in future
consider that change and other changes to the rte_eth_dev and rte_eth_dev_data layouts
without worrying about ABI breakage.
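
Just to spell out the alternative being discussed - it would mean roughly the
following layout change in the shared data structure (sketch only), which is
exactly the kind of change I'd prefer to postpone:

	struct rte_eth_dev_data {
		...
		/* fixed-size arrays instead of dynamically allocated pointer arrays */
		void *rx_queues[RTE_MAX_QUEUES_PER_PORT];
		void *tx_queues[RTE_MAX_QUEUES_PER_PORT];
		...
	};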

>    May be wasting 512K unconditionally is too much.
> 3. If wasting 512K is too much, I'd consider to move
>    allocation to eth_dev_get(). If

I don't understand where 512KB came from.
Each ethdev port will always consume:
((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
bytes of memory for its queue pointers.
With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
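(On a 64-bit system that is 2 arrays * 8-byte pointers * 1024 queues = 16KB.)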
 
> >  		if (dev->data->rx_queues == NULL) {
> >  			dev->data->nb_rx_queues = 0;
> > @@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
> >
> >  		rxq = dev->data->rx_queues;
> >
> > -		for (i = nb_queues; i < old_nb_queues; i++)
> > +		for (i = nb_queues; i < old_nb_queues; i++) {
> >  			(*dev->dev_ops->rx_queue_release)(rxq[i]);
> > -		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
> > -				RTE_CACHE_LINE_SIZE);
> > -		if (rxq == NULL)
> > -			return -(ENOMEM);
> > -		if (nb_queues > old_nb_queues) {
> > -			uint16_t new_qs = nb_queues - old_nb_queues;
> > -
> > -			memset(rxq + old_nb_queues, 0,
> > -				sizeof(rxq[0]) * new_qs);
> > +			rxq[i] = NULL;
> 
> It looks like the patch should be rebased on top of
> next-net main because of queue release patches.
> 
> [snip]


* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-11  8:25           ` Andrew Rybchenko
@ 2021-10-11 16:52             ` Ananyev, Konstantin
  2021-10-11 17:22               ` Andrew Rybchenko
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-11 16:52 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
	Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay



> On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > pointers to internal data from rte_eth_dev structure into a
> > separate flat array. That array will remain in a public header.
> > The intention here is to make rte_eth_dev and related structures internal.
> > That should allow future possible changes to core eth_dev structures
> > to be transparent to the user and help to avoid ABI/API breakages.
> > The plan is to keep minimal part of data from rte_eth_dev public,
> > so we still can use inline functions for fast-path calls
> > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> > The whole idea beyond this new schema:
> > 1. PMDs keep to setup fast-path function pointers and related data
> >    inside rte_eth_dev struct in the same way they did it before.
> > 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
> >    (for secondary process) we call eth_dev_fp_ops_setup, which
> >    copies these function and data pointers into rte_eth_fp_ops[port_id].
> > 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
> >    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
> >    into some dummy values.
> > 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
> >    flat array to call PMD specific functions.
> > That approach should allow us to make rte_eth_devices[] private
> > without introducing regression and help to avoid changes in drivers code.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
> Overall LGTM, few nits below.
> 
> > ---
> >  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
> >  lib/ethdev/ethdev_private.h  |  7 +++++
> >  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
> >  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
> >  4 files changed, 141 insertions(+)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 012cf73ca2..3eeda6e9f9 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> >  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> >  	return str == NULL ? -1 : 0;
> >  }
> > +
> > +static uint16_t
> > +dummy_eth_rx_burst(__rte_unused void *rxq,
> > +		__rte_unused struct rte_mbuf **rx_pkts,
> > +		__rte_unused uint16_t nb_pkts)
> > +{
> > +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> 
> May be "unconfigured" -> "stopped" ? Or "non-started" ?

Yes, it can be configured but not started.
So 'not started' seems like a better wording here.
Another option probably: 'not ready'.
What do people think?

...

> 
> > +	rte_errno = ENOTSUP;
> > +	return 0;
> > +}
> > +
> > +struct rte_eth_fp_ops {
> > +
> > +	/**
> > +	 * Rx fast-path functions and related data.
> > +	 * 64-bit systems: occupies first 64B line
> > +	 */
> 
> As I understand it, the above comment is for the group of fields
> below. If so, the Doxygen annotation for member groups should
> be used.

Ok, and how to do it?


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array
  2021-10-11 15:47             ` Ananyev, Konstantin
@ 2021-10-11 17:03               ` Andrew Rybchenko
  0 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 17:03 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
	Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay

On 10/11/21 6:47 PM, Ananyev, Konstantin wrote:
> 
>>
>> On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
>>> Rework fast-path ethdev functions to use rte_eth_fp_ops[].
>>> While it is an API/ABI breakage, this change is intended to be
>>> transparent for both users (no changes in user apps are required) and
>>> PMD developers (no changes in PMDs are required).
>>> One extra thing to note - RX/TX callback invocation will cause an extra
>>> function call with these changes. That might cause some insignificant
>>> slowdown for code paths where RX/TX callbacks are heavily involved.
>>
>> I'm sorry for nit picking here and below:
>>
>> RX -> Rx, TX -> Tx everywhere above.
>>
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>> ---
>>>   lib/ethdev/ethdev_private.c |  31 +++++
>>>   lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
>>>   lib/ethdev/version.map      |   3 +
>>>   3 files changed, 208 insertions(+), 68 deletions(-)
>>>
>>> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
>>> index 3eeda6e9f9..1222c6f84e 100644
>>> --- a/lib/ethdev/ethdev_private.c
>>> +++ b/lib/ethdev/ethdev_private.c
>>> @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>>>   	fpo->txq.data = dev->data->tx_queues;
>>>   	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>>>   }
>>> +
>>> +uint16_t
>>> +rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
>>> +	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
>>> +	void *opaque)
>>> +{
>>> +	const struct rte_eth_rxtx_callback *cb = opaque;
>>> +
>>> +	while (cb != NULL) {
>>> +		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
>>> +				nb_pkts, cb->param);
>>> +		cb = cb->next;
>>> +	}
>>> +
>>> +	return nb_rx;
>>> +}
>>> +
>>> +uint16_t
>>> +rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
>>> +	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
>>> +{
>>> +	const struct rte_eth_rxtx_callback *cb = opaque;
>>> +
>>> +	while (cb != NULL) {
>>> +		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
>>> +				cb->param);
>>> +		cb = cb->next;
>>> +	}
>>> +
>>> +	return nb_pkts;
>>> +}
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>> index cdd16d6e57..c0e1a40681 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
>>>
>>>   #include <rte_ethdev_core.h>
>>>
>>> +/**
>>> + * @internal
>>> + * Helper routine for eth driver rx_burst API.
>>
>> rx -> Rx
>>
>>> + * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
>>> + * Does necessary post-processing - invokes RX callbacks if any, etc.
>>
>> RX -> Rx
>>
>>> + *
>>> + * @param port_id
>>> + *  The port identifier of the Ethernet device.
>>> + * @param queue_id
>>> + *  The index of the receive queue from which to retrieve input packets.
>>
>> Isn't:
>> The index of the queue from which packets are received from?
> 
> I copied it from the comments for rte_eth_rx_burst().
> I suppose these are just two ways of saying the same thing.

Maybe it is just my problem that I don't understand the
initial description.

> 
>>
>>> + * @param rx_pkts
>>> + *   The address of an array of pointers to *rte_mbuf* structures that
>>> + *   have been retrieved from the device.
>>> + * @param nb_pkts
>>
>> Should be @param nb_rx
> 
> Ack, will fix.
> 
>>
>>> + *   The number of packets that were retrieved from the device.
>>> + * @param nb_pkts
>>> + *   The number of elements in *rx_pkts* array.
>>
>> @p should be used to refer to a parameter.
> 
> To be more precise you are talking about:
> s/*rx_pkts*/@ rx_pkts/

s/"rx_pkts"/@p rx_pkts/

> ?
> 
>>
>> The description does not help to understand why both nb_rx and
>> nb_pkts are necessary. Isn't nb_pkts >= nb_rx and nb_rx
>> sufficient?
> 
> Nope, that's for the callback calls.
> Will update the comment.

Thanks.

>>> + * @param opaque
>>> + *   Opaque pointer of RX queue callback related data.
>>
>> RX -> Rx
>>
>>> + *
>>> + * @return
>>> + *  The number of packets effectively supplied to the *rx_pkts* array.
>>
>> @p should be used to refer to a parameter.
>>
>>> + */
>>> +uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
>>> +		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
>>> +		void *opaque);
>>> +
>>>   /**
>>>    *
>>>    * Retrieve a burst of input packets from a receive queue of an Ethernet
>>> @@ -4995,23 +5022,37 @@ static inline uint16_t
>>>   rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>>>   		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
>>>   {
>>> -	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>   	uint16_t nb_rx;
>>> +	struct rte_eth_fp_ops *p;
>>
>> p is typically a very bad name in a function with
>> many pointer variables etc. Maybe "fpo" as in the previous
>> patch?
>>
>>> +	void *cb, *qd;
>>
>> Please avoid declaring variables, especially pointers, on
>> one line.
> 
> Here and in other places, I think local variable names and placement
> are just a matter of personal preference.

Of course you can drop my notes as long as I'm alone in this.
I started my comment with "Please" :)
Maybe I'm asking too much.

Also, I'm sorry, but I'm strictly against the 'p' name since such
naming makes code harder to read.

> 
>>
>> I'd suggest using 'rxq' instead of 'qd'. The first parameter
>> of rx_pkt_burst is 'rxq'.
>>
>> Also, 'cb' seems to be used under RTE_ETHDEV_RXTX_CALLBACKS
>> only. If so, it could trigger an unused-variable warning if
>> RTE_ETHDEV_RXTX_CALLBACKS is not defined.
> 
> Good point, will move it back under #ifdef RTE_ETHDEV_RXTX_CALLBACKS.
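[Editor's note: a sketch of the fix agreed just above - declaring 'cb' only when
RTE_ETHDEV_RXTX_CALLBACKS is defined, so builds without callback support do not
get an unused-variable warning. The body is abbreviated (debug checks and tracing
dropped) and the variable names follow the reviewer's fpo/rxq suggestion; it is
not the final patch code and is assumed to live in the rte_ethdev.h context of
the quoted diff.]

static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	uint16_t nb_rx;
	struct rte_eth_fp_ops *fpo;
	void *rxq;

	/* fetch pointers to the per-queue data and burst function */
	fpo = &rte_eth_fp_ops[port_id];
	rxq = fpo->rxq.data[queue_id];

	nb_rx = fpo->rx_pkt_burst(rxq, rx_pkts, nb_pkts);

#ifdef RTE_ETHDEV_RXTX_CALLBACKS
	{
		/* 'cb' exists only when callback support is compiled in */
		void *cb = __atomic_load_n((void **)&fpo->rxq.clbk[queue_id],
				__ATOMIC_RELAXED);

		if (unlikely(cb != NULL))
			nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id,
					rx_pkts, nb_rx, nb_pkts, cb);
	}
#endif

	return nb_rx;
}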
>   
>>> +
>>> +#ifdef RTE_ETHDEV_DEBUG_RX
>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> +		RTE_ETHDEV_LOG(ERR,
>>> +			"Invalid port_id=%u or queue_id=%u\n",
>>> +			port_id, queue_id);
>>> +		return 0;
>>> +	}
>>> +#endif
>>> +
>>> +	/* fetch pointer to queue data */
>>> +	p = &rte_eth_fp_ops[port_id];
>>> +	qd = p->rxq.data[queue_id];
>>>
>>>   #ifdef RTE_ETHDEV_DEBUG_RX
>>>   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
>>> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
>>>
>>> -	if (queue_id >= dev->data->nb_rx_queues) {
>>> -		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
>>> +	if (qd == NULL) {
>>> +		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
>>
>> RX -> Rx
>>
>>> +			queue_id, port_id);
>>>   		return 0;
>>>   	}
>>>   #endif
>>> -	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
>>> -				     rx_pkts, nb_pkts);
>>> +
>>> +	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
>>>
>>>   #ifdef RTE_ETHDEV_RXTX_CALLBACKS
>>> -	struct rte_eth_rxtx_callback *cb;
>>>
>>>   	/* __ATOMIC_RELEASE memory order was used when the
>>>   	 * call back was inserted into the list.
>>> @@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>>>   	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>>>   	 * not required.
>>>   	 */
>>> -	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
>>> -				__ATOMIC_RELAXED);
>>> -
>>> -	if (unlikely(cb != NULL)) {
>>> -		do {
>>> -			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
>>> -						nb_pkts, cb->param);
>>> -			cb = cb->next;
>>> -		} while (cb != NULL);
>>> -	}
>>> +	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
>>> +	if (unlikely(cb != NULL))
>>> +		nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, rx_pkts,
>>> +				nb_rx, nb_pkts, cb);
>>>   #endif
>>>
>>>   	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
>>> @@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>>>   static inline int
>>>   rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>>>   {
>>> -	struct rte_eth_dev *dev;
>>> +	struct rte_eth_fp_ops *p;
>>> +	void *qd;
>>
>> p -> fpo, qd -> rxq
>>
>>> +
>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> +		RTE_ETHDEV_LOG(ERR,
>>> +			"Invalid port_id=%u or queue_id=%u\n",
>>> +			port_id, queue_id);
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	/* fetch pointer to queue data */
>>> +	p = &rte_eth_fp_ops[port_id];
>>> +	qd = p->rxq.data[queue_id];
>>>
>>>   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> -	dev = &rte_eth_devices[port_id];
>>> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
>>> -	if (queue_id >= dev->data->nb_rx_queues ||
>>> -	    dev->data->rx_queues[queue_id] == NULL)
>>> +	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
>>> +	if (qd == NULL)
>>>   		return -EINVAL;
>>>
>>> -	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
>>> +	return (int)(*p->rx_queue_count)(qd);
>>>   }
>>>
>>>   /**@{@name Rx hardware descriptor states
>>> @@ -5108,21 +5154,30 @@ static inline int
>>>   rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
>>>   	uint16_t offset)
>>>   {
>>> -	struct rte_eth_dev *dev;
>>> -	void *rxq;
>>> +	struct rte_eth_fp_ops *p;
>>> +	void *qd;
>>
>> p -> fpo, qd -> rxq
>>
>>>
>>>   #ifdef RTE_ETHDEV_DEBUG_RX
>>> -	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> +		RTE_ETHDEV_LOG(ERR,
>>> +			"Invalid port_id=%u or queue_id=%u\n",
>>> +			port_id, queue_id);
>>> +		return -EINVAL;
>>> +	}
>>>   #endif
>>> -	dev = &rte_eth_devices[port_id];
>>> +
>>> +	/* fetch pointer to queue data */
>>> +	p = &rte_eth_fp_ops[port_id];
>>> +	qd = p->rxq.data[queue_id];
>>> +
>>>   #ifdef RTE_ETHDEV_DEBUG_RX
>>> -	if (queue_id >= dev->data->nb_rx_queues)
>>> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> +	if (qd == NULL)
>>>   		return -ENODEV;
>>>   #endif
>>> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
>>> -	rxq = dev->data->rx_queues[queue_id];
>>> -
>>> -	return (*dev->rx_descriptor_status)(rxq, offset);
>>> +	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
>>> +	return (*p->rx_descriptor_status)(qd, offset);
>>>   }
>>>
>>>   /**@{@name Tx hardware descriptor states
>>> @@ -5169,23 +5224,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
>>>   static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
>>>   	uint16_t queue_id, uint16_t offset)
>>>   {
>>> -	struct rte_eth_dev *dev;
>>> -	void *txq;
>>> +	struct rte_eth_fp_ops *p;
>>> +	void *qd;
>>
>> p -> fpo, qd -> txq
>>
>>>
>>>   #ifdef RTE_ETHDEV_DEBUG_TX
>>> -	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> +		RTE_ETHDEV_LOG(ERR,
>>> +			"Invalid port_id=%u or queue_id=%u\n",
>>> +			port_id, queue_id);
>>> +		return -EINVAL;
>>> +	}
>>>   #endif
>>> -	dev = &rte_eth_devices[port_id];
>>> +
>>> +	/* fetch pointer to queue data */
>>> +	p = &rte_eth_fp_ops[port_id];
>>> +	qd = p->txq.data[queue_id];
>>> +
>>>   #ifdef RTE_ETHDEV_DEBUG_TX
>>> -	if (queue_id >= dev->data->nb_tx_queues)
>>> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> +	if (qd == NULL)
>>>   		return -ENODEV;
>>>   #endif
>>> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
>>> -	txq = dev->data->tx_queues[queue_id];
>>> -
>>> -	return (*dev->tx_descriptor_status)(txq, offset);
>>> +	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
>>> +	return (*p->tx_descriptor_status)(qd, offset);
>>>   }
>>>
>>> +/**
>>> + * @internal
>>> + * Helper routine for eth driver tx_burst API.
>>> + * Should be called before entry PMD's rte_eth_tx_bulk implementation.
>>> + * Does necessary pre-processing - invokes TX callbacks if any, etc.
>>
>> TX -> Tx
>>
>>> + *
>>> + * @param port_id
>>> + *   The port identifier of the Ethernet device.
>>> + * @param queue_id
>>> + *   The index of the transmit queue through which output packets must be
>>> + *   sent.
>>> + * @param tx_pkts
>>> + *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
>>> + *   which contain the output packets.
>>
>> *nb_pkts* -> @p nb_pkts
>>
>>> + * @param nb_pkts
>>> + *   The maximum number of packets to transmit.
>>> + * @return
>>> + *   The number of output packets to transmit.
>>> + */
>>> +uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
>>> +	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
>>> +
>>>   /**
>>>    * Send a burst of output packets on a transmit queue of an Ethernet device.
>>>    *
>>> @@ -5256,20 +5342,34 @@ static inline uint16_t
>>>   rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
>>>   		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>>>   {
>>> -	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	struct rte_eth_fp_ops *p;
>>> +	void *cb, *qd;
>>
>> Same as above
>>
>>> +
>>> +#ifdef RTE_ETHDEV_DEBUG_TX
>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> +		RTE_ETHDEV_LOG(ERR,
>>> +			"Invalid port_id=%u or queue_id=%u\n",
>>> +			port_id, queue_id);
>>> +		return 0;
>>> +	}
>>> +#endif
>>> +
>>> +	/* fetch pointer to queue data */
>>> +	p = &rte_eth_fp_ops[port_id];
>>> +	qd = p->txq.data[queue_id];
>>>
>>>   #ifdef RTE_ETHDEV_DEBUG_TX
>>>   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
>>> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
>>>
>>> -	if (queue_id >= dev->data->nb_tx_queues) {
>>> -		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
>>> +	if (qd == NULL) {
>>> +		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
>>
>> TX -> Tx
>>
>>> +			queue_id, port_id);
>>>   		return 0;
>>>   	}
>>>   #endif
>>>
>>>   #ifdef RTE_ETHDEV_RXTX_CALLBACKS
>>> -	struct rte_eth_rxtx_callback *cb;
>>>
>>>   	/* __ATOMIC_RELEASE memory order was used when the
>>>   	 * call back was inserted into the list.
>>> @@ -5277,21 +5377,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
>>>   	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>>>   	 * not required.
>>>   	 */
>>> -	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
>>> -				__ATOMIC_RELAXED);
>>> -
>>> -	if (unlikely(cb != NULL)) {
>>> -		do {
>>> -			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
>>> -					cb->param);
>>> -			cb = cb->next;
>>> -		} while (cb != NULL);
>>> -	}
>>> +	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
>>> +	if (unlikely(cb != NULL))
>>> +		nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, tx_pkts,
>>> +				nb_pkts, cb);
>>>   #endif
>>>
>>> -	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
>>> -		nb_pkts);
>>> -	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
>>> +	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
>>> +
>>> +	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
>>> +	return nb_pkts;
>>>   }
>>>
>>>   /**
>>> @@ -5354,31 +5449,42 @@ static inline uint16_t
>>>   rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
>>>   		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>>>   {
>>> -	struct rte_eth_dev *dev;
>>> +	struct rte_eth_fp_ops *p;
>>> +	void *qd;
>>
>> p->fpo, qd->txq
>>
>>>
>>>   #ifdef RTE_ETHDEV_DEBUG_TX
>>> -	if (!rte_eth_dev_is_valid_port(port_id)) {
>>> -		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> +		RTE_ETHDEV_LOG(ERR,
>>> +			"Invalid port_id=%u or queue_id=%u\n",
>>> +			port_id, queue_id);
>>>   		rte_errno = ENODEV;
>>>   		return 0;
>>>   	}
>>>   #endif
>>>
>>> -	dev = &rte_eth_devices[port_id];
>>> +	/* fetch pointer to queue data */
>>> +	p = &rte_eth_fp_ops[port_id];
>>> +	qd = p->txq.data[queue_id];
>>>
>>>   #ifdef RTE_ETHDEV_DEBUG_TX
>>> -	if (queue_id >= dev->data->nb_tx_queues) {
>>> -		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
>>> +	if (!rte_eth_dev_is_valid_port(port_id)) {
>>> +		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
>>
>> TX -> Tx
>>
>>> +		rte_errno = ENODEV;
>>> +		return 0;
>>> +	}
>>> +	if (qd == NULL) {
>>> +		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
>>
>> TX -> Tx
>>
>>> +			queue_id, port_id);
>>>   		rte_errno = EINVAL;
>>>   		return 0;
>>>   	}
>>>   #endif
>>>
>>> -	if (!dev->tx_pkt_prepare)
>>> +	if (!p->tx_pkt_prepare)
>>
>> Please change it to compare against NULL since you are touching the line anyway,
>> just to be consistent with the DPDK coding style and the lines above.
> 
> Ok, I am also fond of explicit comparisons :)

Thanks.

> 
>   
>>>   		return nb_pkts;
>>>
>>> -	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
>>> -			tx_pkts, nb_pkts);
>>> +	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
>>>   }
>>>
>>>   #else
>>> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
>>> index 904bce6ea1..79e62dcf61 100644
>>> --- a/lib/ethdev/version.map
>>> +++ b/lib/ethdev/version.map
>>> @@ -7,6 +7,8 @@ DPDK_22 {
>>>   	rte_eth_allmulticast_disable;
>>>   	rte_eth_allmulticast_enable;
>>>   	rte_eth_allmulticast_get;
>>> +	rte_eth_call_rx_callbacks;
>>> +	rte_eth_call_tx_callbacks;
>>>   	rte_eth_dev_adjust_nb_rx_tx_desc;
>>>   	rte_eth_dev_callback_register;
>>>   	rte_eth_dev_callback_unregister;
>>> @@ -76,6 +78,7 @@ DPDK_22 {
>>>   	rte_eth_find_next_of;
>>>   	rte_eth_find_next_owned_by;
>>>   	rte_eth_find_next_sibling;
>>> +	rte_eth_fp_ops;
>>>   	rte_eth_iterator_cleanup;
>>>   	rte_eth_iterator_init;
>>>   	rte_eth_iterator_next;
>>>
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures
  2021-10-11 15:54             ` Ananyev, Konstantin
@ 2021-10-11 17:04               ` Andrew Rybchenko
  0 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 17:04 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
	Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay

On 10/11/21 6:54 PM, Ananyev, Konstantin wrote:
> 
> 
>>
>> On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
>>> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>>> data into private header (ethdev_driver.h).
>>> Few minor changes to keep DPDK building after that.
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>
>> [snip]
>>
>> I realize that many notes below correspond to just
>> moved code, but I still think that it would be nice
>> to fix them anyway.
> 
> Yes, that's just cut and paste from rte_ethdev_core.h and ethdev_driver.h.
> It is probably a good idea to clean up and maybe re-layout rte_eth_dev and friends,
> but I don't think it has to be part of this patch set.
> After all, once it becomes an internal structure, that work can be done in any DPDK release.
> 

OK, I can do the cleanup as a follow-up patch.

^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
  2021-10-11 16:25             ` Ananyev, Konstantin
@ 2021-10-11 17:15               ` Andrew Rybchenko
  2021-10-11 23:06                 ` Ananyev, Konstantin
  0 siblings, 1 reply; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 17:15 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
	Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay

On 10/11/21 7:25 PM, Ananyev, Konstantin wrote:
> 
> 
>>> At queue configure stage always allocate space for maximum possible
>>> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
>>> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
>>> to the internal queue data pointer without extra checking of the current
>>> number of configured queues.
>>> That would help in future to hide rte_eth_dev and related structures.
>>> It means that from now on, each ethdev port will always consume:
>>> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
>>> bytes of memory for its queue pointers.
>>> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>> ---
>>>   lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
>>>   1 file changed, 9 insertions(+), 27 deletions(-)
>>>
>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>> index ed37f8871b..c8abda6dd7 100644
>>> --- a/lib/ethdev/rte_ethdev.c
>>> +++ b/lib/ethdev/rte_ethdev.c
>>> @@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>>>
>>>   	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
>>>   		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
>>> -				sizeof(dev->data->rx_queues[0]) * nb_queues,
>>> +				sizeof(dev->data->rx_queues[0]) *
>>> +				RTE_MAX_QUEUES_PER_PORT,
>>>   				RTE_CACHE_LINE_SIZE);
>>
>> Looking at it I have few questions:
>> 1. Why is nb_queues == 0 case kept as an exception? Yes,
>>     strictly speaking it is not the problem of the patch,
>>     DPDK will still segfault (non-debug build) if I
>>     allocate Tx queues only but call rte_eth_rx_burst().
> 
> eth_dev_rx_queue_config(.., nb_queues=0) is used in a few places to clean things up.

No, as far as I know. For Tx only application (e.g. traffic generator)
it is 100% legal to configure with tx_queues=X, rx_queues=0.
The same is for Rx only application (e.g. packet capture).
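[Editor's note: a minimal sketch of the configuration being referred to - a
Tx-only port set up through the public API. The queue count and port number
are illustrative only:]

	struct rte_eth_conf conf = {0};
	uint16_t port_id = 0;   /* example port */
	int ret;

	/* 0 Rx queues, 4 Tx queues: legal for a generator-style application */
	ret = rte_eth_dev_configure(port_id, 0, 4, &conf);
	if (ret < 0)
		rte_exit(EXIT_FAILURE, "cannot configure port %u\n", port_id);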

> 
>>     After reading the patch description I thought that
>>     we're trying to address it.
> 
> We do, though I can't see how we can address it in this patch.
> Though it is a good idea - I think I can add extra check in eth_dev_fp_ops_setup()
> or around and setup RX function pointers only when dev->data->rx_queues != NULL.
> Same for TX.

You don't need to care about these pointers, if these arrays are
always allocated. See (3) below.

> 
>> 2. Why do we need to allocate memory dynamically?
>>     Can we just make rx_queues an array of appropriate size?
> 
> Pavan already asked the same question.
> My answer to him:
> Yep, we can, and yes, it will simplify this piece of code.
> The main reason I decided not to do this change now -
> it will change the layout of the rte_eth_dev_data structure.
> In this series I tried to minimize(/avoid) changes in rte_eth_dev and rte_eth_dev_data
> as much as possible to avoid any unforeseen performance and functional impacts.
> If we manage to make rte_eth_dev and rte_eth_dev_data private, we can in future
> consider that and other changes in the rte_eth_dev and rte_eth_dev_data layouts
> without worrying about ABI breakage.

Thanks a lot. Makes sense.

>>     May be wasting 512K unconditionally is too much.
>> 3. If wasting 512K is too much, I'd consider to move
>>     allocation to eth_dev_get(). If
> 
> Don't understand where 512KB came from.

32 port * 1024 queues * 2 types * 8 pointer size
if we allocate as in (2) above.

> each ethdev port will always consume:
> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
> bytes of memory for its queue pointers.
> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.

IMHO it will be a bit nicer if queue pointers arrays are allocated
on device get if size is fixed. It is just a suggestion. If you
disagree, feel free to drop it.

>>>   		if (dev->data->rx_queues == NULL) {
>>>   			dev->data->nb_rx_queues = 0;
>>> @@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>>>
>>>   		rxq = dev->data->rx_queues;
>>>
>>> -		for (i = nb_queues; i < old_nb_queues; i++)
>>> +		for (i = nb_queues; i < old_nb_queues; i++) {
>>>   			(*dev->dev_ops->rx_queue_release)(rxq[i]);
>>> -		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
>>> -				RTE_CACHE_LINE_SIZE);
>>> -		if (rxq == NULL)
>>> -			return -(ENOMEM);
>>> -		if (nb_queues > old_nb_queues) {
>>> -			uint16_t new_qs = nb_queues - old_nb_queues;
>>> -
>>> -			memset(rxq + old_nb_queues, 0,
>>> -				sizeof(rxq[0]) * new_qs);
>>> +			rxq[i] = NULL;
>>
>> It looks like the patch should be rebased on top of
>> next-net main because of queue release patches.
>>
>> [snip]


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-11 16:52             ` Ananyev, Konstantin
@ 2021-10-11 17:22               ` Andrew Rybchenko
  0 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 17:22 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
	Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay

On 10/11/21 7:52 PM, Ananyev, Konstantin wrote:
> 
> 
>> On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
>>> Copy public function pointers (rx_pkt_burst(), etc.) and related
>>> pointers to internal data from rte_eth_dev structure into a
>>> separate flat array. That array will remain in a public header.
>>> The intention here is to make rte_eth_dev and related structures internal.
>>> That should allow future possible changes to core eth_dev structures
>>> to be transparent to the user and help to avoid ABI/API breakages.
>>> The plan is to keep minimal part of data from rte_eth_dev public,
>>> so we still can use inline functions for fast-path calls
>>> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>>> The whole idea behind this new scheme:
>>> 1. PMDs keep setting up fast-path function pointers and related data
>>>     inside rte_eth_dev struct in the same way they did it before.
>>> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>>>     (for secondary process) we call eth_dev_fp_ops_setup, which
>>>     copies these function and data pointers into rte_eth_fp_ops[port_id].
>>> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>>>     we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>>>     into some dummy values.
>>> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>>>     flat array to call PMD specific functions.
>>> That approach should allow us to make rte_eth_devices[] private
>>> without introducing regression and help to avoid changes in drivers code.
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>
>> Overall LGTM, few nits below.
>>
>>> ---
>>>   lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
>>>   lib/ethdev/ethdev_private.h  |  7 +++++
>>>   lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
>>>   lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>>>   4 files changed, 141 insertions(+)
>>>
>>> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
>>> index 012cf73ca2..3eeda6e9f9 100644
>>> --- a/lib/ethdev/ethdev_private.c
>>> +++ b/lib/ethdev/ethdev_private.c
>>> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>>>   		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>>>   	return str == NULL ? -1 : 0;
>>>   }
>>> +
>>> +static uint16_t
>>> +dummy_eth_rx_burst(__rte_unused void *rxq,
>>> +		__rte_unused struct rte_mbuf **rx_pkts,
>>> +		__rte_unused uint16_t nb_pkts)
>>> +{
>>> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
>>
>> Maybe "unconfigured" -> "stopped"? Or "non-started"?
> 
> Yes, it can be configured but not started.
> So 'not started' seems like a better wording here.
> Another option probably: 'not ready'.
> What do people think?

Taking into account that some PMDs would like to set dummy
pointers under some specific conditions, I think "not ready"
is the best option here.

> 
> ...
> 
>>
>>> +	rte_errno = ENOTSUP;
>>> +	return 0;
>>> +}
>>> +
>>> +struct rte_eth_fp_ops {
>>> +
>>> +	/**
>>> +	 * Rx fast-path functions and related data.
>>> +	 * 64-bit systems: occupies first 64B line
>>> +	 */
>>
>> As I understand it, the above comment is for the group of fields
>> below. If so, the Doxygen annotation for member groups should
>> be used.
> 
> Ok, and how to do it?
> 

See [1]

[1] https://www.doxygen.nl/manual/grouping.html#memgroup
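[Editor's note: for reference, the member-group syntax from the Doxygen manual
linked above, applied to the fragment under discussion. A sketch only - the
member type names (eth_rx_burst_t, struct rte_ethdev_qdata) are assumed here,
eth_rx_queue_count_t is taken from later in the thread, and the Tx half is
elided:]

struct rte_eth_fp_ops {
	/** @name Rx fast-path functions and related data.
	 *  64-bit systems: occupies first 64B line.
	 */
	/**@{*/
	/** PMD receive function. */
	eth_rx_burst_t rx_pkt_burst;
	/** Get the number of used Rx descriptors. */
	eth_rx_queue_count_t rx_queue_count;
	/** Rx queues data (queue pointers and callbacks). */
	struct rte_ethdev_qdata rxq;
	/**@}*/

	/* ... the Tx members get their own @name / @{ ... @} group ... */
};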


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
  2021-10-11 17:15               ` Andrew Rybchenko
@ 2021-10-11 23:06                 ` Ananyev, Konstantin
  2021-10-12  5:47                   ` Andrew Rybchenko
  0 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-11 23:06 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
	Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay


> >>> At queue configure stage always allocate space for maximum possible
> >>> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> >>> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> >>> to the internal queue data pointer without extra checking of the current
> >>> number of configured queues.
> >>> That would help in future to hide rte_eth_dev and related structures.
> >>> It means that from now on, each ethdev port will always consume:
> >>> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
> >>> bytes of memory for its queue pointers.
> >>> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
> >>>
> >>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >>> ---
> >>>   lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
> >>>   1 file changed, 9 insertions(+), 27 deletions(-)
> >>>
> >>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> >>> index ed37f8871b..c8abda6dd7 100644
> >>> --- a/lib/ethdev/rte_ethdev.c
> >>> +++ b/lib/ethdev/rte_ethdev.c
> >>> @@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
> >>>
> >>>   	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
> >>>   		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
> >>> -				sizeof(dev->data->rx_queues[0]) * nb_queues,
> >>> +				sizeof(dev->data->rx_queues[0]) *
> >>> +				RTE_MAX_QUEUES_PER_PORT,
> >>>   				RTE_CACHE_LINE_SIZE);
> >>
> >> Looking at it I have few questions:
> >> 1. Why is nb_queues == 0 case kept as an exception? Yes,
> >>     strictly speaking it is not the problem of the patch,
> >>     DPDK will still segfault (non-debug build) if I
> >>     allocate Tx queues only but call rte_eth_rx_burst().
> >
> > eth_dev_rx_queue_config(.., nb_queues=0) is used in a few places to clean things up.
> 
> No, as far as I know. For Tx only application (e.g. traffic generator)
> it is 100% legal to configure with tx_queues=X, rx_queues=0.
> The same is for Rx only application (e.g. packet capture).

Yes, that is valid config for sure.
I just pointed that simply ignoring 'nb_queues' value and
always allocating space for max possible queues, i.e:

eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues) 
{
....
- if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
+ if (dev->data->rx_queues == NULL) {
wouldn't work, as right now nb_queues == 0 has extra special meaning -
do final cleanup and free dev->data->rx_queues.
But re-reading the text below, it seems that I misunderstood you
and it probably wasn't your intention anyway.

> 
> >
> >>     After reading the patch description I thought that
> >>     we're trying to address it.
> >
> > We do, though I can't see how we can address it in this patch.
> > Though it is a good idea - I think I can add extra check in eth_dev_fp_ops_setup()
> > or around and setup RX function pointers only when dev->data->rx_queues != NULL.
> > Same for TX.
> 
> You don't need to care about these pointers, if these arrays are
> always allocated. See (3) below.
> 
> >
> >> 2. Why do we need to allocate memory dynamically?
> >>     Can we just make rx_queues an array of appropriate size?
> >
> > Pavan already asked the same question.
> > My answer to him:
> > Yep, we can, and yes, it will simplify this piece of code.
> > The main reason I decided not to do this change now -
> > it will change the layout of the rte_eth_dev_data structure.
> > In this series I tried to minimize(/avoid) changes in rte_eth_dev and rte_eth_dev_data
> > as much as possible to avoid any unforeseen performance and functional impacts.
> > If we manage to make rte_eth_dev and rte_eth_dev_data private, we can in future
> > consider that and other changes in the rte_eth_dev and rte_eth_dev_data layouts
> > without worrying about ABI breakage.
> 
> Thanks a lot. Makes sense.
> 
> >>     May be wasting 512K unconditionally is too much.
> >> 3. If wasting 512K is too much, I'd consider to move
> >>     allocation to eth_dev_get(). If
> >
> > Don't understand where 512KB came from.
> 
> 32 port * 1024 queues * 2 types * 8 pointer size
> if we allocate as in (2) above.
> 
> > each ethdev port will always consume:
> > ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
> > bytes of memory for its queue pointers.
> > With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
> 
> IMHO it will be a bit nicer if queue pointers arrays are allocated
> on device get if size is fixed. It is just a suggestion. If you
> disagree, feel free to drop it.

You mean - allocate these arrays somewhere at rte_eth_dev_allocate() path?
That sounds like an interesting idea, but seems too drastic to me at that stage.
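[Editor's note: a rough sketch of the alternative being discussed - allocating
the fixed-size queue pointer arrays once, on the rte_eth_dev_allocate()/
eth_dev_get() path, instead of at queue configure time. This is not part of the
patch series; the helper name and its placement are hypothetical, only the
rte_zmalloc() call and array sizes come from the quoted diff.]

#include <errno.h>
#include <rte_malloc.h>
#include <ethdev_driver.h>   /* struct rte_eth_dev_data */

/* Hypothetical helper, called once per port allocation: size the queue
 * pointer arrays for RTE_MAX_QUEUES_PER_PORT up front, so queue configure
 * never has to reallocate them.
 */
static int
eth_dev_queue_arrays_alloc(struct rte_eth_dev_data *data)
{
	data->rx_queues = rte_zmalloc("ethdev->rx_queues",
			sizeof(data->rx_queues[0]) * RTE_MAX_QUEUES_PER_PORT,
			RTE_CACHE_LINE_SIZE);
	data->tx_queues = rte_zmalloc("ethdev->tx_queues",
			sizeof(data->tx_queues[0]) * RTE_MAX_QUEUES_PER_PORT,
			RTE_CACHE_LINE_SIZE);
	if (data->rx_queues == NULL || data->tx_queues == NULL) {
		rte_free(data->rx_queues);
		rte_free(data->tx_queues);
		data->rx_queues = NULL;
		data->tx_queues = NULL;
		return -ENOMEM;
	}
	return 0;
}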

> 
> >>>   		if (dev->data->rx_queues == NULL) {
> >>>   			dev->data->nb_rx_queues = 0;
> >>> @@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
> >>>
> >>>   		rxq = dev->data->rx_queues;
> >>>
> >>> -		for (i = nb_queues; i < old_nb_queues; i++)
> >>> +		for (i = nb_queues; i < old_nb_queues; i++) {
> >>>   			(*dev->dev_ops->rx_queue_release)(rxq[i]);
> >>> -		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
> >>> -				RTE_CACHE_LINE_SIZE);
> >>> -		if (rxq == NULL)
> >>> -			return -(ENOMEM);
> >>> -		if (nb_queues > old_nb_queues) {
> >>> -			uint16_t new_qs = nb_queues - old_nb_queues;
> >>> -
> >>> -			memset(rxq + old_nb_queues, 0,
> >>> -				sizeof(rxq[0]) * new_qs);
> >>> +			rxq[i] = NULL;
> >>
> >> It looks like the patch should be rebased on top of
> >> next-net main because of queue release patches.
> >>
> >> [snip]


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array
  2021-10-11 23:06                 ` Ananyev, Konstantin
@ 2021-10-12  5:47                   ` Andrew Rybchenko
  0 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-12  5:47 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, mczekaj, jiawenwu, jianwang, maxime.coquelin, Xia,
	Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay

On 10/12/21 2:06 AM, Ananyev, Konstantin wrote:
>>>>> At queue configure stage always allocate space for maximum possible
>>>>> number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
>>>>> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
>>>>> to the internal queue data pointer without extra checking of the current
>>>>> number of configured queues.
>>>>> That would help in future to hide rte_eth_dev and related structures.
>>>>> It means that from now on, each ethdev port will always consume:
>>>>> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
>>>>> bytes of memory for its queue pointers.
>>>>> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
>>>>>
>>>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>>>> ---
>>>>>   lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
>>>>>   1 file changed, 9 insertions(+), 27 deletions(-)
>>>>>
>>>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>>>> index ed37f8871b..c8abda6dd7 100644
>>>>> --- a/lib/ethdev/rte_ethdev.c
>>>>> +++ b/lib/ethdev/rte_ethdev.c
>>>>> @@ -897,7 +897,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>>>>>
>>>>>   	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
>>>>>   		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
>>>>> -				sizeof(dev->data->rx_queues[0]) * nb_queues,
>>>>> +				sizeof(dev->data->rx_queues[0]) *
>>>>> +				RTE_MAX_QUEUES_PER_PORT,
>>>>>   				RTE_CACHE_LINE_SIZE);
>>>>
>>>> Looking at it I have few questions:
>>>> 1. Why is nb_queues == 0 case kept as an exception? Yes,
>>>>     strictly speaking it is not the problem of the patch,
>>>>     DPDK will still segfault (non-debug build) if I
>>>>     allocate Tx queues only but call rte_eth_rx_burst().
>>>
>>> eth_dev_rx_queue_config(.., nb_queues=0) is used in a few places to clean things up.
>>
>> No, as far as I know. For Tx only application (e.g. traffic generator)
>> it is 100% legal to configure with tx_queues=X, rx_queues=0.
>> The same is for Rx only application (e.g. packet capture).
> 
> Yes, that is valid config for sure.
> I just pointed that simply ignoring 'nb_queues' value and
> always allocating space for max possible queues, i.e:
> 
> eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues) 
> {
> ....
> - if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
> + if (dev->data->rx_queues == NULL) {
> wouldn't work, as right now nb_queues == 0 has extra special meaning -
> do final cleanup and free dev->data->rx_queues.
> But re-reading the text below, it seems that I misunderstood you
> and it probably wasn't your intention anyway.
> 
>>
>>>
>>>>     After reading the patch description I thought that
>>>>     we're trying to address it.
>>>
>>> We do, though I can't see how we can address it in this patch.
>>> Though it is a good idea - I think I can add extra check in eth_dev_fp_ops_setup()
>>> or around and setup RX function pointers only when dev->data->rx_queues != NULL.
>>> Same for TX.
>>
>> You don't need to care about these pointers, if these arrays are
>> always allocated. See (3) below.
>>
>>>
>>>> 2. Why do we need to allocate memory dynamically?
>>>>     Can we just make rx_queues an array of appropriate size?
>>>
>>> Pavan already asked the same question.
>>> My answer to him:
>>> Yep, we can, and yes, it will simplify this piece of code.
>>> The main reason I decided not to do this change now -
>>> it will change the layout of the rte_eth_dev_data structure.
>>> In this series I tried to minimize(/avoid) changes in rte_eth_dev and rte_eth_dev_data
>>> as much as possible to avoid any unforeseen performance and functional impacts.
>>> If we manage to make rte_eth_dev and rte_eth_dev_data private, we can in future
>>> consider that and other changes in the rte_eth_dev and rte_eth_dev_data layouts
>>> without worrying about ABI breakage.
>>
>> Thanks a lot. Makes sense.
>>
>>>>     May be wasting 512K unconditionally is too much.
>>>> 3. If wasting 512K is too much, I'd consider to move
>>>>     allocation to eth_dev_get(). If
>>>
>>> Don't understand where 512KB came from.
>>
>> 32 port * 1024 queues * 2 types * 8 pointer size
>> if we allocate as in (2) above.
>>
>>> each ethdev port will always consume:
>>> ((2*sizeof(uintptr_t))* RTE_MAX_QUEUES_PER_PORT)
>>> bytes of memory for its queue pointers.
>>> With RTE_MAX_QUEUES_PER_PORT==1024 (default value) it is 16KB per port.
>>
>> IMHO it will be a bit nicer if queue pointers arrays are allocated
>> on device get if size is fixed. It is just a suggestion. If you
>> disagree, feel free to drop it.
> 
> You mean - allocate these arrays somewhere at rte_eth_dev_allocate() path?

Yes, eth_dev_get() mentioned above is called from
rte_eth_dev_allocate().

> That sounds like an interesting idea, but seems too drastic to me at that stage.

Yes, of course, we can address it later.

> 
>>
>>>>>   		if (dev->data->rx_queues == NULL) {
>>>>>   			dev->data->nb_rx_queues = 0;
>>>>> @@ -908,21 +909,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
>>>>>
>>>>>   		rxq = dev->data->rx_queues;
>>>>>
>>>>> -		for (i = nb_queues; i < old_nb_queues; i++)
>>>>> +		for (i = nb_queues; i < old_nb_queues; i++) {
>>>>>   			(*dev->dev_ops->rx_queue_release)(rxq[i]);
>>>>> -		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
>>>>> -				RTE_CACHE_LINE_SIZE);
>>>>> -		if (rxq == NULL)
>>>>> -			return -(ENOMEM);
>>>>> -		if (nb_queues > old_nb_queues) {
>>>>> -			uint16_t new_qs = nb_queues - old_nb_queues;
>>>>> -
>>>>> -			memset(rxq + old_nb_queues, 0,
>>>>> -				sizeof(rxq[0]) * new_qs);
>>>>> +			rxq[i] = NULL;
>>>>
>>>> It looks like the patch should be rebased on top of
>>>> next-net main because of queue release patches.
>>>>
>>>> [snip]
> 


^ permalink raw reply	[flat|nested] 112+ messages in thread

* Re: [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count
  2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
  2021-10-11  8:06           ` Andrew Rybchenko
@ 2021-10-12 17:59           ` Hyong Youb Kim (hyonkim)
  1 sibling, 0 replies; 112+ messages in thread
From: Hyong Youb Kim (hyonkim) @ 2021-10-12 17:59 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, John Daley (johndale),
	qi.z.zhang, xiao.w.wang, humin29, yisen.zhuang, oulijun,
	beilei.xing, jingjing.wu, qiming.yang, matan, viacheslavo,
	sthemmin, longli, heinrich.kuhn, kirankumark, andrew.rybchenko,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

> -----Original Message-----
> From: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Sent: Thursday, October 7, 2021 8:28 PM
[...]
> Subject: [PATCH v5 3/7] ethdev: change input parameters for
> rx_queue_count
> 
> Currently the majority of fast-path ethdev ops take pointers to internal
> queue data structures as an input parameter,
> while eth_rx_queue_count() takes a pointer to rte_eth_dev and a queue
> index.
> For future work to hide rte_eth_devices[] and friends it would be
> plausible to unify the parameter list of all fast-path ethdev ops.
> This patch changes eth_rx_queue_count() to accept a pointer to internal
> queue data as an input parameter.
> While this change is transparent to the user, it still counts as an ABI change,
> as eth_rx_queue_count_t is used by the public ethdev inline function
> rte_eth_rx_queue_count().
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---

For net/enic,

Acked-by: Hyong Youb Kim <hyonkim@cisco.com>

Thanks.
-Hyong


^ permalink raw reply	[flat|nested] 112+ messages in thread

end of thread, other threads:[~2021-10-12 17:59 UTC | newest]

Thread overview: 112+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-20 16:28 [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
2021-08-20 16:28 ` [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure Konstantin Ananyev
2021-08-20 16:28 ` [dpdk-dev] [RFC 2/7] eth: make drivers to use new API for Rx Konstantin Ananyev
2021-09-06 18:41   ` Ferruh Yigit
2021-09-14 14:28     ` Ananyev, Konstantin
2021-08-20 16:28 ` [dpdk-dev] [RFC 3/7] eth: make drivers to use new API for Tx Konstantin Ananyev
2021-08-20 16:28 ` [dpdk-dev] [RFC 4/7] eth: make drivers to use new API for Tx prepare Konstantin Ananyev
2021-08-20 16:28 ` [dpdk-dev] [RFC 5/7] eth: make drivers to use new API to obtain descriptor status Konstantin Ananyev
2021-08-20 16:28 ` [dpdk-dev] [RFC 6/7] eth: make drivers to use new API for Rx queue count Konstantin Ananyev
2021-08-20 16:28 ` [dpdk-dev] [RFC 7/7] eth: hide eth dev related structures Konstantin Ananyev
2021-08-26 12:37 ` [dpdk-dev] [RFC 0/7] " Jerin Jacob
2021-09-06 18:09   ` Ferruh Yigit
2021-09-14 13:33   ` Ananyev, Konstantin
2021-09-15  9:45     ` Jerin Jacob
2021-09-22 15:08       ` Ananyev, Konstantin
2021-09-27 16:14         ` Jerin Jacob
2021-09-28  9:37           ` Ananyev, Konstantin
2021-09-22 14:09 ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
2021-09-22 14:09   ` [dpdk-dev] [RFC v2 1/5] ethdev: allocate max space for internal queue array Konstantin Ananyev
2021-09-22 14:09   ` [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-09-23  5:51     ` Wang, Haiyue
2021-09-22 14:09   ` [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure Konstantin Ananyev
2021-09-23  5:58     ` Wang, Haiyue
2021-09-27 18:01       ` Jerin Jacob
2021-09-28  9:42         ` Ananyev, Konstantin
2021-09-22 14:09   ` [dpdk-dev] [RFC v2 4/5] ethdev: make burst functions to use new flat array Konstantin Ananyev
2021-09-22 14:09   ` [dpdk-dev] [RFC v2 5/5] ethdev: hide eth dev related structures Konstantin Ananyev
2021-10-01 14:02   ` [dpdk-dev] [PATCH v3 0/7] " Konstantin Ananyev
2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
2021-10-01 16:48       ` Ferruh Yigit
2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
2021-10-01 16:46       ` Ferruh Yigit
2021-10-01 17:40         ` Ananyev, Konstantin
2021-10-04  8:46           ` Ferruh Yigit
2021-10-04  9:20             ` Ananyev, Konstantin
2021-10-04 10:13               ` Ferruh Yigit
2021-10-04 11:17                 ` Ananyev, Konstantin
2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 5/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 6/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
2021-10-01 14:02     ` [dpdk-dev] [PATCH v3 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
2021-10-01 16:53       ` Ferruh Yigit
2021-10-01 17:04         ` Ferruh Yigit
2021-10-01 17:02     ` [dpdk-dev] [PATCH v3 0/7] " Ferruh Yigit
2021-10-04 13:55     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
2021-10-05 12:09         ` Thomas Monjalon
2021-10-05 16:45           ` Ananyev, Konstantin
2021-10-05 16:49             ` Thomas Monjalon
2021-10-05 12:21         ` Thomas Monjalon
2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-04 13:55       ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
2021-10-05 13:09         ` Thomas Monjalon
2021-10-05 16:41           ` Ananyev, Konstantin
2021-10-05 16:48             ` Thomas Monjalon
2021-10-05 17:04               ` Ananyev, Konstantin
2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
2021-10-05  9:54         ` David Marchand
2021-10-05 10:13           ` Ananyev, Konstantin
2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
2021-10-05 13:13         ` Thomas Monjalon
2021-10-05 16:35           ` Ananyev, Konstantin
2021-10-05 16:45             ` Thomas Monjalon
2021-10-05 17:12               ` Ananyev, Konstantin
2021-10-05 17:41                 ` Thomas Monjalon
2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 6/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
2021-10-05 13:14         ` Thomas Monjalon
2021-10-05 16:21           ` Ananyev, Konstantin
2021-10-04 13:56       ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
2021-10-05 10:04         ` David Marchand
2021-10-05 10:43           ` Ferruh Yigit
2021-10-05 11:37             ` David Marchand
2021-10-05 15:57               ` Ananyev, Konstantin
2021-10-05 13:24         ` Thomas Monjalon
2021-10-05 16:19           ` Ananyev, Konstantin
2021-10-05 16:25             ` Thomas Monjalon
2021-10-06 16:42       ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
2021-10-06 17:26         ` Ali Alnubani
2021-10-07 11:27       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 1/7] ethdev: remove legacy Rx descriptor done API Konstantin Ananyev
2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 2/7] ethdev: allocate max space for internal queue array Konstantin Ananyev
2021-10-11  9:20           ` Andrew Rybchenko
2021-10-11 16:25             ` Ananyev, Konstantin
2021-10-11 17:15               ` Andrew Rybchenko
2021-10-11 23:06                 ` Ananyev, Konstantin
2021-10-12  5:47                   ` Andrew Rybchenko
2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-11  8:06           ` Andrew Rybchenko
2021-10-12 17:59           ` Hyong Youb Kim (hyonkim)
2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
2021-10-09 12:05           ` fengchengwen
2021-10-11  1:18             ` fengchengwen
2021-10-11  8:39               ` Andrew Rybchenko
2021-10-11 15:24               ` Ananyev, Konstantin
2021-10-11  8:35             ` Andrew Rybchenko
2021-10-11 15:15             ` Ananyev, Konstantin
2021-10-11  8:25           ` Andrew Rybchenko
2021-10-11 16:52             ` Ananyev, Konstantin
2021-10-11 17:22               ` Andrew Rybchenko
2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
2021-10-11  9:02           ` Andrew Rybchenko
2021-10-11 15:47             ` Ananyev, Konstantin
2021-10-11 17:03               ` Andrew Rybchenko
2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 6/7] ethdev: add API to retrieve multiple ethernet addresses Konstantin Ananyev
2021-10-11  9:09           ` Andrew Rybchenko
2021-10-07 11:27         ` [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
2021-10-11  9:20           ` Andrew Rybchenko
2021-10-11 15:54             ` Ananyev, Konstantin
2021-10-11 17:04               ` Andrew Rybchenko
2021-10-08 18:13         ` [dpdk-dev] [PATCH v5 0/7] " Slava Ovsiienko
2021-10-11  9:22         ` Andrew Rybchenko
