From mboxrd@z Thu Jan 1 00:00:00 1970
From: Feifei Wang
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: dev@dpdk.org, konstantin.v.ananyev@yandex.ru, mb@smartsharesystems.com,
 nd@arm.com, Feifei Wang, Honnappa Nagarahalli, Ruifeng Wang
Subject: [PATCH v5 1/3] ethdev: add API for buffer recycle mode
Date: Thu, 30 Mar 2023 14:29:37 +0800
Message-Id: <20230330062939.1206267-2-feifei.wang2@arm.com>
In-Reply-To: <20230330062939.1206267-1-feifei.wang2@arm.com>
References: <20211224164613.32569-1-feifei.wang2@arm.com>
 <20230330062939.1206267-1-feifei.wang2@arm.com>
List-Id: DPDK patches and discussions

There are four upper-level APIs for buffer recycle mode:

1.
'rte_eth_rx_queue_buf_recycle_info_get'
   This retrieves the buffer ring information of a given port's Rx queue
   in buffer recycle mode. Thanks to this, buffer recycling is no longer
   limited to Rx and Tx queues belonging to the same driver.
2. 'rte_eth_dev_buf_recycle'
   Users call this API to enable buffer recycle mode in the data path.
   It wraps two internal APIs, one for Rx and one for Tx.
3. 'rte_eth_tx_buf_stash'
   Internal API for buffer recycle mode. It stashes used Tx buffers
   into the Rx buffer ring.
4. 'rte_eth_rx_descriptors_refill'
   Internal API for buffer recycle mode. It refills Rx descriptors.

All the above APIs are implemented only at the ethdev level; each
driver still needs to define its own specific callbacks for them.

Suggested-by: Honnappa Nagarahalli
Suggested-by: Ruifeng Wang
Signed-off-by: Feifei Wang
Reviewed-by: Ruifeng Wang
Reviewed-by: Honnappa Nagarahalli
---
 lib/ethdev/ethdev_driver.h   |  10 ++
 lib/ethdev/ethdev_private.c  |   2 +
 lib/ethdev/rte_ethdev.c      |  33 +++++
 lib/ethdev/rte_ethdev.h      | 230 +++++++++++++++++++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h |  15 ++-
 lib/ethdev/version.map       |   6 +
 6 files changed, 294 insertions(+), 2 deletions(-)

diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 2c9d615fb5..412f064975 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -59,6 +59,10 @@ struct rte_eth_dev {
 	eth_rx_descriptor_status_t rx_descriptor_status;
 	/** Check the status of a Tx descriptor */
 	eth_tx_descriptor_status_t tx_descriptor_status;
+	/** Stash Tx used buffers into RX ring in buffer recycle mode */
+	eth_tx_buf_stash_t tx_buf_stash;
+	/** Refill Rx descriptors in buffer recycle mode */
+	eth_rx_descriptors_refill_t rx_descriptors_refill;
 
 	/**
 	 * Device data that is shared between primary and secondary processes
@@ -504,6 +508,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
 
 typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
 	uint16_t tx_queue_id, struct
 	rte_eth_txq_info *qinfo);
 
+typedef void (*eth_rxq_buf_recycle_info_get_t)(struct rte_eth_dev *dev,
+	uint16_t rx_queue_id,
+	struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info);
+
 typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
 	uint16_t queue_id, struct rte_eth_burst_mode *mode);
@@ -1247,6 +1255,8 @@ struct eth_dev_ops {
 	eth_rxq_info_get_t rxq_info_get;
 	/** Retrieve Tx queue information */
 	eth_txq_info_get_t txq_info_get;
+	/** Get Rx queue buffer recycle information */
+	eth_rxq_buf_recycle_info_get_t rxq_buf_recycle_info_get;
 	eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */
 	eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */
 	eth_fw_version_get_t fw_version_get; /**< Get firmware version */

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 14ec8c6ccf..f8d0ae9226 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->rx_queue_count = dev->rx_queue_count;
 	fpo->rx_descriptor_status = dev->rx_descriptor_status;
 	fpo->tx_descriptor_status = dev->tx_descriptor_status;
+	fpo->tx_buf_stash = dev->tx_buf_stash;
+	fpo->rx_descriptors_refill = dev->rx_descriptors_refill;
 
 	fpo->rxq.data = dev->data->rx_queues;
 	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d03255683..36c3a17588 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5784,6 +5784,39 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	return 0;
 }
 
+int
+rte_eth_rx_queue_buf_recycle_info_get(uint16_t port_id, uint16_t queue_id,
+	struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info)
+{
+	struct rte_eth_dev *dev;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+
+	if (queue_id >= dev->data->nb_rx_queues) {
+		RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id);
+		return -EINVAL;
+	}
+
+	RTE_ASSERT(rxq_buf_recycle_info != NULL);
+
+	if (dev->data->rx_queues == NULL ||
+			dev->data->rx_queues[queue_id] == NULL) {
+		RTE_ETHDEV_LOG(ERR,
+			"Rx queue %"PRIu16" of device with port_id=%"
+			PRIu16" has not been setup\n",
+			queue_id, port_id);
+		return -EINVAL;
+	}
+
+	if (*dev->dev_ops->rxq_buf_recycle_info_get == NULL)
+		return -ENOTSUP;
+
+	dev->dev_ops->rxq_buf_recycle_info_get(dev, queue_id, rxq_buf_recycle_info);
+
+	return 0;
+}
+
 int
 rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 	struct rte_eth_burst_mode *mode)

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..016c16615d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1820,6 +1820,29 @@ struct rte_eth_txq_info {
 	uint8_t queue_state;        /**< one of RTE_ETH_QUEUE_STATE_*. */
 } __rte_cache_min_aligned;
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice.
+ *
+ * Ethernet device Rx queue buffer ring information structure in buffer
+ * recycle mode. Used to retrieve the Rx queue buffer ring information
+ * when the Tx queue is stashing used buffers into the Rx buffer ring.
+ */
+struct rte_eth_rxq_buf_recycle_info {
+	struct rte_mbuf **buf_ring;	/**< buffer ring of Rx queue. */
+	struct rte_mempool *mp;		/**< mempool of Rx queue. */
+	uint16_t *refill_head;		/**< head of buffer ring refilling descriptors. */
+	uint16_t *receive_tail;		/**< tail of buffer ring receiving pkts. */
+	uint16_t buf_ring_size;		/**< configured size of the buffer ring. */
+	/**
+	 * Requested number of Rx refill buffers.
+	 * For some PMDs, the number of Rx refill buffers should be aligned
+	 * with the buffer ring size. This is to simplify ring wraparound.
+	 * Value 0 means there is no such requirement.
+	 */
+	uint16_t refill_request;
+} __rte_cache_min_aligned;
+
 /* Generic Burst mode flag definition, values can be ORed.
  */
 /**
@@ -4809,6 +4832,32 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	struct rte_eth_txq_info *qinfo);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Retrieve the buffer ring information of a given port's Rx queue in buffer
+ * recycle mode.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The Rx queue on the Ethernet device for which the buffer ring information
+ *   will be retrieved.
+ * @param rxq_buf_recycle_info
+ *   A pointer to a structure of type *rte_eth_rxq_buf_recycle_info* to be filled.
+ *
+ * @return
+ *   - 0: Success.
+ *   - -ENODEV: If *port_id* is invalid.
+ *   - -ENOTSUP: Routine is not supported by the device PMD.
+ *   - -EINVAL: The queue_id is out of range.
+ */
+__rte_experimental
+int rte_eth_rx_queue_buf_recycle_info_get(uint16_t port_id,
+	uint16_t queue_id,
+	struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info);
+
 /**
  * Retrieve information about the Rx packet burst mode.
  *
@@ -5987,6 +6036,71 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	return (int)(*p->rx_queue_count)(qd);
 }
 
+/**
+ * @internal
+ * Rx routine for rte_eth_dev_buf_recycle().
+ * Refill Rx descriptors in buffer recycle mode.
+ *
+ * @note
+ * This API can only be called by rte_eth_dev_buf_recycle().
+ * Before calling this API, rte_eth_tx_buf_stash() should be
+ * called to stash used Tx buffers into the Rx buffer ring.
+ *
+ * When this functionality is not implemented in the driver, the returned
+ * buffer number is 0.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the receive queue.
+ *   The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ *   to rte_eth_dev_configure().
+ * @param nb
+ *   The number of Rx descriptors to be refilled.
+ * @return
+ *   The number of Rx descriptors actually refilled.
+ *   - ENODEV: bad port or queue (only if compiled with debug).
+ */
+static inline uint16_t rte_eth_rx_descriptors_refill(uint16_t port_id,
+	uint16_t queue_id, uint16_t nb)
+{
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+#endif
+
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid Rx port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+#endif
+
+	if (p->rx_descriptors_refill == NULL)
+		return 0;
+
+	return p->rx_descriptors_refill(qd, nb);
+}
+
 /**@{@name Rx hardware descriptor states
  * @see rte_eth_rx_descriptor_status
  */
@@ -6483,6 +6597,122 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
 	return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
 }
 
+/**
+ * @internal
+ * Tx routine for rte_eth_dev_buf_recycle().
+ * Stash used Tx buffers into the Rx buffer ring in buffer recycle mode.
+ *
+ * @note
+ * This API can only be called by rte_eth_dev_buf_recycle().
+ * After calling this API, rte_eth_rx_descriptors_refill() should be
+ * called to refill Rx ring descriptors.
+ *
+ * When this functionality is not implemented in the driver, the returned
+ * buffer number is 0.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue.
+ *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ *   to rte_eth_dev_configure().
+ * @param rxq_buf_recycle_info
+ *   A pointer to a structure of Rx queue buffer ring information in buffer
+ *   recycle mode.
+ *
+ * @return
+ *   The number of buffers stashed into the Rx buffer ring.
+ *   - ENODEV: bad port or queue (only if compiled with debug).
+ */
+static inline uint16_t rte_eth_tx_buf_stash(uint16_t port_id, uint16_t queue_id,
+	struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info)
+{
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+#endif
+
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid Tx port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+#endif
+
+	if (p->tx_buf_stash == NULL)
+		return 0;
+
+	return p->tx_buf_stash(qd, rxq_buf_recycle_info);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Buffer recycle mode lets the Tx queue put used buffers directly into the
+ * Rx buffer ring. This avoids freeing buffers into the mempool and
+ * allocating buffers from the mempool.
+ *
+ * @param rx_port_id
+ *   Port identifying the receive side.
+ * @param rx_queue_id
+ *   The index of the receive queue identifying the receive side.
+ *   The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ *   to rte_eth_dev_configure().
+ * @param tx_port_id
+ *   Port identifying the transmit side.
+ * @param tx_queue_id
+ *   The index of the transmit queue identifying the transmit side.
+ *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ *   to rte_eth_dev_configure().
+ * @param rxq_buf_recycle_info
+ *   A pointer to a structure of type *rte_eth_rxq_buf_recycle_info* to be
+ *   filled.
+ * @return
+ *   - (0) on success or no recycling buffer.
+ *   - (-EINVAL) rxq_buf_recycle_info is NULL.
+ */
+__rte_experimental
+static inline int
+rte_eth_dev_buf_recycle(uint16_t rx_port_id, uint16_t rx_queue_id,
+	uint16_t tx_port_id, uint16_t tx_queue_id,
+	struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info)
+{
+	/* The number of recycling buffers. */
+	uint16_t nb_buf;
+
+	if (!rxq_buf_recycle_info)
+		return -EINVAL;
+
+	/* Stash used Tx buffers into the Rx buffer ring */
+	nb_buf = rte_eth_tx_buf_stash(tx_port_id, tx_queue_id,
+			rxq_buf_recycle_info);
+	/* If there are recycling buffers, refill Rx queue descriptors. */
+	if (nb_buf)
+		rte_eth_rx_descriptors_refill(rx_port_id, rx_queue_id,
+				nb_buf);
+
+	return 0;
+}
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice

diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index dcf8adab92..a138fd4dbc 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -56,6 +56,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 /** @internal Check the status of a Tx descriptor */
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 
+/** @internal Stash Tx used buffers into RX ring in buffer recycle mode */
+typedef uint16_t (*eth_tx_buf_stash_t)(void *txq,
+	struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info);
+
+/** @internal Refill Rx descriptors in buffer recycle mode */
+typedef uint16_t (*eth_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
+
 /**
  * @internal
  * Structure used to hold opaque pointers to internal ethdev Rx/Tx
@@ -90,9 +97,11 @@ struct rte_eth_fp_ops {
 	eth_rx_queue_count_t rx_queue_count;
 	/** Check the status of a Rx descriptor. */
 	eth_rx_descriptor_status_t rx_descriptor_status;
+	/** Refill Rx descriptors in buffer recycle mode */
+	eth_rx_descriptors_refill_t rx_descriptors_refill;
 	/** Rx queues data.
  */
 	struct rte_ethdev_qdata rxq;
-	uintptr_t reserved1[3];
+	uintptr_t reserved1[4];
 	/**@}*/
 
 	/**@{*/
@@ -106,9 +115,11 @@ struct rte_eth_fp_ops {
 	eth_tx_prep_t tx_pkt_prepare;
 	/** Check the status of a Tx descriptor. */
 	eth_tx_descriptor_status_t tx_descriptor_status;
+	/** Stash Tx used buffers into RX ring in buffer recycle mode */
+	eth_tx_buf_stash_t tx_buf_stash;
 	/** Tx queues data. */
 	struct rte_ethdev_qdata txq;
-	uintptr_t reserved2[3];
+	uintptr_t reserved2[4];
 	/**@}*/
 } __rte_cache_aligned;

diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 357d1a88c0..8a4b1dac80 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -299,6 +299,10 @@ EXPERIMENTAL {
 	rte_flow_action_handle_query_update;
 	rte_flow_async_action_handle_query_update;
 	rte_flow_async_create_by_index;
+
+	# added in 23.07
+	rte_eth_dev_buf_recycle;
+	rte_eth_rx_queue_buf_recycle_info_get;
 };
 
 INTERNAL {
@@ -328,4 +332,6 @@ INTERNAL {
 	rte_eth_representor_id_get;
 	rte_eth_switch_domain_alloc;
 	rte_eth_switch_domain_free;
+	rte_eth_tx_buf_stash;
+	rte_eth_rx_descriptors_refill;
 };
-- 
2.25.1