From: Ferruh Yigit <ferruh.yigit@amd.com>
To: "Konstantin Ananyev" <konstantin.ananyev@huawei.com>,
	"Feifei Wang" <Feifei.Wang2@arm.com>,
	"Константин Ананьев" <konstantin.v.ananyev@yandex.ru>,
	"thomas@monjalon.net" <thomas@monjalon.net>,
	"Andrew Rybchenko" <andrew.rybchenko@oktetlabs.ru>
Cc: "dev@dpdk.org" <dev@dpdk.org>, nd <nd@arm.com>,
	Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>,
	Ruifeng Wang <Ruifeng.Wang@arm.com>
Subject: Re: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
Date: Wed, 7 Jun 2023 01:00:42 +0100	[thread overview]
Message-ID: <4a6e9e73-f311-854f-98c8-8fb4d0df07de@amd.com> (raw)
In-Reply-To: <07e46ddd2d6d4e3ca7f9958ecc1fa5b7@huawei.com>

On 6/6/2023 9:34 AM, Konstantin Ananyev wrote:
> 
> 
>>
>> [...]
>>>> Probably I am missing something, but why it is not possible to do
>>>> something like that:
>>>>
>>>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
>>>> tx_queue_id=M, ...); ....
>>>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
>>>> tx_queue_id=K, ...);
>>>>
>>>> I.E. feed rx queue from 2 tx queues?
>>>>
>>>> Two problems with this:
>>>> 1. If we have 2 Tx queues for one Rx queue, the thread has to make an
>>>> extra judgement to decide which one to choose in the driver layer.
>>>
>>> Not sure, why on the driver layer?
>>> The example I gave above - decision is made on application layer.
>>> Let's say the first call didn't free enough mbufs, so the app decided to
>>> use the second txq for rearm.
>> [Feifei] I think the current mbuf recycle mode can support this usage. For example:
>> n =  rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=M, ...);
>> if (n < planned_number)
>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=K, ...);
>>
>> Thus, if users want, they can do like this.
> 
> Yes, that was my thought, that's why I was surprised that in the comments we have:
> " Currently, the rte_eth_recycle_mbufs() function can only support one-time pairing
> * between the receive queue and transmit queue. Do not pair one receive queue with
>  * multiple transmit queues or pair one transmit queue with multiple receive queues,
>  * in order to avoid memory error rewriting."
> 

I guess that comment is left over from previous versions of the set; it would
be good to revisit the limitations/restrictions in the latest version.


>>
>>>
>>>> On the other hand, the current mechanism lets users switch from one txq
>>>> to another promptly in the application layer. If the user wants to choose
>>>> another txq, they just need to change the tx_queue_id parameter in the API.
>>>> 2. If one rxq is to be fed from two txqs at the same time, we would need
>>>> to add a spinlock on a guard variable to avoid multi-thread conflicts.
>>>> A spinlock would greatly decrease data-path performance. Thus, we do
>>>> not consider 1 rxq mapping to multiple txqs here.
>>>
>>> I am talking about situation when one thread controls 2 tx queues.
>>>
>>>> + *
>>>> + * @param rx_port_id
>>>> + * Port identifying the receive side.
>>>> + * @param rx_queue_id
>>>> + * The index of the receive queue identifying the receive side.
>>>> + * The value must be in the range [0, nb_rx_queue - 1] previously
>>>> + * supplied to rte_eth_dev_configure().
>>>> + * @param tx_port_id
>>>> + * Port identifying the transmit side.
>>>> + * @param tx_queue_id
>>>> + * The index of the transmit queue identifying the transmit side.
>>>> + * The value must be in the range [0, nb_tx_queue - 1] previously
>>>> + * supplied to rte_eth_dev_configure().
>>>> + * @param recycle_rxq_info
>>>> + * A pointer to a structure of type *rte_eth_recycle_rxq_info* which
>>>> + * contains the information of the Rx queue mbuf ring.
>>>> + * @return
>>>> + * The number of recycling mbufs.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline uint16_t
>>>> +rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
>>>> +		uint16_t tx_port_id, uint16_t tx_queue_id,
>>>> +		struct rte_eth_recycle_rxq_info *recycle_rxq_info)
>>>> +{
>>>> +	struct rte_eth_fp_ops *p;
>>>> +	void *qd;
>>>> +	uint16_t nb_mbufs;
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_TX
>>>> +	if (tx_port_id >= RTE_MAX_ETHPORTS ||
>>>> +			tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>>> +		RTE_ETHDEV_LOG(ERR,
>>>> +				"Invalid tx_port_id=%u or tx_queue_id=%u\n",
>>>> +				tx_port_id, tx_queue_id);
>>>> +		return 0;
>>>> +	}
>>>> +#endif
>>>> +
>>>> +	/* fetch pointer to queue data */
>>>> +	p = &rte_eth_fp_ops[tx_port_id];
>>>> +	qd = p->txq.data[tx_queue_id];
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_TX
>>>> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
>>>> +
>>>> +	if (qd == NULL) {
>>>> +		RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
>>>> +				tx_queue_id, tx_port_id);
>>>> +		return 0;
>>>> +	}
>>>> +#endif
>>>> +
>>>> +	if (p->recycle_tx_mbufs_reuse == NULL)
>>>> +		return 0;
>>>> +
>>>> +	/* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
>>>> +	 * into Rx mbuf ring.
>>>> +	 */
>>>> +	nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
>>>> +
>>>> +	/* If no recycling mbufs, return 0. */
>>>> +	if (nb_mbufs == 0)
>>>> +		return 0;
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_RX
>>>> +	if (rx_port_id >= RTE_MAX_ETHPORTS ||
>>>> +			rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>>> +		RTE_ETHDEV_LOG(ERR,
>>>> +				"Invalid rx_port_id=%u or rx_queue_id=%u\n",
>>>> +				rx_port_id, rx_queue_id);
>>>> +		return 0;
>>>> +	}
>>>> +#endif
>>>> +
>>>> +	/* fetch pointer to queue data */
>>>> +	p = &rte_eth_fp_ops[rx_port_id];
>>>> +	qd = p->rxq.data[rx_queue_id];
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_RX
>>>> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
>>>> +
>>>> +	if (qd == NULL) {
>>>> +		RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
>>>> +				rx_queue_id, rx_port_id);
>>>> +		return 0;
>>>> +	}
>>>> +#endif
>>>> +
>>>> +	if (p->recycle_rx_descriptors_refill == NULL)
>>>> +		return 0;
>>>> +
>>>> +	/* Replenish the Rx descriptors with the recycling mbufs
>>>> +	 * in the Rx mbuf ring.
>>>> +	 */
>>>> +	p->recycle_rx_descriptors_refill(qd, nb_mbufs);
>>>> +
>>>> +	return nb_mbufs;
>>>> +}
>>>> +
>>>>  /**
>>>>   * @warning
>>>>   * @b EXPERIMENTAL: this API may change without prior notice
>>>>
>>>> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
>>>> index dcf8adab92..a2e6ea6b6c 100644
>>>> --- a/lib/ethdev/rte_ethdev_core.h
>>>> +++ b/lib/ethdev/rte_ethdev_core.h
>>>> @@ -56,6 +56,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>>>>
>>>>  /** @internal Check the status of a Tx descriptor */
>>>>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>>>>
>>>> +/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
>>>> +typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
>>>> +		struct rte_eth_recycle_rxq_info *recycle_rxq_info);
>>>> +
>>>> +/** @internal Refill Rx descriptors with the recycling mbufs */
>>>> +typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq,
>>>> +		uint16_t nb);
>>>> +
>>>>  /**
>>>>   * @internal
>>>>   * Structure used to hold opaque pointers to internal ethdev Rx/Tx
>>>> @@ -90,9 +97,11 @@ struct rte_eth_fp_ops {
>>>>          eth_rx_queue_count_t rx_queue_count;
>>>>          /** Check the status of a Rx descriptor. */
>>>>          eth_rx_descriptor_status_t rx_descriptor_status;
>>>> + /** Refill Rx descriptors with the recycling mbufs. */
>>>> + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
>>>>
>>>> I am afraid we can't put new fields here without ABI breakage.
>>>>
>>>> Agree
>>>>
>>>> It has to be below rxq.
>>>> Now thinking about current layout probably not the best one, and when
>>>> introducing this struct, I should probably put rxq either on the top
>>>> of the struct, or on the next cache line.
>>>> But such change is not possible right now anyway.
>>>> Same story for txq.
>>>>
>>>> Thus we should rearrange the structure like below:
>>>> struct rte_eth_fp_ops {
>>>>         struct rte_ethdev_qdata rxq;
>>>>         eth_rx_burst_t rx_pkt_burst;
>>>>         eth_rx_queue_count_t rx_queue_count;
>>>>         eth_rx_descriptor_status_t rx_descriptor_status;
>>>>         eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
>>>>         uintptr_t reserved1[2];
>>>> }
>>>
>>> Yes, I think such layout will be better.
>>> The only problem here - we have to wait for 23.11 for that.
>>>
>> Ok, even without this change we may still need to wait, because mbufs_recycle
>> has other ABI breakage, such as the change to 'struct rte_eth_dev'.
> 
> Ok by me.
> 
>>>>
>>>>
>>>>          /** Rx queues data. */
>>>>          struct rte_ethdev_qdata rxq;
>>>> - uintptr_t reserved1[3];
>>>> + uintptr_t reserved1[2];
>>>>          /**@}*/
>>>>
>>>>          /**@{*/
>>>> @@ -106,9 +115,11 @@ struct rte_eth_fp_ops {
>>>>          eth_tx_prep_t tx_pkt_prepare;
>>>>          /** Check the status of a Tx descriptor. */
>>>>          eth_tx_descriptor_status_t tx_descriptor_status;
>>>> + /** Copy used mbufs from Tx mbuf ring into Rx. */
>>>> + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
>>>>          /** Tx queues data. */
>>>>          struct rte_ethdev_qdata txq;
>>>> - uintptr_t reserved2[3];
>>>> + uintptr_t reserved2[2];
>>>>          /**@}*/
>>>>
>>>>  } __rte_cache_aligned;
>>>> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
>>>> index 357d1a88c0..45c417f6bd 100644
>>>> --- a/lib/ethdev/version.map
>>>> +++ b/lib/ethdev/version.map
>>>> @@ -299,6 +299,10 @@ EXPERIMENTAL {
>>>>          rte_flow_action_handle_query_update;
>>>>          rte_flow_async_action_handle_query_update;
>>>>          rte_flow_async_create_by_index;
>>>> +
>>>> + # added in 23.07
>>>> + rte_eth_recycle_mbufs;
>>>> + rte_eth_recycle_rx_queue_info_get;
>>>>  };
>>>>
>>>>  INTERNAL {
>>>> --
>>>> 2.25.1
>>>>


