From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: Gregory Etelson <getelson@nvidia.com>, <dev@dpdk.org>
Cc: <matan@nvidia.com>, <rasland@nvidia.com>,
	Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com>
Subject: Re: [dpdk-dev] [PATCH] examples/multi_process: fix RX packets distribution
Date: Thu, 28 Oct 2021 15:29:57 +0100
Message-ID: <4fd478e1-a65d-405c-a51f-4b4569908357@intel.com>
In-Reply-To: <20211026095037.17557-1-getelson@nvidia.com>

On 26-Oct-21 10:50 AM, Gregory Etelson wrote:
> The MP server distributes RX packets between clients according to
> a round-robin scheme.
> 
> The current implementation always started packet distribution from
> the first client. That produced a uniform distribution only when
> the number of RX packets was a multiple of the number of clients.
> However, if an RX burst repeatedly returned a single packet, the
> round-robin scheme did not work, because every packet was assigned
> to the first client.
> 
> With this patch, packet distribution is no longer restarted from
> the first client; it always continues with the next client.
> 
> Fixes: af75078fece3 ("first public release")
> 
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Reviewed-by: Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com>
> ---
>   examples/multi_process/client_server_mp/mp_server/main.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
> index b4761ebc7b..fb441cbbf0 100644
> --- a/examples/multi_process/client_server_mp/mp_server/main.c
> +++ b/examples/multi_process/client_server_mp/mp_server/main.c
> @@ -234,7 +234,7 @@ process_packets(uint32_t port_num __rte_unused,
>   		struct rte_mbuf *pkts[], uint16_t rx_count)
>   {
>   	uint16_t i;
> -	uint8_t client = 0;
> +	static uint8_t client = 0;
>   
>   	for (i = 0; i < rx_count; i++) {
>   		enqueue_rx_packet(client, pkts[i]);
> 
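For anyone skimming the thread, a standalone illustration of the problem
(this is not the example app's code; the client and burst counts are
made up): with a counter that resets on every call, a stream of
single-packet bursts all lands on client 0, while a counter that
persists across calls keeps rotating.

#include <stdio.h>
#include <stdint.h>

#define NUM_CLIENTS 2
#define NUM_BURSTS  8

int
main(void)
{
	uint8_t persistent = 0;
	unsigned int reset_hits[NUM_CLIENTS] = {0};
	unsigned int persist_hits[NUM_CLIENTS] = {0};
	int burst;

	for (burst = 0; burst < NUM_BURSTS; burst++) {
		/* Old behaviour: the counter starts at 0 on every call,
		 * so a one-packet burst always hits client 0. */
		uint8_t per_call = 0;
		reset_hits[per_call]++;

		/* Patched behaviour: the counter survives across calls,
		 * so successive one-packet bursts rotate over clients. */
		persist_hits[persistent]++;
		if (++persistent == NUM_CLIENTS)
			persistent = 0;
	}

	printf("per-call reset: client0=%u client1=%u\n",
	       reset_hits[0], reset_hits[1]);     /* 8 and 0 */
	printf("persistent:     client0=%u client1=%u\n",
	       persist_hits[0], persist_hits[1]); /* 4 and 4 */
	return 0;
}

The `static` keyword in the hunk above is what makes the counter
persistent, which brings me to my question: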

Wouldn't that make it global? I don't recall off the top of my head if 
the multiprocess app is intended to have multiple Rx threads, but if you 
did have two forwarding threads, they would effectively both use the 
same `client` value, stepping on top of each other. This should probably 
be per-thread?
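
If multiple Rx threads are indeed possible, one way to keep the change
while avoiding the shared counter would be DPDK's per-lcore
(thread-local) storage. A rough sketch only: it assumes the app's
existing num_clients and enqueue_rx_packet(), and the wrap-around line
is reconstructed from memory rather than copied from the file.

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_per_lcore.h>

/* One counter per lcore, so concurrent Rx threads each keep their own
 * round-robin position instead of sharing a single static variable. */
static RTE_DEFINE_PER_LCORE(uint8_t, rr_client);

static void
process_packets(uint32_t port_num __rte_unused,
		struct rte_mbuf *pkts[], uint16_t rx_count)
{
	uint16_t i;

	for (i = 0; i < rx_count; i++) {
		enqueue_rx_packet(RTE_PER_LCORE(rr_client), pkts[i]);
		if (++RTE_PER_LCORE(rr_client) == num_clients)
			RTE_PER_LCORE(rr_client) = 0;
	}
}

A plain __thread qualifier on the static variable would achieve the
same effect with a smaller diff; the per-lcore macros just make the
intent explicit.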

-- 
Thanks,
Anatoly

Thread overview: 15+ messages
2021-10-26  9:50 [dpdk-dev] [PATCH] examples/multi_process: fix RX packets distribution Gregory Etelson
2021-10-28 14:29 ` Burakov, Anatoly [this message]
2021-10-28 15:14   ` Gregory Etelson
2021-10-28 15:35     ` Burakov, Anatoly
2021-11-08 21:27       ` Thomas Monjalon
2021-11-09  6:42         ` Gregory Etelson
2021-11-09  7:30           ` Thomas Monjalon
2021-11-09  9:35             ` Gregory Etelson
2021-11-09  9:58 ` [dpdk-dev] [PATCH v2] examples/multi_proces: fix Rx " Gregory Etelson
2021-11-09 11:35   ` Thomas Monjalon
2021-11-09 11:49     ` Gregory Etelson
2021-11-09 14:17       ` Thomas Monjalon
2021-11-10 16:52 ` [PATCH v3] " Gregory Etelson
2021-11-10 16:57 ` [PATCH v4] examples/multi_process: " Gregory Etelson
2021-11-16 15:07   ` David Marchand
