linux-kernel.vger.kernel.org archive mirror
From: Vincent Guittot <vincent.guittot@linaro.org>
To: Tariq Toukan <tariqt@nvidia.com>
Cc: David Chen <david.chen@nutanix.com>,
	Zhang Qiao <zhangqiao22@huawei.com>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Willem de Bruijn <willemdebruijn.kernel@gmail.com>,
	Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Valentin Schneider <vschneid@redhat.com>,
	linux-kernel@vger.kernel.org,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Network Development <netdev@vger.kernel.org>,
	Gal Pressman <gal@nvidia.com>, Malek Imam <mimam@nvidia.com>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	David Ahern <dsahern@kernel.org>,
	Tariq Toukan <ttoukan.linux@gmail.com>
Subject: Re: Bug report: UDP ~20% degradation
Date: Wed, 8 Feb 2023 15:12:55 +0100	[thread overview]
Message-ID: <CAKfTPtCO=GFm6nKU0DVa-aa3f1pTQ5vBEF+9hJeTR9C_RRRZ9A@mail.gmail.com> (raw)
In-Reply-To: <113e81f6-b349-97c0-4cec-d90087e7e13b@nvidia.com>

Hi Tariq,

On Wed, 8 Feb 2023 at 12:09, Tariq Toukan <tariqt@nvidia.com> wrote:
>
> Hi all,
>
> Our performance verification team spotted a degradation of up to ~20% in
> UDP performance, for a specific combination of parameters.
>
> Our matrix covers several parameter values, such as:
> IP version: 4/6
> MTU: 1500/9000
> Msg size: 64/1452/8952 (only when applicable while avoiding ip
> fragmentation).
> Num of streams: 1/8/16/24.
> Num of directions: unidir/bidir.
>
> Surprisingly, the issue exists only with this specific combination:
> 8 streams,
> MTU 9000,
> Msg size 8952,
> both ipv4/6,
> bidir.
> (in unidir it reproduces only with ipv4)
>
> The reproduction is consistent on all the different setups we tested with.
>
> Bisect [2] was done between these two points: v5.19 (Good) and v6.0-rc1
> (Bad), with a ConnectX-6 Dx NIC.
>
> c82a69629c53eda5233f13fc11c3c01585ef48a2 is the first bad commit [1].
>
> We couldn't come up with a good explanation of how this patch causes
> this issue. We also looked for related changes in the networking/UDP
> stack, but nothing looked suspicious.
>
> Maybe someone here can help with this.
> We can provide more details or do further tests/experiments to progress
> with the debug.

Could you share more details about your system and the CPU topology?

Commit c82a69629c53 migrates a task to an idle CPU when the task is
the only one running on the local CPU but the time spent by this local
CPU in interrupt or RT context becomes significant (10%-17%).
I can imagine that 16/24 streams overload your system, so load_balance
doesn't end up in this case and the CPUs are busy with several
threads. On the other hand, 1 stream is small enough to keep your
system lightly loaded, but 8 streams load it enough to trigger the
reduced-capacity case while still not overloading it.

Vincent

>
> Thanks,
> Tariq
>
> [1]
> commit c82a69629c53eda5233f13fc11c3c01585ef48a2
> Author: Vincent Guittot <vincent.guittot@linaro.org>
> Date:   Fri Jul 8 17:44:01 2022 +0200
>
>      sched/fair: fix case with reduced capacity CPU
>
>      The capacity of the CPU available for CFS tasks can be reduced
>      because of other activities running on the latter. In such case,
>      it's worth trying to move CFS tasks on a CPU with more available
>      capacity.
>
>      The rework of the load balance has filtered the case when the CPU
>      is classified to be fully busy but its capacity is reduced.
>
>      Check if CPU's capacity is reduced while gathering load balance
>      statistic and classify it group_misfit_task instead of
>      group_fully_busy so we can try to move the load on another CPU.
>
>      Reported-by: David Chen <david.chen@nutanix.com>
>      Reported-by: Zhang Qiao <zhangqiao22@huawei.com>
>      Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
>      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>      Tested-by: David Chen <david.chen@nutanix.com>
>      Tested-by: Zhang Qiao <zhangqiao22@huawei.com>
>      Link: https://lkml.kernel.org/r/20220708154401.21411-1-vincent.guittot@linaro.org
>
> [2]
>
> Detailed bisect steps:
>
> +--------------+--------+-----------+-----------+
> | Commit       | Status | BW (Gbps) | BW (Gbps) |
> |              |        | run1      | run2      |
> +--------------+--------+-----------+-----------+
> | 526942b8134c | Bad    | ---       | ---       |
> +--------------+--------+-----------+-----------+
> | 2e7a95156d64 | Bad    | ---       | ---       |
> +--------------+--------+-----------+-----------+
> | 26c350fe7ae0 | Good   | 279.8     | 281.9     |
> +--------------+--------+-----------+-----------+
> | 9de1f9c8ca51 | Bad    | 257.243   | ---       |
> +--------------+--------+-----------+-----------+
> | 892f7237b3ff | Good   | 285       | 300.7     |
> +--------------+--------+-----------+-----------+
> | 0dd1cabe8a4a | Good   | 305.599   | 290.3     |
> +--------------+--------+-----------+-----------+
> | dfea84827f7e | Bad    | 250.2     | 258.899   |
> +--------------+--------+-----------+-----------+
> | 22a39c3d8693 | Bad    | 236.8     | 245.399   |
> +--------------+--------+-----------+-----------+
> | e2f3e35f1f5a | Good   | 277.599   | 287       |
> +--------------+--------+-----------+-----------+
> | 401e4963bf45 | Bad    | 250.149   | 248.899   |
> +--------------+--------+-----------+-----------+
> | 3e8c6c9aac42 | Good   | 299.09    | 294.9     |
> +--------------+--------+-----------+-----------+
> | 1fcf54deb767 | Good   | 292.719   | 301.299   |
> +--------------+--------+-----------+-----------+
> | c82a69629c53 | Bad    | 254.7     | 246.1     |
> +--------------+--------+-----------+-----------+
> | c02d5546ea34 | Good   | 276.4     | 294       |
> +--------------+--------+-----------+-----------+


Thread overview: 7+ messages
2023-02-08 11:08 Bug report: UDP ~20% degradation Tariq Toukan
2023-02-08 14:12 ` Vincent Guittot [this message]
2023-02-12 11:50   ` Tariq Toukan
2023-02-22  8:49     ` Tariq Toukan
2023-02-22 16:51       ` Vincent Guittot
2023-04-05 13:19         ` Linux regression tracking (Thorsten Leemhuis)
2023-02-10 18:37 ` Linux regression tracking #adding (Thorsten Leemhuis)
