From: Michal Kubecek <mkubecek@suse.cz>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: davem@davemloft.net, kuba@kernel.org, olteanv@gmail.com,
	ast@kernel.org, daniel@iogearbox.net, andriin@fb.com,
	edumazet@google.com, weiwan@google.com, cong.wang@bytedance.com,
	ap420073@gmail.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linuxarm@openeuler.org,
	mkl@pengutronix.de, linux-can@vger.kernel.org, jhs@mojatatu.com,
	xiyou.wangcong@gmail.com, jiri@resnulli.us, andrii@kernel.org,
	kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
	john.fastabend@gmail.com, kpsingh@kernel.org,
	bpf@vger.kernel.org, pabeni@redhat.com, mzhivich@akamai.com,
	johunt@akamai.com, albcamus@gmail.com, kehuan.feng@gmail.com,
	a.fatoum@pengutronix.de, atenart@kernel.org,
	alexander.duyck@gmail.com, hdanton@sina.com, jgross@suse.com,
	JKosina@suse.com
Subject: Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
Date: Tue, 20 Apr 2021 01:55:03 +0200
Message-ID: <20210419235503.eo77f6s73a4d25oh@lion.mk-sys.cz>
In-Reply-To: <20210419152946.3n7adsd355rfeoda@lion.mk-sys.cz>

On Mon, Apr 19, 2021 at 05:29:46PM +0200, Michal Kubecek wrote:
> 
> As pointed out in the discussion on v3, this patch may result in
> significantly higher CPU consumption with multiple threads competing
> on a saturated outgoing device. I missed this submission, so I haven't
> checked it yet, but given the description of the v3->v4 changes above,
> it quite likely suffers from the same problem.

And indeed it does. However, with the additional patch from the v3
discussion, the numbers are approximately the same as with an unpatched
mainline kernel.
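
For readers without the v3 thread at hand: the idea of that additional
patch is to not treat a stopped driver TX queue as a missed run. A
minimal sketch of that idea, assuming my reading of the discussion is
right (the helper name is made up; __QDISC_STATE_MISSED,
netif_xmit_frozen_or_stopped() and clear_bit() are the real kernel
symbols):

	#include <net/sch_generic.h>	/* struct Qdisc, struct netdev_queue */

	/* Sketch only: drop the MISSED bit while the driver's TX queue
	 * is stopped, so qdisc_run_end() does not keep re-firing
	 * net_tx_action() in a loop that cannot make progress anyway.
	 */
	static void sketch_clear_missed_if_stopped(struct Qdisc *q,
						   const struct netdev_queue *txq)
	{
		if (!netif_xmit_frozen_or_stopped(txq))
			return;	/* queue can still transmit, keep MISSED */

		clear_bit(__QDISC_STATE_MISSED, &q->state);
		/* The real fix also needs a memory barrier and a re-check
		 * here so that a concurrent netif_tx_wake_queue() cannot
		 * lose its wakeup.
		 */
	}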

As with v4, I tried this patch on top of 5.12-rc7 with real devices.
I used two machines with 10Gb/s Intel ixgbe NICs; the sender has 16 CPUs
(two 8-core CPUs with HT disabled) and 16 Rx/Tx queues, the receiver has
48 CPUs (two 12-core CPUs with HT enabled) and 48 Rx/Tx queues.

  threads    5.12-rc7    5.12-rc7 + v4    5.12-rc7 + v4 + stop
     1        25.1%          38.1%            22.9%
     8        66.2%         277.0%            74.1%
    16        90.1%         150.7%            91.0%
    32       107.2%         272.6%           108.3%
    64       116.3%         487.5%           118.1%
   128       126.1%         946.7%           126.9%

(The values are CPU consumption normalized to one core, i.e. 100%
corresponds to one fully used logical CPU; 946.7%, for example, means
roughly 9.5 logical CPUs kept busy.)

So it seems that repeated scheduling while the queue was stopped is
indeed the main performance issue and that the other cases where the
logic is too pessimistic do not play a significant role. There is an
exception at 8 connections/threads, where the result with just this
series also looks abnormally high (e.g. much higher than with
16 threads). It might be worth investigating what happens there and
what the results with other thread counts around 8 look like.
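
To spell out the mechanism: with this series alone, qdisc_run_end() on
a lockless qdisc reschedules whenever the MISSED bit is set, roughly
like this (a simplified sketch of my reading of the patch, not a
verbatim quote of it):

	static inline void sketch_qdisc_run_end(struct Qdisc *qdisc)
	{
		if (qdisc->flags & TCQ_F_NOLOCK) {
			spin_unlock(&qdisc->seqlock);

			/* While the driver's TX queue is stopped, the
			 * dequeue cannot make progress, so this keeps
			 * re-firing net_tx_action() -- hence the
			 * repeated scheduling measured above.
			 */
			if (unlikely(test_bit(__QDISC_STATE_MISSED,
					      &qdisc->state)))
				__netif_schedule(qdisc);
		} else {
			write_seqcount_end(&qdisc->running);
		}
	}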

I'll run some more tests with other traffic patterns tomorrow and
I'm also going to take a closer look at the additional patch.

Michal
