linux-kernel.vger.kernel.org archive mirror
From: Tim Chen <tim.c.chen@linux.intel.com>
To: Shakeel Butt <shakeelb@google.com>, Jiebin Sun <jiebin.sun@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	vasily.averin@linux.dev, Dennis Zhou <dennis@kernel.org>,
	Tejun Heo <tj@kernel.org>, Christoph Lameter <cl@linux.com>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Alexey Gladkov <legion@kernel.org>,
	Manfred Spraul <manfred@colorfullife.com>,
	alexander.mikhalitsyn@virtuozzo.com,
	Linux MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	"Chen, Tim C" <tim.c.chen@intel.com>,
	Feng Tang <feng.tang@intel.com>,
	Huang Ying <ying.huang@intel.com>,
	tianyou.li@intel.com, wangyang.guo@intel.com
Subject: Re: [PATCH] ipc/msg.c: mitigate the lock contention with percpu counter
Date: Tue, 06 Sep 2022 11:44:46 -0700	[thread overview]
Message-ID: <048517e7f95aa8460cd47a169f3dfbd8e9b70d5c.camel@linux.intel.com> (raw)
In-Reply-To: <CALvZod44uUFnwfF4StC24t+d1s_XE10hkmSCgb04FjtTATo6xQ@mail.gmail.com>

On Fri, 2022-09-02 at 09:27 -0700, Shakeel Butt wrote:
> On Fri, Sep 2, 2022 at 12:04 AM Jiebin Sun <jiebin.sun@intel.com> wrote:
> > The msg_bytes and msg_hdrs atomic counters are frequently
> > updated when the IPC msg queue is in heavy use, causing
> > heavy cache bouncing and overhead. Changing them to
> > percpu_counters greatly improves the performance. Since
> > there is only one unique ipc namespace, the additional
> > memory cost is minimal. Reading of the count is only done
> > in the msgctl call, which is infrequent, so the need to
> > sum up the per-CPU counts is rare.
> > 
> > Applying the patch and testing with pts/stress-ng-1.4.0
> > -- system v message passing (160 threads):
> > 
> > Score gain: 3.38x
> > 
> > CPU: ICX 8380 x 2 sockets
> > Core number: 40 x 2 physical cores
> > Benchmark: pts/stress-ng-1.4.0
> > -- system v message passing (160 threads)
> > 
> > Signed-off-by: Jiebin Sun <jiebin.sun@intel.com>
> [...]
> > +void percpu_counter_add_local(struct percpu_counter *fbc, s64 amount)
> > +{
> > +       this_cpu_add(*fbc->counters, amount);
> > +}
> > +EXPORT_SYMBOL(percpu_counter_add_local);
> 
> Why not percpu_counter_add()? This may drift the fbc->count more than
> batch*nr_cpus. I am assuming that is not the issue for you as you
> always do an expensive sum in the slow path. As Andrew asked, this
> should be a separate patch.

In the IPC case, the read is always done as an accurate read
using percpu_counter_sum(), which gathers all the per-CPU
counts, and never with percpu_counter_read(), which only reads
the global count.  So Jiebin was not worried about accuracy.
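
Roughly, the two read paths compare like this (simplified from
include/linux/percpu_counter.h and lib/percpu_counter.c; exact
details vary by kernel version):

/* fast but approximate: only looks at the global part */
static inline s64 percpu_counter_read(struct percpu_counter *fbc)
{
        return fbc->count;
}

/* accurate but expensive: takes the lock and walks every CPU */
s64 __percpu_counter_sum(struct percpu_counter *fbc)
{
        s64 ret;
        int cpu;
        unsigned long flags;

        raw_spin_lock_irqsave(&fbc->lock, flags);
        ret = fbc->count;
        for_each_online_cpu(cpu)
                ret += *per_cpu_ptr(fbc->counters, cpu);
        raw_spin_unlock_irqrestore(&fbc->lock, flags);
        return ret;
}

Since msgctl() is the only reader here, paying for the full
walk on that path is fine.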

However, the counter is s64 while the local per-CPU counter is
only s32.  So the effective counter range shrinks if we only
ever keep the count in the local per-CPU counters, which can
overflow a lot sooner and is not okay.
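
To make the size point concrete, the counter state is split
roughly like this (paraphrasing struct percpu_counter, minus
config-dependent fields):

struct percpu_counter {
        raw_spinlock_t lock;
        s64 count;              /* global part, 64 bit */
        s32 __percpu *counters; /* per-CPU deltas, only 32 bit */
};

With percpu_counter_add_local() nothing ever folds the per-CPU
deltas back into the s64 count, and the add for a message and
the subtract when it is received can land on different CPUs, so
an individual s32 slot can drift out of range even when the
logical total stays small.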

Jiebin, can you try percpu_counter_add_batch() with a large
batch size?  That should achieve what you want without needing
to create a percpu_counter_add_local() function, and it also
avoids the overflow problem.
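
Roughly how that would look at the ipc/msg.c call sites (the
batch value and the ipc_namespace field names below are only
placeholders, not taken from the posted patch):
percpu_counter_add_batch() folds the per-CPU delta into the s64
fbc->count under the lock once it reaches the batch, so the s32
slots stay bounded and fbc->count drifts from the true value by
at most about batch * nr_cpus.

/* placeholder batch size, it just needs to be "large" */
#define MSG_PERCPU_BATCH        1024

        /* sender side, e.g. in do_msgsnd() */
        percpu_counter_add_batch(&ns->percpu_msg_bytes, msgsz,
                                 MSG_PERCPU_BATCH);
        percpu_counter_add_batch(&ns->percpu_msg_hdrs, 1,
                                 MSG_PERCPU_BATCH);

        /* receiver side, e.g. in do_msgrcv() */
        percpu_counter_add_batch(&ns->percpu_msg_bytes, -msgsz,
                                 MSG_PERCPU_BATCH);
        percpu_counter_add_batch(&ns->percpu_msg_hdrs, -1,
                                 MSG_PERCPU_BATCH);

Since msgctl() reads with percpu_counter_sum() anyway, the
drift never shows up in the reported values; the batch only
controls how often the update path has to take the lock, so it
can be generous.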

Tim




Thread overview: 43+ messages
2022-09-02 15:22 [PATCH] ipc/msg.c: mitigate the lock contention with percpu counter Jiebin Sun
2022-09-02 16:06 ` Andrew Morton
2022-09-05 11:54   ` Sun, Jiebin
2022-09-02 16:27 ` Shakeel Butt
2022-09-05 12:02   ` Sun, Jiebin
2022-09-06 18:44   ` Tim Chen [this message]
2022-09-07  9:39     ` Sun, Jiebin
2022-09-07 20:43       ` Andrew Morton
2022-09-07 17:25   ` [PATCH v4] ipc/msg: " Jiebin Sun
2022-09-07 16:01     ` Tim Chen
2022-09-07 21:34       ` Andrew Morton
2022-09-07 22:10         ` Tim Chen
2022-09-08  8:25         ` Sun, Jiebin
2022-09-08 15:38           ` Andrew Morton
2022-09-08 16:15             ` Dennis Zhou
2022-09-03 19:35 ` [PATCH] ipc/msg.c: " Manfred Spraul
2022-09-05 12:12   ` Sun, Jiebin
2022-09-05 19:35 ` [PATCH v2 0/2] ipc/msg: mitigate the lock contention in ipc/msg Jiebin Sun
2022-09-05 19:35   ` [PATCH v2 2/2] ipc/msg: mitigate the lock contention with percpu counter Jiebin Sun
2022-09-05 19:35   ` [PATCH v2 1/2] percpu: Add percpu_counter_add_local Jiebin Sun
2022-09-05 19:31     ` Shakeel Butt
2022-09-06  8:41       ` Sun, Jiebin
2022-09-06 16:54 ` [PATCH v3 0/2] ipc/msg: mitigate the lock contention in ipc/msg Jiebin Sun
2022-09-06 16:54   ` [PATCH v3 1/2] percpu: Add percpu_counter_add_local Jiebin Sun
2022-09-06 16:54   ` [PATCH v3 2/2] ipc/msg: mitigate the lock contention with percpu counter Jiebin Sun
2022-09-09 20:36 ` [PATCH v5 0/2] ipc/msg: mitigate the lock contention in ipc/msg Jiebin Sun
2022-09-09 20:36   ` [PATCH v5 1/2] percpu: Add percpu_counter_add_local and percpu_counter_sub_local Jiebin Sun
2022-09-09 16:37     ` Tim Chen
2022-09-10  1:37     ` kernel test robot
2022-09-10  8:15     ` kernel test robot
2022-09-10  8:26     ` kernel test robot
2022-09-09 20:36   ` [PATCH v5 2/2] ipc/msg: mitigate the lock contention with percpu counter Jiebin Sun
2022-09-09 16:11     ` Tim Chen
2022-09-13 19:25 ` [PATCH v6 0/2] ipc/msg: mitigate the lock contention in ipc/msg Jiebin Sun
2022-09-13 19:25   ` [PATCH v6 1/2] percpu: Add percpu_counter_add_local and percpu_counter_sub_local Jiebin Sun
2022-09-18 11:08     ` Manfred Spraul
2022-09-20  6:01       ` Sun, Jiebin
2022-09-13 19:25   ` [PATCH v6 2/2] ipc/msg: mitigate the lock contention with percpu counter Jiebin Sun
2022-09-18 12:53     ` Manfred Spraul
2022-09-20  2:36       ` Sun, Jiebin
2022-09-20  4:53         ` Manfred Spraul
2022-09-20  5:50           ` Sun, Jiebin
2022-09-20 15:08           ` [PATCH] ipc/msg: avoid negative value by overflow in msginfo Jiebin Sun
