From: Eric Dumazet <edumazet@google.com>
To: Shakeel Butt <shakeelb@google.com>
Cc: Feng Tang <feng.tang@intel.com>, Linux MM <linux-mm@kvack.org>,
	 Andrew Morton <akpm@linux-foundation.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	 Michal Hocko <mhocko@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	 Muchun Song <songmuchun@bytedance.com>,
	Jakub Kicinski <kuba@kernel.org>,
	 Xin Long <lucien.xin@gmail.com>,
	Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>,
	 kernel test robot <oliver.sang@intel.com>,
	Soheil Hassas Yeganeh <soheil@google.com>,
	 LKML <linux-kernel@vger.kernel.org>,
	network dev <netdev@vger.kernel.org>,
	 linux-s390@vger.kernel.org,
	MPTCP Upstream <mptcp@lists.linux.dev>,
	 "linux-sctp @ vger . kernel . org" <linux-sctp@vger.kernel.org>,
	lkp@lists.01.org,  kbuild test robot <lkp@intel.com>,
	Huang Ying <ying.huang@intel.com>,
	 Xing Zhengjun <zhengjun.xing@linux.intel.com>,
	Yin Fengwei <fengwei.yin@intel.com>,  Ying Xu <yinxu@redhat.com>
Subject: Re: [net] 4890b686f4: netperf.Throughput_Mbps -69.4% regression
Date: Mon, 27 Jun 2022 19:05:17 +0200	[thread overview]
Message-ID: <CANn89iLnOBGk+P33MRAkNwVLQC+s1M36m+cg1d4pJ970ecdxcg@mail.gmail.com> (raw)
In-Reply-To: <CALvZod60OHC4iQnyBd16evCHXa_8ucpHiRnm9iNErQeUOycGZw@mail.gmail.com>

On Mon, Jun 27, 2022 at 6:48 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Mon, Jun 27, 2022 at 9:26 AM Eric Dumazet <edumazet@google.com> wrote:
> >
> [...]
> > >
> >
> > I simply did the following and got much better results.
> >
> > But I am not sure if updates to ->usage are really needed that often...
>
> I suspect we need to improve the per-cpu memcg stock usage here. Were
> the updates mostly from the uncharge path or the charge path, or is
> that irrelevant?

I wonder if the cache is always used...

From consume_stock() in mm/memcontrol.c:

stock = this_cpu_ptr(&memcg_stock);
if (memcg == stock->cached && stock->nr_pages >= nr_pages) {

Apparently the per-cpu cache is only used for one memcg at a time?

I am not sure how this would scale to hosts with dozens of memcgs.

Maybe we could add some metrics to get an idea of the cache hit/miss ratio :/
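
For illustration, a rough sketch of what such instrumentation could look
like, bolted onto a simplified consume_stock(); the two per-cpu hit/miss
counters are hypothetical and do not exist in the kernel:

/*
 * Sketch only: hypothetical hit/miss accounting in a simplified
 * consume_stock() from mm/memcontrol.c.
 */
static DEFINE_PER_CPU(unsigned long, memcg_stock_hit);
static DEFINE_PER_CPU(unsigned long, memcg_stock_miss);

static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	struct memcg_stock_pcp *stock;
	unsigned long flags;
	bool ret = false;

	if (nr_pages > MEMCG_CHARGE_BATCH)
		return ret;

	local_irq_save(flags);

	stock = this_cpu_ptr(&memcg_stock);
	if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
		stock->nr_pages -= nr_pages;
		ret = true;
		this_cpu_inc(memcg_stock_hit);	/* served from the per-cpu cache */
	} else {
		this_cpu_inc(memcg_stock_miss);	/* falls through to page_counter */
	}

	local_irq_restore(flags);

	return ret;
}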


>
> I think doing a full drain (i.e. drain_stock()) within __refill_stock()
> when the local cache grows beyond MEMCG_CHARGE_BATCH is not the best
> approach. Rather, we should always keep at least MEMCG_CHARGE_BATCH
> cached for such scenarios.
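
An untested illustration of that "keep one batch cached" idea, written
against the __refill_stock() of that time; this is a sketch, not a
submitted patch:

static void __refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	struct memcg_stock_pcp *stock = this_cpu_ptr(&memcg_stock);

	if (stock->cached != memcg) {	/* reset if necessary */
		drain_stock(stock);
		css_get(&memcg->css);
		stock->cached = memcg;
	}
	stock->nr_pages += nr_pages;

	if (stock->nr_pages > MEMCG_CHARGE_BATCH) {
		unsigned int excess = stock->nr_pages - MEMCG_CHARGE_BATCH;

		/* Return only the excess; keep MEMCG_CHARGE_BATCH cached. */
		page_counter_uncharge(&memcg->memory, excess);
		if (do_memsw_account())
			page_counter_uncharge(&memcg->memsw, excess);
		stock->nr_pages = MEMCG_CHARGE_BATCH;
	}
}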
>
> >
> >
> > diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> > index 679591301994d316062f92b275efa2459a8349c9..e267be4ba849760117d9fd041e22c2a44658ab36
> > 100644
> > --- a/include/linux/page_counter.h
> > +++ b/include/linux/page_counter.h
> > @@ -3,12 +3,15 @@
> >  #define _LINUX_PAGE_COUNTER_H
> >
> >  #include <linux/atomic.h>
> > +#include <linux/cache.h>
> >  #include <linux/kernel.h>
> >  #include <asm/page.h>
> >
> >  struct page_counter {
> > -       atomic_long_t usage;
> > -       unsigned long min;
> > +       /* contended cache line. */
> > +       atomic_long_t usage ____cacheline_aligned_in_smp;
> > +
> > +       unsigned long min ____cacheline_aligned_in_smp;
>
> Do we need to align 'min' too?

Probably, if there is a hierarchy...

propagate_protected_usage() seems to have a potentially high cost.
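
For context, simplified from mm/page_counter.c: every charge walks the
entire ancestry, writing each level's 'usage' and reading its protection
fields via propagate_protected_usage(), which is why 'min' can contend
with the hot 'usage' word in a deep hierarchy:

void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
{
	struct page_counter *c;

	for (c = counter; c; c = c->parent) {
		long new = atomic_long_add_return(nr_pages, &c->usage);

		propagate_protected_usage(c, new);
		/* Racy watermark update; some inaccuracy is tolerated. */
		if (new > READ_ONCE(c->watermark))
			WRITE_ONCE(c->watermark, new);
	}
}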


>
> >         unsigned long low;
> >         unsigned long high;
> >         unsigned long max;
> > @@ -27,12 +30,6 @@ struct page_counter {
> >         unsigned long watermark;
> >         unsigned long failcnt;
> >
> > -       /*
> > -        * 'parent' is placed here to be far from 'usage' to reduce
> > -        * cache false sharing, as 'usage' is written mostly while
> > -        * parent is frequently read for cgroup's hierarchical
> > -        * counting nature.
> > -        */
> >         struct page_counter *parent;
> >  };
> >
> >
> >
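
As a quick sanity check of the new layout (illustration only, not part
of the patch above), one could assert at compile time that the two
annotated fields no longer share a cache line:

/* Illustration only: fails to build if 'usage' and 'min' end up on
 * the same cache line after ____cacheline_aligned_in_smp is applied. */
#include <linux/build_bug.h>
#include <linux/cache.h>
#include <linux/page_counter.h>
#include <linux/stddef.h>

static_assert(offsetof(struct page_counter, min) -
	      offsetof(struct page_counter, usage) >= SMP_CACHE_BYTES,
	      "usage and min still share a cache line");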

Thread overview: 70+ messages

2022-06-19 15:04 [net] 4890b686f4: netperf.Throughput_Mbps -69.4% regression kernel test robot
2022-06-23  0:28 ` Jakub Kicinski
2022-06-23  3:08   ` Xin Long
2022-06-23 22:50     ` Xin Long
2022-06-24  1:57       ` Jakub Kicinski
2022-06-24  4:13         ` Eric Dumazet
2022-06-24  4:22           ` Eric Dumazet
2022-06-24  5:13           ` Feng Tang
2022-06-24  5:45             ` Eric Dumazet
2022-06-24  6:00               ` Feng Tang
2022-06-24  6:07                 ` Eric Dumazet
2022-06-24  6:34           ` Shakeel Butt
2022-06-24  7:06             ` Feng Tang
2022-06-24 14:43               ` Shakeel Butt
2022-06-25  2:36                 ` Feng Tang
2022-06-27  2:38                   ` Feng Tang
2022-06-27  8:46                     ` Eric Dumazet
2022-06-27 12:34                       ` Feng Tang
2022-06-27 14:07                         ` Eric Dumazet
2022-06-27 14:48                           ` Feng Tang
2022-06-27 16:25                             ` Eric Dumazet
2022-06-27 16:48                               ` Shakeel Butt
2022-06-27 17:05                                 ` Eric Dumazet [this message]
2022-06-28  1:46                                 ` Roman Gushchin
2022-06-28  3:49                               ` Feng Tang
2022-07-01 15:47                                 ` Shakeel Butt
2022-07-03 10:43                                   ` Feng Tang
2022-07-03 22:55                                     ` Roman Gushchin
2022-07-05  5:03                                       ` Feng Tang
2022-08-16  5:52                                         ` Oliver Sang
2022-08-16 15:55                                           ` Shakeel Butt
2022-06-27 14:52                         ` Shakeel Butt
2022-06-27 14:56                           ` Eric Dumazet
2022-06-27 15:12                           ` Feng Tang
2022-06-27 16:25                             ` Shakeel Butt
