From: Frederic Weisbecker <frederic@kernel.org>
To: "Leonardo Brás" <leobras@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Shakeel Butt <shakeelb@google.com>,
	Muchun Song <muchun.song@linux.dev>,
	Andrew Morton <akpm@linux-foundation.org>,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Frederic Weisbecker <fweisbecker@suse.de>
Subject: Re: [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining
Date: Fri, 27 Jan 2023 14:03:59 +0100
Message-ID: <Y9PLvzI8WU0vYWUt@lothringen>
In-Reply-To: <601fc35a8cc2167e53e45c636fccb2d899fd7c50.camel@redhat.com>

On Fri, Jan 27, 2023 at 05:12:13AM -0300, Leonardo Brás wrote:
> On Fri, 2023-01-27 at 04:22 -0300, Leonardo Brás wrote:
> > > Hmm, OK, I had misunderstood your proposal. Yes, the overall pcp charges
> > > potentially left behind should be small and that shouldn't really be a
> > > concern for memcg oom situations (unless the limit is very small, and
> > > workloads on isolated cpus using small hard limits are way beyond my
> > > imagination).
> > > 
> > > My first thought was that those charges could be left behind without any
> > > upper bound, but in reality sooner or later something should be running
> > > on those cpus, and once the memcg is gone the pcp cache would get
> > > refilled and the old charges released.
> > > 
> > > So yes, this is actually a better and even simpler solution. All we need
> > > is something like this:
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index ab457f0394ab..13b84bbd70ba 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -2344,6 +2344,9 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> > >  		struct mem_cgroup *memcg;
> > >  		bool flush = false;
> > >  
> > > +		if (cpu_is_isolated(cpu))
> > > +			continue;
> > > +
> > >  		rcu_read_lock();
> > >  		memcg = stock->cached;
> > >  		if (memcg && stock->nr_pages &&
> > > 
> > > There is no such cpu_is_isolated() AFAICS, so we would need help from
> > > the NOHZ and cpuisol people to create one for us. Frederic, would such an
> > > abstraction make any sense from your POV?
> > 
> > 
> > IIUC, 'if (cpu_is_isolated(cpu))' would instead be:
> > 
> > if (!housekeeping_cpu(smp_processor_id(), HK_TYPE_DOMAIN) ||
> >     !housekeeping_cpu(smp_processor_id(), HK_TYPE_WQ))
> 
> oh, sorry 's/smp_processor_id()/cpu/' here:
> 
> if (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) || !housekeeping_cpu(cpu, HK_TYPE_WQ))
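
For reference, folding those checks into the cpu_is_isolated() helper Michal
named above might look something like this (a sketch only; neither the helper
nor its placement in include/linux/sched/isolation.h exists yet):

/* include/linux/sched/isolation.h -- hypothetical helper, sketch only */
static inline bool cpu_is_isolated(int cpu)
{
	/*
	 * Treat a CPU as isolated when it is excluded from the scheduler
	 * domains (e.g. isolcpus=) or from unbound workqueues
	 * (e.g. nohz_full=), per the housekeeping cpumasks.
	 */
	return !housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
	       !housekeeping_cpu(cpu, HK_TYPE_WQ);
}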

Do you also need to handle cpuset.sched_load_balance=0 (aka the cpuset v2
"isolated" partition type)? It has the same effect as isolcpus=, but it can be
changed at runtime.

And then on_null_domain() looks like what you need. You'd have to make that API
more generally available though, and rename it to something like
"bool cpu_has_null_domain(int cpu)".

But then you also need to handle concurrent cpuset changes. If you can tolerate
it being racy, then RCU alone is fine.
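
Roughly, such a helper might look like this (a sketch only, not existing code:
cpu_rq() is internal to kernel/sched/, so the helper would have to live there,
and it assumes rq->sd keeps its current RCU protection):

/* kernel/sched/topology.c -- hypothetical export of on_null_domain()'s logic */
bool cpu_has_null_domain(int cpu)
{
	bool ret;

	/*
	 * rq->sd is RCU protected. A plain read-side critical section is
	 * enough here if callers can tolerate racing with concurrent
	 * cpuset-driven rebuilds of the sched domains.
	 */
	rcu_read_lock();
	ret = !rcu_dereference(cpu_rq(cpu)->sd);
	rcu_read_unlock();

	return ret;
}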

Thanks.
