From: Roman Gushchin <roman.gushchin@linux.dev>
To: Michal Hocko <mhocko@suse.com>
Cc: "Marcelo Tosatti" <mtosatti@redhat.com>,
	"Leonardo Brás" <leobras@redhat.com>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Shakeel Butt" <shakeelb@google.com>,
	"Muchun Song" <muchun.song@linux.dev>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	"Frederic Weisbecker" <fweisbecker@suse.de>
Subject: Re: [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining
Date: Fri, 27 Jan 2023 10:18:03 -0800	[thread overview]
Message-ID: <Y9QVWwAreTlDVdZ0@P9FQF9L96D.corp.robot.car> (raw)
In-Reply-To: <Y9PYe1X7dRQOcahg@dhcp22.suse.cz>

On Fri, Jan 27, 2023 at 02:58:19PM +0100, Michal Hocko wrote:
> On Fri 27-01-23 08:11:04, Michal Hocko wrote:
> > [Cc Frederic]
> > 
> > On Thu 26-01-23 15:12:35, Roman Gushchin wrote:
> > > On Thu, Jan 26, 2023 at 08:41:34AM +0100, Michal Hocko wrote:
> > [...]
> > > > > Essentially each cpu will try to grab the remains of the memory quota
> > > > > and move it locally. I wonder whether in such circumstances we need to
> > > > > disable the pcp-caching on a per-cgroup basis.
> > > > 
> > > > I think it would be more than sufficient to disable pcp charging on an
> > > > isolated cpu.
> > > 
> > > It might have significant performance consequences.
> > 
> > Is it really significant?
> > 
> > > I'd rather opt out of stock draining for isolated cpus: it might slightly reduce
> > > the accuracy of memory limits and slightly increase the memory footprint (all
> > > those dying memcgs...), but the impact will be limited. Actually it is limited
> > > by the number of cpus.
> > 
> > Hmm, OK, I have misunderstood your proposal. Yes, the overall pcp charges
> > potentially left behind should be small and that shouldn't really be a
> > concern for memcg oom situations (unless the limit is very small, and
> > workloads on isolated cpus using such small hard limits are way beyond my
> > imagination).
> > 
> > My first thought was that those charges could be left behind without any
> > upper bound, but in reality sooner or later something should be running
> > on those cpus, and if the memcg is gone the pcp cache would get refilled
> > and the old charges dropped.
> > 
> > So yes, this is actually a better and even simpler solution. All we need
> > is something like this:
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index ab457f0394ab..13b84bbd70ba 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2344,6 +2344,9 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> >  		struct mem_cgroup *memcg;
> >  		bool flush = false;
> >  
> > +		if (cpu_is_isolated(cpu))
> > +			continue;
> > +
> >  		rcu_read_lock();
> >  		memcg = stock->cached;
> >  		if (memcg && stock->nr_pages &&
> 
> Btw. this would be overly pessimistic. The following should make more
> sense:
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ab457f0394ab..55e440e54504 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2357,7 +2357,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
>  		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
>  			if (cpu == curcpu)
>  				drain_local_stock(&stock->work);
> -			else
> +			else if (!cpu_is_isolated(cpu))
>  				schedule_work_on(cpu, &stock->work);
>  		}
>  	}


Yes, this is exactly what I was thinking of. It should solve the problem
for isolated cpus well enough without introducing overhead for everybody else.
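
To make it concrete, folding your second hunk into the existing loop gives
roughly the following. This is only a sketch pieced together from the quoted
context lines plus my recollection of drain_all_stock() (objcg stock handling
omitted), not an actual patch, and cpu_is_isolated() is the helper this change
would introduce:

	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		struct mem_cgroup *memcg;
		bool flush = false;

		rcu_read_lock();
		memcg = stock->cached;
		/* only bother with cpus caching charges of a descendant memcg */
		if (memcg && stock->nr_pages &&
		    mem_cgroup_is_descendant(memcg, root_memcg))
			flush = true;
		rcu_read_unlock();

		if (flush &&
		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
			if (cpu == curcpu)
				drain_local_stock(&stock->work);
			/* skip isolated cpus; their stale stock ages out once
			 * something runs there again and refills the cache */
			else if (!cpu_is_isolated(cpu))
				schedule_work_on(cpu, &stock->work);
		}
	}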

If you make a proper patch, please add my
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

I understand the concerns regarding spurious OOMs on a 256-core machine,
but I guess they are somewhat theoretical and are also possible with the
current code (e.g. one ooming cgroup can effectively block draining for
everybody else).
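
That blocking comes from drain_all_stock() itself being serialized on a single
mutex; if I remember the current code correctly, the entry looks roughly like
this (paraphrased from memory, not an exact quote of mm/memcontrol.c):

	static void drain_all_stock(struct mem_cgroup *root_memcg)
	{
		int cpu, curcpu;

		/* if somebody is already draining, everybody else bails out */
		if (!mutex_trylock(&percpu_charge_mutex))
			return;
		...
	}

so a drain kicked off on behalf of one (possibly ooming) cgroup makes
concurrent callers return without flushing anything at all.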

Thanks!

Thread overview: 85+ messages
2023-01-25  7:34 [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining Leonardo Bras
2023-01-25  7:34 ` [PATCH v2 1/5] mm/memcontrol: Align percpu memcg_stock to cache Leonardo Bras
2023-01-25  7:34 ` [PATCH v2 2/5] mm/memcontrol: Change stock_lock type from local_lock_t to spinlock_t Leonardo Bras
2023-01-25  7:35 ` [PATCH v2 3/5] mm/memcontrol: Reorder memcg_stock_pcp members to avoid holes Leonardo Bras
2023-01-25  7:35 ` [PATCH v2 4/5] mm/memcontrol: Perform all stock drain in current CPU Leonardo Bras
2023-01-25  7:35 ` [PATCH v2 5/5] mm/memcontrol: Remove flags from memcg_stock_pcp Leonardo Bras
2023-01-25  8:33 ` [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining Michal Hocko
2023-01-25 11:06   ` Leonardo Brás
2023-01-25 11:39     ` Michal Hocko
2023-01-25 18:22     ` Marcelo Tosatti
2023-01-25 23:14       ` Roman Gushchin
2023-01-26  7:41         ` Michal Hocko
2023-01-26 18:03           ` Marcelo Tosatti
2023-01-26 19:20             ` Michal Hocko
2023-01-27  0:32               ` Marcelo Tosatti
2023-01-27  6:58                 ` Michal Hocko
2023-02-01 18:31               ` Roman Gushchin
2023-01-26 23:12           ` Roman Gushchin
2023-01-27  7:11             ` Michal Hocko
2023-01-27  7:22               ` Leonardo Brás
2023-01-27  8:12                 ` Leonardo Brás
2023-01-27  9:23                   ` Michal Hocko
2023-01-27 13:03                   ` Frederic Weisbecker
2023-01-27 13:58               ` Michal Hocko
2023-01-27 18:18                 ` Roman Gushchin [this message]
2023-02-03 15:21                   ` Michal Hocko
2023-02-03 19:25                     ` Roman Gushchin
2023-02-13 13:36                       ` Michal Hocko
2023-01-27  7:14             ` Leonardo Brás
2023-01-27  7:20               ` Michal Hocko
2023-01-27  7:35                 ` Leonardo Brás
2023-01-27  9:29                   ` Michal Hocko
2023-01-27 19:29                     ` Leonardo Brás
2023-01-27 23:50                       ` Roman Gushchin
2023-01-26 18:19         ` Marcelo Tosatti
2023-01-27  5:40           ` Leonardo Brás
2023-01-26  2:01       ` Hillf Danton
2023-01-26  7:45       ` Michal Hocko
2023-01-26 18:14         ` Marcelo Tosatti
2023-01-26 19:13           ` Michal Hocko
2023-01-27  6:55             ` Leonardo Brás
2023-01-31 11:35               ` Marcelo Tosatti
2023-02-01  4:36                 ` Leonardo Brás
2023-02-01 12:52                   ` Michal Hocko
2023-02-01 12:41                   ` Michal Hocko
2023-02-04  4:55                     ` Leonardo Brás
2023-02-05 19:49                       ` Roman Gushchin
2023-02-07  3:18                         ` Leonardo Brás
2023-02-08 19:23                           ` Roman Gushchin
