From: Michal Hocko <mhocko@suse.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: "Leonardo Brás" <leobras@redhat.com>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Roman Gushchin" <roman.gushchin@linux.dev>,
	"Shakeel Butt" <shakeelb@google.com>,
	"Muchun Song" <muchun.song@linux.dev>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining
Date: Wed, 1 Feb 2023 13:41:16 +0100	[thread overview]
Message-ID: <Y9pd7AxAILUSHrpe@dhcp22.suse.cz> (raw)
In-Reply-To: <Y9j9BnMwfm4TJks7@tpad>

On Tue 31-01-23 08:35:34, Marcelo Tosatti wrote:
[...]
> So it would be good to point out a specific problematic 
> testcase/scenario with using the spinlock in this particular case?

Please think about it some more. The sole purpose of the pcp charge
caching is to avoid atomics, because the normal fast path (i.e. no
limit hit) otherwise goes through the page counter, which is an atomic
counter. If you go with a spinlock for the pcp cache you are giving up
some of the advantage of the caching. Sure, that is still fewer atomic
operations than walking a deeper memcg hierarchy, but still.
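
To make the tradeoff concrete, here is a rough sketch of what the cache
buys on the fast path (simplified, made-up names; the real code is
consume_stock() and the page_counter machinery in mm/memcontrol.c):

/*
 * Rough sketch of the pcp charge cache fast path -- simplified from
 * consume_stock() in mm/memcontrol.c, names made up for illustration.
 * Needs <linux/percpu.h> and <linux/memcontrol.h>.
 */
struct pcp_stock_sketch {
	struct mem_cgroup *cached;	/* memcg the cached pages are pre-charged to */
	unsigned int nr_pages;		/* pre-charged pages left in the cache */
};
static DEFINE_PER_CPU(struct pcp_stock_sketch, pcp_stock);

static bool consume_stock_sketch(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	struct pcp_stock_sketch *stock;
	bool ret = false;

	/* purely cpu-local: no atomic RMW, no shared cacheline bouncing */
	stock = get_cpu_ptr(&pcp_stock);
	if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
		stock->nr_pages -= nr_pages;
		ret = true;
	}
	put_cpu_ptr(&pcp_stock);
	return ret;
}
/*
 * On a miss the charge falls back to page_counter_try_charge(), which
 * does an atomic RMW on every level of the memcg hierarchy -- exactly
 * the atomics the cache exists to avoid.
 */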

Not to mention potential contention, which is hard to predict and
depends very much on the specific runtime. So it is not something that
would be easy to test for other than with artificial testcases. Full
cpu pcp draining is not a very common operation, and one could argue
that the problem will be limited, but so far I haven't really heard any
strong arguments against the proposal to simply avoid scheduling the
work on isolated cpus, which is a much simpler solution and achieves
the same effect.
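
For reference, that alternative boils down to something like the sketch
below in the drain path (illustrative names only, not the actual
patch). Nothing on the charge fast path needs to change for it.

/*
 * Sketch of the alternative: skip isolated cpus when flushing all
 * per-cpu stocks, instead of queueing drain work on them.  Helper and
 * work item names are illustrative only; needs <linux/workqueue.h>
 * and <linux/sched/isolation.h>.
 */
static DEFINE_PER_CPU(struct work_struct, stock_drain_work);	/* INIT_WORK()ed elsewhere */

static void drain_all_stock_sketch(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		/*
		 * Isolated (isolcpus/nohz_full) cpus are left alone: they
		 * keep their small cached charge and never see the drain
		 * work item, so nothing interrupts them.
		 */
		if (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN))
			continue;
		schedule_work_on(cpu, &per_cpu(stock_drain_work, cpu));
	}
}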
-- 
Michal Hocko
SUSE Labs

Thread overview: 85+ messages
2023-01-25  7:34 [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining Leonardo Bras
2023-01-25  7:34 ` Leonardo Bras
2023-01-25  7:34 ` [PATCH v2 1/5] mm/memcontrol: Align percpu memcg_stock to cache Leonardo Bras
2023-01-25  7:34   ` Leonardo Bras
2023-01-25  7:34 ` [PATCH v2 2/5] mm/memcontrol: Change stock_lock type from local_lock_t to spinlock_t Leonardo Bras
2023-01-25  7:34   ` Leonardo Bras
2023-01-25  7:35 ` [PATCH v2 3/5] mm/memcontrol: Reorder memcg_stock_pcp members to avoid holes Leonardo Bras
2023-01-25  7:35   ` Leonardo Bras
2023-01-25  7:35 ` [PATCH v2 4/5] mm/memcontrol: Perform all stock drain in current CPU Leonardo Bras
2023-01-25  7:35 ` [PATCH v2 5/5] mm/memcontrol: Remove flags from memcg_stock_pcp Leonardo Bras
2023-01-25  7:35   ` Leonardo Bras
2023-01-25  8:33 ` [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining Michal Hocko
2023-01-25  8:33   ` Michal Hocko
2023-01-25 11:06   ` Leonardo Brás
2023-01-25 11:06     ` Leonardo Brás
2023-01-25 11:39     ` Michal Hocko
2023-01-25 11:39       ` Michal Hocko
2023-01-25 18:22     ` Marcelo Tosatti
2023-01-25 18:22       ` Marcelo Tosatti
2023-01-25 23:14       ` Roman Gushchin
2023-01-25 23:14         ` Roman Gushchin
2023-01-26  7:41         ` Michal Hocko
2023-01-26  7:41           ` Michal Hocko
2023-01-26 18:03           ` Marcelo Tosatti
2023-01-26 19:20             ` Michal Hocko
2023-01-27  0:32               ` Marcelo Tosatti
2023-01-27  0:32                 ` Marcelo Tosatti
2023-01-27  6:58                 ` Michal Hocko
2023-01-27  6:58                   ` Michal Hocko
2023-02-01 18:31               ` Roman Gushchin
2023-01-26 23:12           ` Roman Gushchin
2023-01-27  7:11             ` Michal Hocko
2023-01-27  7:11               ` Michal Hocko
2023-01-27  7:22               ` Leonardo Brás
2023-01-27  8:12                 ` Leonardo Brás
2023-01-27  8:12                   ` Leonardo Brás
2023-01-27  9:23                   ` Michal Hocko
2023-01-27  9:23                     ` Michal Hocko
2023-01-27 13:03                   ` Frederic Weisbecker
2023-01-27 13:03                     ` Frederic Weisbecker
2023-01-27 13:58               ` Michal Hocko
2023-01-27 13:58                 ` Michal Hocko
2023-01-27 18:18                 ` Roman Gushchin
2023-01-27 18:18                   ` Roman Gushchin
2023-02-03 15:21                   ` Michal Hocko
2023-02-03 15:21                     ` Michal Hocko
2023-02-03 19:25                     ` Roman Gushchin
2023-02-03 19:25                       ` Roman Gushchin
2023-02-13 13:36                       ` Michal Hocko
2023-02-13 13:36                         ` Michal Hocko
2023-01-27  7:14             ` Leonardo Brás
2023-01-27  7:14               ` Leonardo Brás
2023-01-27  7:20               ` Michal Hocko
2023-01-27  7:20                 ` Michal Hocko
2023-01-27  7:35                 ` Leonardo Brás
2023-01-27  9:29                   ` Michal Hocko
2023-01-27 19:29                     ` Leonardo Brás
2023-01-27 19:29                       ` Leonardo Brás
2023-01-27 23:50                       ` Roman Gushchin
2023-01-26 18:19         ` Marcelo Tosatti
2023-01-26 18:19           ` Marcelo Tosatti
2023-01-27  5:40           ` Leonardo Brás
2023-01-27  5:40             ` Leonardo Brás
2023-01-26  2:01       ` Hillf Danton
2023-01-26  7:45       ` Michal Hocko
2023-01-26  7:45         ` Michal Hocko
2023-01-26 18:14         ` Marcelo Tosatti
2023-01-26 18:14           ` Marcelo Tosatti
2023-01-26 19:13           ` Michal Hocko
2023-01-26 19:13             ` Michal Hocko
2023-01-27  6:55             ` Leonardo Brás
2023-01-27  6:55               ` Leonardo Brás
2023-01-31 11:35               ` Marcelo Tosatti
2023-01-31 11:35                 ` Marcelo Tosatti
2023-02-01  4:36                 ` Leonardo Brás
2023-02-01 12:52                   ` Michal Hocko
2023-02-01 12:52                     ` Michal Hocko
2023-02-01 12:41                 ` Michal Hocko [this message]
2023-02-01 12:41                   ` Michal Hocko
2023-02-04  4:55                   ` Leonardo Brás
2023-02-04  4:55                     ` Leonardo Brás
2023-02-05 19:49                     ` Roman Gushchin
2023-02-07  3:18                       ` Leonardo Brás
2023-02-08 19:23                         ` Roman Gushchin
2023-02-08 19:23                           ` Roman Gushchin

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=Y9pd7AxAILUSHrpe@dhcp22.suse.cz \
    --to=mhocko@suse.com \
    --cc=akpm@linux-foundation.org \
    --cc=cgroups@vger.kernel.org \
    --cc=hannes@cmpxchg.org \
    --cc=leobras@redhat.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mtosatti@redhat.com \
    --cc=muchun.song@linux.dev \
    --cc=roman.gushchin@linux.dev \
    --cc=shakeelb@google.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.