From: Michal Hocko <mhocko@suse.com>
To: Feng Tang <feng.tang@intel.com>
Cc: Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>, Tejun Heo <tj@kernel.org>,
	Zefan Li <lizefan.x@bytedance.com>,
	Waiman Long <longman@redhat.com>,
	"Huang, Ying" <ying.huang@intel.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"cgroups@vger.kernel.org" <cgroups@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Hansen, Dave" <dave.hansen@intel.com>,
	"Chen, Tim C" <tim.c.chen@intel.com>,
	"Yin, Fengwei" <fengwei.yin@intel.com>
Subject: Re: [PATCH] mm/vmscan: respect cpuset policy during page demotion
Date: Wed, 26 Oct 2022 17:59:19 +0200	[thread overview]
Message-ID: <Y1lZV6qHp3gIINGc@dhcp22.suse.cz> (raw)
In-Reply-To: <Y1kl8VbPE0RYdyEB@feng-clx>

On Wed 26-10-22 20:20:01, Feng Tang wrote:
> On Wed, Oct 26, 2022 at 05:19:50PM +0800, Michal Hocko wrote:
> > On Wed 26-10-22 16:00:13, Feng Tang wrote:
> > > On Wed, Oct 26, 2022 at 03:49:48PM +0800, Aneesh Kumar K V wrote:
> > > > On 10/26/22 1:13 PM, Feng Tang wrote:
> > > > > In the page reclaim path, memory can be demoted from a faster memory
> > > > > tier to a slower memory tier. Currently, there is no check against the
> > > > > cpuset's memory policy, so even if the target demotion node is not
> > > > > allowed by the cpuset, the demotion still happens, which breaks the
> > > > > cpuset semantics.
> > > > > 
> > > > > So add a cpuset policy check in the demotion path and skip the demotion
> > > > > if the demotion targets are not allowed by the cpuset.
> > > > > 
> > > > 
> > > > What about the vma policy or the task memory policy? Shouldn't we respect
> > > > those memory policy restrictions while demoting the page? 
> > >  
> > > Good question! We have some basic patches that consider memory policy
> > > in the demotion path too; they are still under test and will be posted
> > > soon. The basic idea is similar to this patch.
> > 
> > For that you need to consult each vma and its owning task(s), and that
> > to me sounds like something to be done in folio_check_references.
> > Relying on memcg to get the cpuset cgroup is really ugly and not really
> > 100% correct. The memory controller might be disabled and then you do not
> > have that association anymore.
>  
> You are right. For the cpuset case, the solution depends on 'CONFIG_MEMCG=y',
> and the bright side is that most distributions have it enabled.

CONFIG_MEMCG=y is not sufficient. You would also need to enable the memcg
controller at runtime.
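To illustrate, a memcg-based check would have to look roughly like the sketch
below (untested; cpuset_mems_of_memcg() and demotion_allowed_by_cpuset() are
hypothetical names, not existing kernel API), and it has no choice but to give
up exactly when that association is missing:

#include <linux/memcontrol.h>
#include <linux/nodemask.h>

/*
 * Untested sketch only: decide whether a folio may be demoted to
 * target_nid based on the cpuset derived from its memcg.  The helper
 * cpuset_mems_of_memcg() is hypothetical.
 */
static bool demotion_allowed_by_cpuset(struct folio *folio, int target_nid)
{
	struct mem_cgroup *memcg;
	nodemask_t allowed;

	/* No usable association: nothing to enforce here. */
	if (mem_cgroup_disabled())
		return true;

	memcg = folio_memcg(folio);
	if (!memcg || mem_cgroup_is_root(memcg))
		return true;

	/* Hypothetical: look up the cpuset's mems_allowed via the memcg's cgroup. */
	cpuset_mems_of_memcg(memcg, &allowed);

	return node_isset(target_nid, allowed);
}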
 
> > This all can get quite expensive, so the primary question is: does the
> > existing behavior generate any real issues, or is this more of a
> > correctness exercise? I mean, it certainly is not great to demote to an
> > incompatible NUMA node, but are there any reasonable configurations where
> > the demotion target node is explicitly excluded from the memory
> > policy/cpuset?
> 
> We haven't got a customer report on this, but quite a few customers use
> cpuset to bind specific memory nodes to a docker container (you've helped
> us solve an OOM issue in such a case), so I think it's practical to respect
> the cpuset semantics as much as we can.

Yes, it is definitely better to respect cpusets and all local memory
policies. There is no dispute there. The question is whether this is really
worth it. How often would cpusets (or policies in general) actively go
against demotion nodes (i.e. exclude those nodes from their allowed
nodemask)?

I can imagine workloads which wouldn't like to get their memory demoted
for some reason, but wouldn't it be more practical to state that
explicitly (e.g. via prctl) rather than encoding it indirectly in
cpusets/memory policies?
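For comparison, the explicit configuration that exists today is a task memory
policy set from userspace; the simplified snippet below binds a task's
allocations to node 0, which is exactly the kind of setup that would collide
with a demotion target on another node. A dedicated per-task demotion opt-out
(the prctl mentioned above) is purely hypothetical at this point.

/* Simplified userspace example of the existing "memory policy" route.
 * Build with -lnuma (numaif.h comes from libnuma); error handling is
 * minimal.  PR_SET_MEMORY_DEMOTION below is a made-up name, not a real
 * prctl. */
#include <numaif.h>
#include <stdio.h>

int main(void)
{
	/* Restrict allocations (and thus any sane migration target) to node 0. */
	unsigned long nodemask = 1UL << 0;

	if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8))
		perror("set_mempolicy");

	/*
	 * The hypothetical alternative discussed above would be something
	 * like prctl(PR_SET_MEMORY_DEMOTION, 0, 0, 0, 0) to opt the task
	 * out of demotion without touching cpusets or mempolicies.
	 */
	return 0;
}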
 
> Your concern about the cost makes sense! Some rough ideas are:
> * if shrink_folio_list() is called by kswapd, the folios come from
>   the same per-memcg lruvec, so only one check is needed
> * if not from kswapd, e.g. when called from madvise or the DAMON code,
>   we can cache the last memcg, and if the next folio's memcg is the
>   same as the cached one, we reuse its result (roughly as sketched
>   below). Due to the locality, the real check is rarely performed.
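Such a cache would look roughly like this (untested sketch;
demotion_allowed_by_cpuset() is the hypothetical helper from the earlier
sketch):

#include <linux/list.h>
#include <linux/memcontrol.h>

/*
 * Untested sketch of the caching idea: remember the last memcg seen on
 * the folio list and reuse the result of its check for consecutive
 * folios from the same memcg.  Folios that may not be demoted are moved
 * to keep_list instead.
 */
static void filter_demotion_candidates(struct list_head *folio_list,
				       struct list_head *keep_list,
				       int target_nid)
{
	struct folio *folio, *next;
	struct mem_cgroup *last_memcg = NULL;
	bool last_allowed = true;

	list_for_each_entry_safe(folio, next, folio_list, lru) {
		struct mem_cgroup *memcg = folio_memcg(folio);

		if (memcg != last_memcg) {
			last_memcg = memcg;
			last_allowed = demotion_allowed_by_cpuset(folio, target_nid);
		}
		if (!last_allowed)
			/* not demotable: keep it on its current tier */
			list_move(&folio->lru, keep_list);
	}
}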

The memcg lookup is not the expensive part. You need to get from the page
-> all mapping vmas (vma::vm_policy) -> mm -> task::mempolicy.
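Per folio that roughly means an rmap walk of the following shape (heavily
simplified and untested: locking, reference counting, THP handling and all
MPOL_* modes other than MPOL_BIND are ignored, and mm->owner is only
available with CONFIG_MEMCG):

#include <linux/rmap.h>
#include <linux/mempolicy.h>
#include <linux/sched.h>

struct demotion_walk_state {		/* made-up name, sketch only */
	int target_nid;
	bool allowed;
};

static bool vma_allows_demotion_node(struct folio *folio,
				     struct vm_area_struct *vma,
				     unsigned long addr, void *arg)
{
	struct demotion_walk_state *state = arg;
	struct mempolicy *pol = vma->vm_policy;

	/* fall back to the owning task's policy */
	if (!pol && vma->vm_mm && vma->vm_mm->owner)
		pol = vma->vm_mm->owner->mempolicy;

	if (pol && pol->mode == MPOL_BIND &&
	    !node_isset(state->target_nid, pol->nodes)) {
		state->allowed = false;
		return false;	/* one mapper objects: abort the walk */
	}
	return true;
}

static bool folio_mappers_allow_node(struct folio *folio, int target_nid)
{
	struct demotion_walk_state state = {
		.target_nid = target_nid,
		.allowed = true,
	};
	struct rmap_walk_control rwc = {
		.rmap_one = vma_allows_demotion_node,
		.arg = &state,
	};

	rmap_walk(folio, &rwc);
	return state.allowed;
}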

-- 
Michal Hocko
SUSE Labs

Thread overview:
2022-10-26  7:43 [PATCH] mm/vmscan: respect cpuset policy during page demotion Feng Tang
2022-10-26  7:49 ` Aneesh Kumar K V
2022-10-26  8:00   ` Feng Tang
2022-10-26  9:19     ` Michal Hocko
2022-10-26 10:42       ` Aneesh Kumar K V
2022-10-26 11:02         ` Michal Hocko
2022-10-26 12:08           ` Aneesh Kumar K V
2022-10-26 12:21             ` Michal Hocko
2022-10-26 12:35               ` Aneesh Kumar K V
2022-10-27  9:02                 ` Michal Hocko
2022-10-27 10:16                   ` Aneesh Kumar K V
2022-10-27 13:05                     ` Michal Hocko
2022-10-26 12:20       ` Feng Tang
2022-10-26 15:59         ` Michal Hocko [this message]
2022-10-26 17:57           ` Yang Shi
2022-10-27  7:11             ` Feng Tang
2022-10-27  7:45               ` Huang, Ying
2022-10-27  7:51                 ` Feng Tang
2022-10-27 17:55               ` Yang Shi
2022-10-28  3:37                 ` Feng Tang
2022-10-28  5:54                   ` Huang, Ying
2022-10-28 17:23                     ` Yang Shi
2022-10-31  1:56                       ` Huang, Ying
2022-10-31  2:19                       ` Feng Tang
2022-10-28  5:09                 ` Aneesh Kumar K V
2022-10-28 17:16                   ` Yang Shi
2022-10-31  1:53                     ` Huang, Ying
2022-10-27  6:47           ` Huang, Ying
2022-10-27  7:10             ` Michal Hocko
2022-10-27  7:39               ` Huang, Ying
2022-10-27  8:01                 ` Michal Hocko
2022-10-27  9:31                   ` Huang, Ying
2022-10-27 12:29                     ` Michal Hocko
2022-10-27 23:22                       ` Huang, Ying
2022-10-31  8:40                         ` Michal Hocko
2022-10-31  8:51                           ` Huang, Ying
2022-10-31  9:18                             ` Michal Hocko
2022-10-31 14:09                           ` Feng Tang
2022-10-31 14:32                             ` Michal Hocko
2022-11-07  8:05                               ` Feng Tang
2022-11-07  8:17                                 ` Michal Hocko
2022-11-01  3:17                     ` Huang, Ying
2022-10-26  8:26 ` Yin, Fengwei
2022-10-26  8:37   ` Feng Tang
2022-10-26 14:36 ` Waiman Long
2022-10-27  5:57   ` Feng Tang
2022-10-27  5:13 ` Huang, Ying
2022-10-27  5:49   ` Feng Tang
2022-10-27  6:05     ` Huang, Ying