From: "Huang, Ying" <ying.huang@intel.com>
To: Yang Shi <shy828301@gmail.com>
Cc: Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>,
	Feng Tang <feng.tang@intel.com>,
	"Hocko, Michal" <mhocko@suse.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>, Tejun Heo <tj@kernel.org>,
	Zefan Li <lizefan.x@bytedance.com>,
	Waiman Long <longman@redhat.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"cgroups@vger.kernel.org" <cgroups@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Hansen, Dave" <dave.hansen@intel.com>,
	"Chen, Tim C" <tim.c.chen@intel.com>,
	"Yin, Fengwei" <fengwei.yin@intel.com>
Subject: Re: [PATCH] mm/vmscan: respect cpuset policy during page demotion
Date: Mon, 31 Oct 2022 09:53:18 +0800
Message-ID: <87cza8ep2p.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <CAHbLzkrqaCF8wvtsAmvaZUqCAoRfO1qPSFyK7bVq5O5m1zuksg@mail.gmail.com> (Yang Shi's message of "Fri, 28 Oct 2022 10:16:50 -0700")

Yang Shi <shy828301@gmail.com> writes:

> On Thu, Oct 27, 2022 at 10:09 PM Aneesh Kumar K V
> <aneesh.kumar@linux.ibm.com> wrote:
>>
>> On 10/27/22 11:25 PM, Yang Shi wrote:
>> > On Thu, Oct 27, 2022 at 12:12 AM Feng Tang <feng.tang@intel.com> wrote:
>> >>
>> >> On Thu, Oct 27, 2022 at 01:57:52AM +0800, Yang Shi wrote:
>> >>> On Wed, Oct 26, 2022 at 8:59 AM Michal Hocko <mhocko@suse.com> wrote:
>> >> [...]
>> >>>>>> This all can get quite expensive, so the primary question is: does the
>> >>>>>> existing behavior generate any real issues, or is this more of a
>> >>>>>> correctness exercise? I mean it certainly is not great to demote to an
>> >>>>>> incompatible numa node but are there any reasonable configurations when
>> >>>>>> the demotion target node is explicitly excluded from memory
>> >>>>>> policy/cpuset?
>> >>>>>
>> >>>>> We haven't got a customer report on this, but quite a few customers
>> >>>>> use cpuset to bind specific memory nodes to a docker container (you've
>> >>>>> helped us solve an OOM issue in such cases), so I think it's practical
>> >>>>> to respect the cpuset semantics as much as we can.
>> >>>>
>> >>>> Yes, it is definitely better to respect cpusets and all local memory
>> >>>> policies. There is no dispute there. The thing is whether this is really
>> >>>> worth it. How often would cpusets (or policies in general) go actively
>> >>>> against demotion nodes (i.e. exclude those nodes from their allowed node
>> >>>> mask)?
>> >>>>
>> >>>> I can imagine workloads which wouldn't like to get their memory demoted
>> >>>> for some reason but wouldn't it be more practical to tell that
>> >>>> explicitly (e.g. via prctl) rather than configuring cpusets/memory
>> >>>> policies explicitly?
>> >>>>
>> >>>>> Your concern about the expensive cost makes sense! Some raw ideas are:
>> >>>>> * if the shrink_folio_list is called by kswapd, the folios come from
>> >>>>>   the same per-memcg lruvec, so only one check is enough
>> >>>>> * if not from kswapd, like when called from madvise or DAMON code, we can
>> >>>>>   keep a memcg cache, and if the next folio's memcg is the same as the
>> >>>>>   cached one, we reuse its result. And due to locality, the real
>> >>>>>   check is rarely performed.
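
As an aside on the caching idea above: a rough, untested sketch of what
such a cache could look like for the non-kswapd callers.  This is purely
illustrative, and demotion_allowed_for_memcg() is a hypothetical helper
standing in for whatever cpuset/mempolicy check is finally used:

struct demotion_check_cache {
	struct mem_cgroup *memcg;
	bool allowed;
};

/*
 * Callers outside kswapd scan a folio list that can mix memcgs, so
 * remember the last memcg seen and its verdict, and redo the real
 * check only when the memcg changes.
 */
static bool folio_demotion_allowed(struct folio *folio, int target_nid,
				   struct demotion_check_cache *cache)
{
	struct mem_cgroup *memcg = folio_memcg(folio);

	if (!cache->memcg || memcg != cache->memcg) {
		cache->memcg = memcg;
		/* hypothetical helper: the real cpuset/mempolicy check */
		cache->allowed = demotion_allowed_for_memcg(memcg, target_nid);
	}
	return cache->allowed;
}
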
>> >>>>
>> >>>> memcg is not the expensive part of the thing. You need to get from page
>> >>>> -> all vmas::vm_policy -> mm -> task::mempolicy
>> >>>
>> >>> Yeah, on the same page with Michal. Figuring out the mempolicy from a page
>> >>> seems quite expensive, and the correctness can't be guaranteed since the
>> >>> mempolicy could be set per-thread, and mm->owner depends on
>> >>> CONFIG_MEMCG, so it doesn't work for !CONFIG_MEMCG.
>> >>
>> >> Yes, you are right. Our "working" pseudo code for mempolicy looks like
>> >> what Michal mentioned; it can't work for all cases, but tries to
>> >> enforce the policy whenever possible:
>> >>
>> >> static bool  __check_mpol_demotion(struct folio *folio, struct vm_area_struct *vma,
>> >>                 unsigned long addr, void *arg)
>> >> {
>> >>         bool *skip_demotion = arg;
>> >>         struct mempolicy *mpol;
>> >>         int nid, dnid;
>> >>         bool ret = true;
>> >>
>> >>         mpol = __get_vma_policy(vma, addr);
>> >>         if (!mpol) {
>> >>                 struct task_struct *task;
>> >>                 if (vma->vm_mm)
>> >>                         task = vma->vm_mm->owner;
>> >
>> > But this task may not be the task you want IIUC. For example, the
>> > process has two threads, A and B, with different mempolicies.
>> > vmscan is trying to demote a page belonging to thread A, but the task
>> > may point to thread B, so you actually get the wrong mempolicy.
>> >
>>
>> But if we swap out this page and fault it back in via thread B, the page would
>> get allocated as per thread B's mempolicy. So if we demote based on thread B's
>> policy, are we breaking anything?
>
> If the page is demoted by following thread B's mempolicy, didn't it
> already break thread A's mempolicy in the first place if you care
> about it? If thread A and thread B have the same mempolicy, then it is
> not a problem.
>
> Actually there is another problem for shared pages. If a page is shared
> by two processes, P1 and P2, when you do the rmap walk to find the task,
> you may find two contradicting mempolicies; which mempolicy would you like
> to obey? Do you have to save all the intermediate mempolicy results
> somewhere, or do you just bail out once the first mempolicy is found?

Yes.  There's no perfect solution for this.  I suggest we avoid demoting
if any VMA (or task) prevents it, because allowing demotion is the
default policy.  And, by default, we will not promote the page back if
it becomes hot later, because promotion only works for the default
memory policy.
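
For illustration only (untested, and not Feng's patch), the rule could
slot into the __check_mpol_demotion() callback quoted above roughly like
this; the use of next_demotion_node()/get_task_policy() here is an
assumption, shared-policy reference counting is ignored, and only
MPOL_BIND is treated as a hard exclusion:

static bool __check_mpol_demotion(struct folio *folio,
				  struct vm_area_struct *vma,
				  unsigned long addr, void *arg)
{
	bool *skip_demotion = arg;
	struct mempolicy *mpol;
	int target_nid = next_demotion_node(folio_nid(folio));

	if (target_nid == NUMA_NO_NODE)
		return true;	/* no demotion target, nothing to check */

	mpol = __get_vma_policy(vma, addr);
	if (!mpol && vma->vm_mm && vma->vm_mm->owner)
		/* mm->owner is only available with CONFIG_MEMCG */
		mpol = get_task_policy(vma->vm_mm->owner);
	if (!mpol)
		return true;	/* default policy, no objection */

	if (mpol->mode == MPOL_BIND &&
	    !node_isset(target_nid, mpol->nodes)) {
		*skip_demotion = true;
		/* one objecting VMA is enough: stop the rmap walk */
		return false;
	}

	return true;
}

The callback would be driven by rmap_walk() with .rmap_one set to
__check_mpol_demotion and &skip_demotion passed as .arg, and the folio
would simply be kept off the demotion list whenever skip_demotion comes
back true.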

Best Regards,
Huang, Ying
