From: "Huang, Ying" <ying.huang@intel.com>
To: Yang Shi <shy828301@gmail.com>
Cc: Shakeel Butt <shakeelb@google.com>, Tim Chen <tim.c.chen@linux.intel.com>,
	Michal Hocko <mhocko@suse.cz>, Johannes Weiner <hannes@cmpxchg.org>,
	Andrew Morton <akpm@linux-foundation.org>, Dave Hansen <dave.hansen@intel.com>,
	Dan Williams <dan.j.williams@intel.com>, David Rientjes <rientjes@google.com>,
	Linux MM <linux-mm@kvack.org>, Cgroups <cgroups@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>, Feng Tang <feng.tang@intel.com>
Subject: Re: [RFC PATCH v1 00/11] Manage the top tier memory in a tiered memory
Date: Fri, 09 Apr 2021 10:58:03 +0800	[thread overview]
Message-ID: <87eefkxiys.fsf@yhuang6-desk1.ccr.corp.intel.com> (raw)
In-Reply-To: <CAHbLzkrPD6s9vRy89cgQ36e+1cs6JbLqV84se7nnvP9MByizXA@mail.gmail.com> (Yang Shi's message of "Thu, 8 Apr 2021 11:00:54 -0700")

Yang Shi <shy828301@gmail.com> writes:

> On Thu, Apr 8, 2021 at 10:19 AM Shakeel Butt <shakeelb@google.com> wrote:
>>
>> Hi Tim,
>>
>> On Mon, Apr 5, 2021 at 11:08 AM Tim Chen <tim.c.chen@linux.intel.com> wrote:
>> >
>> > Traditionally, all memory is DRAM.  Some DRAM might be closer/faster
>> > than others NUMA-wise, but a byte of media has about the same cost
>> > whether it is close or far.  But with new memory tiers such as
>> > Persistent Memory (PMEM), there is a choice between fast/expensive
>> > DRAM and slow/cheap PMEM.
>> >
>> > The fast/expensive memory lives in the top tier of the memory hierarchy.
>> >
>> > Previously, the patchset
>> > [PATCH 00/10] [v7] Migrate Pages in lieu of discard
>> > https://lore.kernel.org/linux-mm/20210401183216.443C4443@viggo.jf.intel.com/
>> > provides a mechanism to demote cold pages from a DRAM node into PMEM.
>> >
>> > And the patchset
>> > [PATCH 0/6] [RFC v6] NUMA balancing: optimize memory placement for memory tiering system
>> > https://lore.kernel.org/linux-mm/20210311081821.138467-1-ying.huang@intel.com/
>> > provides a mechanism to promote hot pages in PMEM to the DRAM node,
>> > leveraging autonuma.
>> >
>> > The two patchsets together keep the hot pages in DRAM and colder pages
>> > in PMEM.
>>
>> Thanks for working on this, as it is becoming more and more important,
>> particularly in the data centers where memory is a big portion of the
>> cost.
>>
>> I see you have responded to Michal and I will add my more specific
>> response there.  Here I wanted to give my high-level concern regarding
>> using v1's soft-limit-like semantics for top tier memory.
>>
>> This patch series aims to distribute/partition top tier memory between
>> jobs of different priorities.  We want high priority jobs to have
>> preferential access to the top tier memory, and we don't want low
>> priority jobs to hog the top tier memory.
>>
>> Using v1's soft-limit-like behavior can potentially cause high
>> priority jobs to stall to make enough space on top tier memory on
>> their allocation path, and I think this patchset is aiming to reduce
>> that impact by making kswapd do that work.  However, I think the more
>> concerning issue is a low priority job hogging the top tier memory.
>>
>> The possible ways a low priority job can hog the top tier memory are
>> by allocating non-movable memory or by mlocking the memory.  (Oh,
>> there is also pinning the memory, but I don't know if there is a user
>> API to pin memory?)  For the mlocked memory, you need to either modify
>> the reclaim code or use a different mechanism for demoting cold memory.
>
> Do you mean long-term pins?  RDMA should be able to simply pin the
> memory for weeks.  A lot of transient pins come from Direct I/O; they
> should be of less concern.
>
> The low priority jobs should be able to be restricted by cpuset; for
> example, just keep them on second tier memory nodes.  Then all the
> above problems are gone.

To optimize the page placement of a process between DRAM and PMEM, we
want to place the hot pages in DRAM and the cold pages in PMEM.  But the
memory access pattern changes over time, so we need to migrate pages
between DRAM and PMEM to adapt to those changes.  To avoid hot pages
being pinned in PMEM forever, one way is to online the PMEM as movable
zones.  If so, and if the low priority jobs are restricted by cpuset to
allocate from PMEM only, we may fail to run quite a few workloads, as
discussed in the following thread,

https://lore.kernel.org/linux-mm/1604470210-124827-1-git-send-email-feng.tang@intel.com/

>>
>> Basically I am saying we should put an upfront control (limit) on the
>> usage of top tier memory by the jobs.
>
> This sounds similar to what I talked about in LSFMM 2019
> (https://lwn.net/Articles/787418/).  We used to have some potential
> use cases which divide the DRAM:PMEM ratio for different jobs or memcgs
> when I was with Alibaba.
>
> In the first place I thought about a per-NUMA-node limit, but it was
> very hard to configure correctly for users unless you know exactly
> your memory usage and hot/cold memory distribution.
>
> I'm wondering, just off the top of my head, if we could extend the
> semantics of the low and min limits.  For example, just redefine low
> and min to be "the limit on top tier memory".  Then we could have low
> priority jobs have a 0 low/min limit.

Per my understanding, memory.low/min are for memory protection instead
of memory limiting.  memory.high is for memory limiting.

Best Regards,
Huang, Ying
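The cpuset restriction and movable-zone onlining discussed above can be sketched with the memory-hotplug sysfs interface and cgroup v2; the node number (1), memory-block number (memory42), and cgroup name are hypothetical and depend on the machine's topology:

```shell
# Assumption: node 0 is DRAM, node 1 is PMEM (check with `numactl -H`).

# Online a hotplugged PMEM memory block into ZONE_MOVABLE so its pages
# stay migratable (no unmovable kernel allocations land there):
echo online_movable > /sys/devices/system/memory/memory42/state

# Restrict a low-priority job's allocations to the PMEM node via the
# cgroup v2 cpuset controller:
echo "+cpuset" > /sys/fs/cgroup/cgroup.subtree_control
mkdir /sys/fs/cgroup/lowprio
echo 1 > /sys/fs/cgroup/lowprio/cpuset.mems
echo "$JOB_PID" > /sys/fs/cgroup/lowprio/cgroup.procs
```

As the reply notes, this hard binding is exactly what can make some workloads fail when the PMEM node fills up, since the job has no fallback to DRAM.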
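The protection-versus-limiting distinction drawn above shows up directly in the cgroup v2 memory interface; the cgroup names and byte values here are purely illustrative:

```shell
# memory.min / memory.low PROTECT a cgroup's memory from reclaim:
# below these thresholds its pages are not (min) or preferentially not
# (low) reclaimed.  They do not cap how much the cgroup may allocate.
echo 4G > /sys/fs/cgroup/highprio/memory.min

# memory.high LIMITS usage: above it, allocations are throttled and the
# cgroup is aggressively reclaimed back toward the threshold.
echo 8G > /sys/fs/cgroup/lowprio/memory.high

# A low-priority job would get zero protection (the default):
echo 0 > /sys/fs/cgroup/lowprio/memory.min
```

Redefining low/min as "the limit on top tier memory" would therefore invert their current reclaim-protection semantics, which is the objection being raised.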