linux-kernel.vger.kernel.org archive mirror
From: Yang Shi <shy828301@gmail.com>
To: Shakeel Butt <shakeelb@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Jonathan Adams <jwadams@google.com>,
	"Chen, Tim C" <tim.c.chen@intel.com>
Subject: Re: [PATCH 0/4] [RFC] Migrate Pages in lieu of discard
Date: Fri, 18 Oct 2019 14:44:08 -0700
Message-ID: <CAHbLzkp1cDFizWOvknHUT0N9Y6AtQM9Z_Af9mQpiQ4a=PRexkw@mail.gmail.com>
In-Reply-To: <CALvZod4yVgHa6oVjFFhV1rpE0auxdEmu2g2pEBmZ4Z-CP-ru=g@mail.gmail.com>

On Thu, Oct 17, 2019 at 3:58 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Thu, Oct 17, 2019 at 10:20 AM Yang Shi <shy828301@gmail.com> wrote:
> >
> > On Thu, Oct 17, 2019 at 7:26 AM Dave Hansen <dave.hansen@intel.com> wrote:
> > >
> > > On 10/16/19 8:45 PM, Shakeel Butt wrote:
> > > > On Wed, Oct 16, 2019 at 3:49 PM Dave Hansen <dave.hansen@linux.intel.com> wrote:
> > > >> This set implements a solution to these problems.  At the end of the
> > > >> reclaim process in shrink_page_list() just before the last page
> > > >> refcount is dropped, the page is migrated to persistent memory instead
> > > >> of being dropped.
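
The mechanism Dave describes above could be sketched, very loosely and with made-up names (this is a toy model, not the series' actual shrink_page_list() code), as:

```c
#include <assert.h>

/* Toy model of "migrate instead of discard": at the end of reclaim,
 * try demoting the page to a lower tier; only when there is no
 * demotion target does the normal discard happen.  The struct and
 * function names here are illustrative, not kernel APIs. */
enum page_fate { PAGE_DEMOTED, PAGE_DISCARDED };

struct page_model {
	int node;
};

static int demotion_target(int node)
{
	/* node 0 (DRAM) demotes to node 1 (PMEM); node 1 is terminal */
	return node == 0 ? 1 : -1;
}

static enum page_fate reclaim_page(struct page_model *page)
{
	int target = demotion_target(page->node);

	if (target >= 0) {
		page->node = target;	/* "migrate" instead of freeing */
		return PAGE_DEMOTED;
	}
	return PAGE_DISCARDED;		/* terminal node: normal discard */
}
```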
> > > > The memory cgroup part of the story is missing here. Since PMEM is
> > > > treated as slow DRAM, shouldn't its usage be accounted to the
> > > > corresponding memcg's memory/memsw counters and the migration should
> > > > not happen for memcg limit reclaim? Otherwise some jobs can hog the
> > > > whole PMEM.
> > >
> > > My expectation (and I haven't confirmed this) is that any memory use
> > > is accounted to the owning cgroup, whether it is DRAM or PMEM.  memcg
> > > limit reclaim and global reclaim both end up doing migrations and
> > > neither should have a net effect on the counters.
> >
> > Yes, your expectation is correct. As long as PMEM is a NUMA node, it
> > is treated as regular memory by memcg. But I don't think memcg limit
> > reclaim should do migration: limit reclaim is meant to reduce memory
> > usage, and migration doesn't reduce usage, it just moves memory from
> > one node to another.
> >
> > In my implementation, I just skip migration for memcg limit reclaim,
> > please see: https://lore.kernel.org/linux-mm/1560468577-101178-7-git-send-email-yang.shi@linux.alibaba.com/
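
That decision could be modeled, with purely illustrative names (the actual patch works through the kernel's scan_control, not this toy struct), roughly as:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the choice above: demotion is only worthwhile for
 * global reclaim.  Under memcg limit reclaim, migrating a page from
 * DRAM to PMEM leaves the cgroup's charged usage unchanged, so it is
 * skipped.  Struct and function names are hypothetical. */
struct reclaim_ctx {
	bool memcg_limit_reclaim;	/* reclaim triggered by a memcg limit */
	bool demotion_target_available;	/* a lower-tier (PMEM) node exists */
};

static bool should_demote(const struct reclaim_ctx *ctx)
{
	if (ctx->memcg_limit_reclaim)
		return false;	/* demotion wouldn't reduce the memcg's usage */
	return ctx->demotion_target_available;
}
```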
> >
> > >
> > > There is certainly a problem here because DRAM is a more valuable
> > > resource vs. PMEM, and memcg accounts for them as if they were equally
> > > valuable.  I really want to see memcg account for this cost discrepancy
> > > at some point, but I'm not quite sure what form it would take.  Any
> > > feedback from you heavy memcg users out there would be much appreciated.
> >
> > We did have some requests to control the ratio between DRAM and PMEM,
> > as I mentioned at LSF/MM. Mel Gorman suggested making memcg account
> > for DRAM and PMEM separately, or something similar.
> >
>
> Can you please describe how you plan to use this ratio? Are
> applications supposed to use this ratio, or will admins be adjusting
> it? Also, should it be dynamically updated based on the workload,
> i.e. as the working set or hot pages grow we want more DRAM, and as
> cold pages grow we want more PMEM? Basically I am trying to see if we
> have something like smart auto-NUMA balancing to fulfill your
> use-case.

We thought it should be controlled by admins and be transparent to the
end users. The ratio is fixed, but memory can be moved between DRAM and
PMEM dynamically, as long as the ratio isn't exceeded, so that we can
keep warmer data in DRAM and colder data in PMEM.

I talked about this at LSF/MM; please check this out:
https://lwn.net/Articles/787418/
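
As a toy illustration of that policy (a simple percentage cap with hypothetical names; this is not any real interface): a page may only be promoted into DRAM while the job's DRAM share stays at or under the admin-set ratio.

```c
#include <assert.h>
#include <stdbool.h>

/* may_promote: can one more of this job's pages move from PMEM to
 * DRAM without exceeding the admin-configured DRAM percentage?
 * Promotion moves a page between tiers, so the total is unchanged. */
static bool may_promote(unsigned long dram_pages, unsigned long pmem_pages,
			unsigned int dram_percent)
{
	unsigned long long total = dram_pages + pmem_pages;

	if (pmem_pages == 0)
		return false;	/* nothing to promote */
	return (dram_pages + 1) * 100ULL <= total * dram_percent;
}
```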

>
> > >
> > > > Also what happens when PMEM is full? Can the memory migrated to PMEM
> > > > be reclaimed (or discarded)?
> > >
> > > Yep.  The "migration path" can be as long as you want, but once the data
> > > hits a "terminal node" it will stop getting migrated and normal discard
> > > at the end of reclaim happens.
> >
> > I recall I had a hallway conversation with Keith about this at
> > LSF/MM. We all agreed there should not be a cycle. But, IMHO, I don't
> > think exporting the migration path to userspace (or letting users
> > define the migration path) and having multiple migration stops are
> > good ideas in general.
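
The "no cycle" property above can be checked mechanically. A minimal sketch (names are hypothetical, modeled loosely on the demotion-map idea in the patches): walk the chain from each node and require that it reaches a terminal node within a bounded number of hops.

```c
#include <assert.h>

#define NUMA_NO_NODE	(-1)
#define NR_NODES	4

/* Illustrative demotion map: DRAM nodes 0 and 1 demote to PMEM node 2;
 * nodes 2 and 3 are terminal (pages reclaimed there are discarded
 * normally).  This table is an example, not the series' actual data. */
static const int node_demotion[NR_NODES] = { 2, 2, NUMA_NO_NODE, NUMA_NO_NODE };

static int next_demotion_node(int node)
{
	return node_demotion[node];
}

/* Returns 1 if the demotion path starting at @start terminates, i.e.
 * reaches NUMA_NO_NODE within NR_NODES hops (so it has no cycle). */
static int demotion_path_terminates(int start)
{
	int n = start;

	for (int hops = 0; hops < NR_NODES; hops++) {
		n = next_demotion_node(n);
		if (n == NUMA_NO_NODE)
			return 1;
	}
	return 0;
}
```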
> >
> > >

Thread overview: 30+ messages
2019-10-16 22:11 [PATCH 0/4] [RFC] Migrate Pages in lieu of discard Dave Hansen
2019-10-16 22:11 ` [PATCH 1/4] node: Define and export memory migration path Dave Hansen
2019-10-17 11:12   ` Kirill A. Shutemov
2019-10-17 11:44     ` Kirill A. Shutemov
2019-10-16 22:11 ` [PATCH 2/4] mm/migrate: Defer allocating new page until needed Dave Hansen
2019-10-17 11:27   ` Kirill A. Shutemov
2019-10-16 22:11 ` [PATCH 3/4] mm/vmscan: Attempt to migrate page in lieu of discard Dave Hansen
2019-10-17 17:30   ` Yang Shi
2019-10-18 18:15     ` Dave Hansen
2019-10-18 21:02       ` Yang Shi
2019-10-16 22:11 ` [PATCH 4/4] mm/vmscan: Consider anonymous pages without swap Dave Hansen
2019-10-17  3:45 ` [PATCH 0/4] [RFC] Migrate Pages in lieu of discard Shakeel Butt
2019-10-17 14:26   ` Dave Hansen
2019-10-17 16:58     ` Shakeel Butt
2019-10-17 20:51       ` Dave Hansen
2019-10-17 17:20     ` Yang Shi
2019-10-17 21:05       ` Dave Hansen
2019-10-17 22:58       ` Shakeel Butt
2019-10-18 21:44         ` Yang Shi [this message]
2019-10-17 16:01 ` Suleiman Souhlal
2019-10-17 16:32   ` Dave Hansen
2019-10-17 16:39     ` Shakeel Butt
2019-10-18  8:11     ` Suleiman Souhlal
2019-10-18 15:10       ` Dave Hansen
2019-10-18 15:39         ` Suleiman Souhlal
2019-10-18  7:44 ` Michal Hocko
2019-10-18 14:54   ` Dave Hansen
2019-10-18 21:39     ` Yang Shi
2019-10-18 21:55       ` Dan Williams
2019-10-22 13:49     ` Michal Hocko
