From: Mina Almasry <almasrymina@google.com>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	 Michal Hocko <mhocko@kernel.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	 Shakeel Butt <shakeelb@google.com>,
	Muchun Song <songmuchun@bytedance.com>,
	 Huang Ying <ying.huang@intel.com>,
	Yang Shi <yang.shi@linux.alibaba.com>,
	 Yosry Ahmed <yosryahmed@google.com>,
	weixugc@google.com, fvdl@google.com,
	 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v2] [mm-unstable] mm: Fix memcg reclaim on memory tiered systems
Date: Wed, 7 Dec 2022 14:14:49 -0800	[thread overview]
Message-ID: <CAHS8izOUfBhctUw7Vx4rCgx0bfRETDx_taKRuoUx14jG8vzZ3w@mail.gmail.com> (raw)
In-Reply-To: <87k033eiwj.fsf@linux.ibm.com>

On Wed, Dec 7, 2022 at 12:07 AM Aneesh Kumar K.V
<aneesh.kumar@linux.ibm.com> wrote:
>
> Mina Almasry <almasrymina@google.com> writes:
>
> > commit 3f1509c57b1b ("Revert "mm/vmscan: never demote for memcg
> > reclaim"") enabled demotion in memcg reclaim, which is the right thing
> > to do, but introduced a regression in the behavior of
> > try_to_free_mem_cgroup_pages().
> >
> > The callers of try_to_free_mem_cgroup_pages() expect it to attempt to
> > reclaim - not demote - nr_pages from the cgroup. I.e. the memory usage
> > of the cgroup should reduce by nr_pages. The callers expect
> > try_to_free_mem_cgroup_pages() to also return the number of pages
> > reclaimed, not demoted.
> >
> > However, try_to_free_mem_cgroup_pages() actually unconditionally counts
> > demoted pages as reclaimed pages. So in practice when it is called it will
> > often demote nr_pages and return the number of demoted pages to the caller.
> > Demoted pages don't lower the memcg usage as the caller requested.
> >
> > I suspect various things work suboptimally on memory tiered systems or
> > don't work at all due to this:
> >
> > - memory.high enforcement likely doesn't work (it just demotes nr_pages
> >   instead of lowering the memcg usage by nr_pages).
> > - try_charge_memcg() will keep retrying the charge while
> >   try_to_free_mem_cgroup_pages() is just demoting pages and not actually
> >   making any room for the charge.
> > - memory.reclaim has a wonky interface. It advertises to the user that it
> >   reclaims the provided amount, but it will actually demote that amount.
> >
> > There may be more effects to this issue.
> >
> > To fix these issues, I propose that shrink_folio_list() only count pages
> > demoted from inside sc->nodemask to outside sc->nodemask as
> > 'reclaimed'.
> >
> > For callers such as reclaim_high() or try_charge_memcg() that set
> > sc->nodemask to NULL, try_to_free_mem_cgroup_pages() will try to
> > actually reclaim nr_pages and return the number of pages reclaimed. No
> > demoted pages would count towards the nr_pages requirement.
> >
> > For callers such as memory_reclaim() that set sc->nodemask,
> > try_to_free_mem_cgroup_pages() will free nr_pages from that nodemask
> > with either demotion or reclaim.
> >
> > Tested this change using the memory.reclaim interface. With this change,
> >
> >       echo "1m" > memory.reclaim
> >
> > will free 1m of memory from the cgroup regardless of any demotions
> > happening inside.
> >
> >       echo "1m nodes=0" > memory.reclaim
> >
> > will free 1m of node 0's memory, by demotion if a demotion target is
> > available, and by reclaim if no demotion target is available.
> >
> > Signed-off-by: Mina Almasry <almasrymina@google.com>
> >
> > ---
> >
> > This is developed on top of mm-unstable largely to test with memory.reclaim
> > nodes= arg and ensure the fix is compatible with that.
> >
> > v2:
> > - Shortened the commit message a bit.
> > - Fixed issue when demotion falls back to other allowed target nodes returned by
> >   node_get_allowed_targets() as Wei suggested.
> >
> > Cc: weixugc@google.com
> > ---
> >  include/linux/memory-tiers.h |  7 +++++--
> >  mm/memory-tiers.c            | 10 +++++++++-
> >  mm/vmscan.c                  | 20 +++++++++++++++++---
> >  3 files changed, 31 insertions(+), 6 deletions(-)
> >
> > diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
> > index fc9647b1b4f9..f3f359760fd0 100644
> > --- a/include/linux/memory-tiers.h
> > +++ b/include/linux/memory-tiers.h
> > @@ -38,7 +38,8 @@ void init_node_memory_type(int node, struct memory_dev_type *default_type);
> >  void clear_node_memory_type(int node, struct memory_dev_type *memtype);
> >  #ifdef CONFIG_MIGRATION
> >  int next_demotion_node(int node);
> > -void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets);
> > +void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets,
> > +                           nodemask_t *demote_from_targets);
> >  bool node_is_toptier(int node);
> >  #else
> >  static inline int next_demotion_node(int node)
> > @@ -46,7 +47,9 @@ static inline int next_demotion_node(int node)
> >       return NUMA_NO_NODE;
> >  }
> >
> > -static inline void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
> > +static inline void node_get_allowed_targets(pg_data_t *pgdat,
> > +                                         nodemask_t *targets,
> > +                                         nodemask_t *demote_from_targets)
> >  {
> >       *targets = NODE_MASK_NONE;
> >  }
> > diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
> > index c734658c6242..7f8f0b5de2b3 100644
> > --- a/mm/memory-tiers.c
> > +++ b/mm/memory-tiers.c
> > @@ -264,7 +264,8 @@ bool node_is_toptier(int node)
> >       return toptier;
> >  }
> >
> > -void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
> > +void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets,
> > +                           nodemask_t *demote_from_targets)
> >  {
> >       struct memory_tier *memtier;
> >
> > @@ -280,6 +281,13 @@ void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
> >       else
> >               *targets = NODE_MASK_NONE;
> >       rcu_read_unlock();
> > +
> > +     /*
> > +      * Exclude the demote_from_targets from the allowed targets if we're
> > +      * trying to demote from a specific set of nodes.
> > +      */
> > +     if (demote_from_targets)
> > +             nodes_andnot(*targets, *targets, *demote_from_targets);
> >  }
>
> Will this cause demotion to stop working when we have a memory policy like
> MPOL_BIND with a nodemask that includes demotion targets?
>

Hi Aneesh,

You may want to review v3 of this patch, which removed this bit:
https://lore.kernel.org/linux-mm/202212070124.VxwbfKCK-lkp@intel.com/T/#t

To answer your question, though: yes, I think it will disable demotion
between the MPOL_BIND nodes. That may be another reason not to do this
(it is already removed in v3).
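
For illustration only (a quick sketch, not code from the patch): assume a
hypothetical two-node system where node 0 is toptier, node 1 is its demotion
target, and the demote-from mask (what an MPOL_BIND-style nodemask would end
up feeding into sc->nodemask) covers both nodes:

    /*
     * Hypothetical setup: node 0 is toptier, node 1 is its demotion
     * target, and the demote-from mask spans both nodes.
     */
    nodemask_t scan_nodes = NODE_MASK_NONE;
    nodemask_t allowed = NODE_MASK_NONE;

    node_set(0, scan_nodes);
    node_set(1, scan_nodes);

    /* The v2 change subtracts the demote-from mask from the targets. */
    node_get_allowed_targets(NODE_DATA(0), &allowed, &scan_nodes);

    /*
     * allowed starts out as {1} (the lower tier), but the final
     * nodes_andnot(*targets, *targets, *demote_from_targets) clears
     * node 1 as well, so allowed ends up empty and demotion from
     * node 0 is skipped.
     */

With the exclusion dropped in v3, allowed would stay {1}, so demotion between
the bound nodes would still be possible.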

>
> >
> >  /**
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 2b42ac9ad755..97ca0445b5dc 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1590,7 +1590,8 @@ static struct page *alloc_demote_page(struct page *page, unsigned long private)
> >   * Folios which are not demoted are left on @demote_folios.
> >   */
> >  static unsigned int demote_folio_list(struct list_head *demote_folios,
> > -                                  struct pglist_data *pgdat)
> > +                                   struct pglist_data *pgdat,
> > +                                   nodemask_t *demote_from_nodemask)
> >  {
> >       int target_nid = next_demotion_node(pgdat->node_id);
> >       unsigned int nr_succeeded;
> > @@ -1614,7 +1615,7 @@ static unsigned int demote_folio_list(struct list_head *demote_folios,
> >       if (target_nid == NUMA_NO_NODE)
> >               return 0;
> >
> > -     node_get_allowed_targets(pgdat, &allowed_mask);
> > +     node_get_allowed_targets(pgdat, &allowed_mask, demote_from_nodemask);
> >
> >       /* Demotion ignores all cpuset and mempolicy settings */
> >       migrate_pages(demote_folios, alloc_demote_page, NULL,
> > @@ -1653,6 +1654,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >       LIST_HEAD(free_folios);
> >       LIST_HEAD(demote_folios);
> >       unsigned int nr_reclaimed = 0;
> > +     unsigned int nr_demoted = 0;
> >       unsigned int pgactivate = 0;
> >       bool do_demote_pass;
> >       struct swap_iocb *plug = NULL;
> > @@ -2085,7 +2087,19 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >       /* 'folio_list' is always empty here */
> >
> >       /* Migrate folios selected for demotion */
> > -     nr_reclaimed += demote_folio_list(&demote_folios, pgdat);
> > +     nr_demoted = demote_folio_list(&demote_folios, pgdat, sc->nodemask);
> > +
> > +     /*
> > +      * Only count demoted folios as reclaimed if the caller has requested
> > +      * demotion from a specific nodemask. In this case pages inside the
> > +      * nodemask have been demoted to outside the nodemask and we can count
> > +      * these pages as reclaimed. If no nodemask is passed, then the caller
> > +      * is requesting reclaim from all memory, which should not count
> > +      * demoted pages.
> > +      */
> > +     if (sc->nodemask)
> > +             nr_reclaimed += nr_demoted;
> > +
> >       /* Folios that could not be demoted are still in @demote_folios */
> >       if (!list_empty(&demote_folios)) {
> >               /* Folios which weren't demoted go back on @folio_list */
> > --
> > 2.39.0.rc0.267.gcb52ba06e7-goog

