From: Yang Shi <shy828301@gmail.com>
To: Shakeel Butt <shakeelb@google.com>
Cc: Muchun Song <songmuchun@bytedance.com>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Roman Gushchin <guro@fb.com>, Cgroups <cgroups@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	stable <stable@vger.kernel.org>
Subject: Re: [PATCH 2/2] mm: fix numa stats for thp migration
Date: Mon, 28 Dec 2020 09:31:45 -0800	[thread overview]
Message-ID: <CAHbLzkrR1VQLN8+i4S52F-6dJiTx7TExj+rMuMWqou7Ff7SkPA@mail.gmail.com> (raw)
In-Reply-To: <CALvZod5bH6gP=_Qo5d2wx=mpRxXDKGcoxwO3oXGPqe=HXx8ifA@mail.gmail.com>

On Sun, Dec 27, 2020 at 10:16 AM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Sun, Dec 27, 2020 at 10:14 AM Shakeel Butt <shakeelb@google.com> wrote:
> >
> > Currently the kernel does not correctly update the NUMA stats for
> > NR_FILE_PAGES and NR_SHMEM on THP migration. Fix that. For NR_FILE_DIRTY
> > and NR_ZONE_WRITE_PENDING there is no need to handle THP migration yet,
> > since the kernel does not have write support for file THP, but to be
> > more future proof this patch adds THP support for those stats as well.
> >
> > Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
> > Signed-off-by: Shakeel Butt <shakeelb@google.com>
> > Cc: <stable@vger.kernel.org>
> > ---
> >  mm/migrate.c | 23 ++++++++++++-----------
> >  1 file changed, 12 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 613794f6a433..ade163c6ecdf 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -402,6 +402,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> >         struct zone *oldzone, *newzone;
> >         int dirty;
> >         int expected_count = expected_page_refs(mapping, page) + extra_count;
> > +       int nr = thp_nr_pages(page);
> >
> >         if (!mapping) {
> >                 /* Anonymous page without mapping */
> > @@ -437,7 +438,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> >          */
> >         newpage->index = page->index;
> >         newpage->mapping = page->mapping;
> > -       page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
> > +       page_ref_add(newpage, nr); /* add cache reference */
> >         if (PageSwapBacked(page)) {
> >                 __SetPageSwapBacked(newpage);
> >                 if (PageSwapCache(page)) {
> > @@ -459,7 +460,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> >         if (PageTransHuge(page)) {
> >                 int i;
> >
> > -               for (i = 1; i < HPAGE_PMD_NR; i++) {
> > +               for (i = 1; i < nr; i++) {
> >                         xas_next(&xas);
> >                         xas_store(&xas, newpage);
> >                 }
> > @@ -470,7 +471,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> >          * to one less reference.
> >          * We know this isn't the last reference.
> >          */
> > -       page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
> > +       page_ref_unfreeze(page, expected_count - nr);
> >
> >         xas_unlock(&xas);
> >         /* Leave irq disabled to prevent preemption while updating stats */
> > @@ -493,17 +494,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
> >                 old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
> >                 new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
> >
> > -               __dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
> > -               __inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
> > +               __mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
> > +               __mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
> >                 if (PageSwapBacked(page) && !PageSwapCache(page)) {
> > -                       __dec_lruvec_state(old_lruvec, NR_SHMEM);
> > -                       __inc_lruvec_state(new_lruvec, NR_SHMEM);
> > +                       __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
> > +                       __mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
> >                 }
> >                 if (dirty && mapping_can_writeback(mapping)) {
> > -                       __dec_lruvec_state(old_lruvec, NR_FILE_DIRTY);
> > -                       __dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
> > -                       __inc_lruvec_state(new_lruvec, NR_FILE_DIRTY);
> > -                       __inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
> > +                       __mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
> > +                       __mod_zone_page_tate(oldzone, NR_ZONE_WRITE_PENDING, -nr);
>
> This should be __mod_zone_page_state(). I fixed locally but sent the
> older patch by mistake.
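
For reference, the corrected call (assuming the same signature as the
newzone update two lines further down in the same hunk) would read:

	__mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr);

A fuller sketch of the whole stat-update block with that fix folded in
follows below the quoted diff.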

Acked-by: Yang Shi <shy828301@gmail.com>

>
> > +                       __mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
> > +                       __mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
> >                 }
> >         }
> >         local_irq_enable();
> > --
> > 2.29.2.729.g45daf8777d-goog
> >
>
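
For reference, a minimal sketch of how the stat-update block in
migrate_page_move_mapping() reads once this patch plus the
s/page_tate/page_state/ fixup are applied. This is not the committed code,
just the hunk above consolidated; it uses only the helpers already visible
in the diff (thp_nr_pages(), __mod_lruvec_state(), __mod_zone_page_state()):

	/* 1 for a base page, HPAGE_PMD_NR for a PMD-mapped THP */
	int nr = thp_nr_pages(page);

	__mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
	__mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
	if (PageSwapBacked(page) && !PageSwapCache(page)) {
		/* shmem pages that are not currently in the swap cache */
		__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
		__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
	}
	if (dirty && mapping_can_writeback(mapping)) {
		__mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
		__mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr);
		__mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
		__mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
	}

Every counter now moves by the full compound-page count instead of by one,
which is what keeps the per-node NR_FILE_PAGES/NR_SHMEM accounting balanced
across a THP migration.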

Thread overview:
2020-12-27 18:13 [PATCH 1/2] mm: memcg: fix memcg file_dirty numa stat Shakeel Butt
2020-12-27 18:13 ` [PATCH 2/2] mm: fix numa stats for thp migration Shakeel Butt
2020-12-27 18:16   ` Shakeel Butt
2020-12-28 17:31     ` Yang Shi [this message]
2020-12-28 19:44   ` Roman Gushchin
2020-12-28  5:40 ` [External] [PATCH 1/2] mm: memcg: fix memcg file_dirty numa stat Muchun Song
2020-12-28 17:31 ` Yang Shi
2020-12-28 19:40 ` Roman Gushchin
