From: Yang Shi <shy828301@gmail.com>
To: Yu Zhao <yuzhao@google.com>
Cc: Linux MM <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	 Hugh Dickins <hughd@google.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	 Matthew Wilcox <willy@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>, Zi Yan <ziy@nvidia.com>,
	 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Shuang Zhai <zhais@google.com>
Subject: Re: [PATCH 2/3] mm: free zapped tail pages when splitting isolated thp
Date: Fri, 13 Aug 2021 19:34:11 -0700	[thread overview]
Message-ID: <CAHbLzkpsp51=zKTrw=P=1YbFq7MtcdgTvZ_ds4SwVMLRKqQtVQ@mail.gmail.com> (raw)
In-Reply-To: <CAOUHufZiWuvHPUBji_4OT0eP6C_tdQxGiQLipV=ApKo8ua=jjQ@mail.gmail.com>

On Fri, Aug 13, 2021 at 6:49 PM Yu Zhao <yuzhao@google.com> wrote:
>
> On Fri, Aug 13, 2021 at 6:30 PM Yang Shi <shy828301@gmail.com> wrote:
> >
> > On Fri, Aug 13, 2021 at 4:56 PM Yu Zhao <yuzhao@google.com> wrote:
> > >
> > > On Fri, Aug 13, 2021 at 5:24 PM Yang Shi <shy828301@gmail.com> wrote:
> > > >
> > > > On Wed, Aug 11, 2021 at 4:12 PM Yu Zhao <yuzhao@google.com> wrote:
> > > > >
> > > > > On Wed, Aug 11, 2021 at 4:25 PM Yang Shi <shy828301@gmail.com> wrote:
> > > > > >
> > > > > > On Sun, Aug 8, 2021 at 10:49 AM Yu Zhao <yuzhao@google.com> wrote:
> > > > > > >
> > > > > > > On Wed, Aug 4, 2021 at 6:13 PM Yang Shi <shy828301@gmail.com> wrote:
> > > > > > > >
> > > > > > > > On Fri, Jul 30, 2021 at 11:39 PM Yu Zhao <yuzhao@google.com> wrote:
> > > > > > > > >
> > > > > > > > > If a tail page has only two references left, one inherited from the
> > > > > > > > > isolation of its head and the other from lru_add_page_tail() which we
> > > > > > > > > are about to drop, it means this tail page was concurrently zapped.
> > > > > > > > > Then we can safely free it and save page reclaim or migration the
> > > > > > > > > trouble of trying it.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > > > > > > > > Tested-by: Shuang Zhai <zhais@google.com>
> > > > > > > > > ---
> > > > > > > > >  include/linux/vm_event_item.h |  1 +
> > > > > > > > >  mm/huge_memory.c              | 28 ++++++++++++++++++++++++++++
> > > > > > > > >  mm/vmstat.c                   |  1 +
> > > > > > > > >  3 files changed, 30 insertions(+)
> > > > > > > > >
> > > > > > > > > diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> > > > > > > > > index ae0dd1948c2b..829eeac84094 100644
> > > > > > > > > --- a/include/linux/vm_event_item.h
> > > > > > > > > +++ b/include/linux/vm_event_item.h
> > > > > > > > > @@ -99,6 +99,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
> > > > > > > > >  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> > > > > > > > >                 THP_SPLIT_PUD,
> > > > > > > > >  #endif
> > > > > > > > > +               THP_SPLIT_FREE,
> > > > > > > > >                 THP_ZERO_PAGE_ALLOC,
> > > > > > > > >                 THP_ZERO_PAGE_ALLOC_FAILED,
> > > > > > > > >                 THP_SWPOUT,
> > > > > > > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > > > > > > index d8b655856e79..5120478bca41 100644
> > > > > > > > > --- a/mm/huge_memory.c
> > > > > > > > > +++ b/mm/huge_memory.c
> > > > > > > > > @@ -2432,6 +2432,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > > > > > > > >         struct address_space *swap_cache = NULL;
> > > > > > > > >         unsigned long offset = 0;
> > > > > > > > >         unsigned int nr = thp_nr_pages(head);
> > > > > > > > > +       LIST_HEAD(pages_to_free);
> > > > > > > > > +       int nr_pages_to_free = 0;
> > > > > > > > >         int i;
> > > > > > > > >
> > > > > > > > >         VM_BUG_ON_PAGE(list && PageLRU(head), head);
> > > > > > > > > @@ -2506,6 +2508,25 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > > > > > > > >                         continue;
> > > > > > > > >                 unlock_page(subpage);
> > > > > > > > >
> > > > > > > > > +               /*
> > > > > > > > > +                * If a tail page has only two references left, one inherited
> > > > > > > > > +                * from the isolation of its head and the other from
> > > > > > > > > +                * lru_add_page_tail() which we are about to drop, it means this
> > > > > > > > > +                * tail page was concurrently zapped. Then we can safely free it
> > > > > > > > > +                * and save page reclaim or migration the trouble of trying it.
> > > > > > > > > +                */
> > > > > > > > > +               if (list && page_ref_freeze(subpage, 2)) {
> > > > > > > > > +                       VM_BUG_ON_PAGE(PageLRU(subpage), subpage);
> > > > > > > > > +                       VM_BUG_ON_PAGE(PageCompound(subpage), subpage);
> > > > > > > > > +                       VM_BUG_ON_PAGE(page_mapped(subpage), subpage);
> > > > > > > > > +
> > > > > > > > > +                       ClearPageActive(subpage);
> > > > > > > > > +                       ClearPageUnevictable(subpage);
> > > > > > > > > +                       list_move(&subpage->lru, &pages_to_free);
> > > > > > > > > +                       nr_pages_to_free++;
> > > > > > > > > +                       continue;
> > > > > > > > > +               }
> > > > > > > >
> > > > > > > > Yes, such a page could be freed instead of being swapped out. But I'm
> > > > > > > > wondering if we could have a simpler implementation. Since such pages
> > > > > > > > will be re-added to the page list, we should be able to check their
> > > > > > > > refcount in shrink_page_list(). If the refcount is 1, the reference
> > > > > > > > inc'ed by lru_add_page_tail() has already been put by the later
> > > > > > > > put_page(), so we know the page was freed under us because the only
> > > > > > > > remaining reference comes from isolation; we could just jump to "keep"
> > > > > > > > (the label in shrink_page_list()) and such a page will be freed later
> > > > > > > > by shrink_inactive_list().
> > > > > > > >
> > > > > > > > For MADV_PAGEOUT, I think we could add some logic to handle such pages
> > > > > > > > after shrink_page_list(), just like what shrink_inactive_list() does.
> > > > > > > >
> > > > > > > > Migration already handles refcount == 1 pages, so it should not need any change.
> > > > > > > >
> > > > > > > > Is this idea feasible?
> > > > > > >
> > > > > > > Yes, but then we would have to loop over the tail pages twice, here
> > > > > > > and in shrink_page_list(), right?
> > > > > >
> > > > > > I don't quite get what you mean by "loop over the tail pages twice".
> > > > > > Once the THP is isolated and then split, all the tail pages will be
> > > > > > put on the list (the local list for isolated pages); the reclaimer
> > > > > > then deals with the head page and continues to iterate the list to
> > > > > > deal with the tail pages. Your patch could free the tail pages
> > > > > > earlier, but it should not make much difference to free them a
> > > > > > little bit later IMHO.
> > > > >
> > > > > We are in a (the first) loop here. If we free the tail pages later,
> > > > > then we will need to loop over them again (the second).
> > > > >
> > > > > IOW,
> > > > > 1) __split_huge_page(): for each of the 511 tail pages (first loop).
> > > > > 2) shrink_page_list(): for each of the 511 tail pages (second loop).
> > > > >
> > > > > > > In addition, if we try to freeze the refcount of a page in
> > > > > > > shrink_page_list(), we can't be certain whether this page used to be
> > > > > > > a tail page, so we would have to test every page. If a page wasn't a
> > > > > > > tail page, its refcount is unlikely to have dropped unless there is a
> > > > > > > race, and this patch isn't really intended to optimize such a race.
> > > > > > > It's mainly for the next patch, i.e., we know there is a good chance
> > > > > > > to drop tail pages (~10% on our systems). Sounds reasonable? Thanks.
> > > > > >
> > > > > > I'm not sure what the main source of the partially mapped THPs in
> > > > > > your fleets is. But if most of them are generated by MADV_DONTNEED
> > > > > > (which is used by some userspace memory allocator libs), they should
> > > > > > be on the deferred split list too. Currently the deferred split
> > > > > > shrinker just shrinks those THPs (simply splits them and frees the
> > > > > > unmapped subpages) proportionally; we definitely could shrink them
> > > > > > more aggressively, for example by setting shrinker->seeks to 0. I'm
> > > > > > wondering if this would achieve a similar effect or not.
> > > > >
> > > > > Not partially mapped but internal fragmentation.
> > > > >
> > > > > IOW, some of the 4KB pages within a THP were never written into, which
> > > > > can be common depending on the implementations of userspace memory
> > > > > allocators.
> > > >
> > > > OK, this is actually what patch #3 does. IIUC patch #3 just doesn't
> > > > remap the "all zero" pages when splitting the THP. But those pages
> > > > still have a refcount from isolation, so they can't simply be freed
> > > > by put_page().
> > > >
> > > > Actually this makes me think my suggestion is better. It doesn't make
> > > > much sense to me to have page freeing logic (manipulating flags,
> > > > uncharging memcg, etc.) in the THP split path.
> > > >
> > > > There are already a couple of places that handle such cases:
> > > > - deferred split shrinker: the unmapped subpage is just freed by
> > > > put_page() since there is no extra refcount
> > > > - migration: check the page refcount, then free the page whose
> > > > refcount == 1
> > > >
> > > > Here you add a third case in the page reclaim path, so why not just
> > > > let page reclaim handle all the work of freeing the page?
> > >
> > > As I have explained previously:
> > >
> > > 1) We would have to loop over tail pages twice. Not much overhead but
> > > unnecessary.
> > > 2) We would have to try to freeze the refcount on _every_ page in
> > > shrink_page_list() -- shrink_page_list() takes all pages, not just
> > > the relevant ones (those that used to be tail pages). Attempting to
> > > freeze the refcount of an irrelevant page will likely fail. Again,
> > > not a significant overhead, but better to avoid.
> >
> > IIUC you don't need to freeze the refcount; such a page is not in the
> > swap cache and has no mapping. I suppose you could simply do:
>
> Well, not really. There are speculative page refcount increments and
> decrements. Specifically for what you just mentioned: those pages
> don't have owners anymore and GUP can't reach them, but anything that
> uses PFNs to get pages can still reach them, e.g., compaction. And
> here is another similar but simpler example:

Yes, all the paths that traverse pages via PFN could reach the page
and may inc/dec its refcount; some require the page to be on the LRU
(e.g. compaction), some don't (e.g. hwpoison). The page is off the LRU
here, so such a refcount race should be harmless.

>
> static struct page *page_idle_get_page(unsigned long pfn)
> {
>         struct page *page = pfn_to_online_page(pfn);
>
>         if (!page || !PageLRU(page) ||
>             !get_page_unless_zero(page))
>                 return NULL;
>
>         if (unlikely(!PageLRU(page))) {
>                 put_page(page);
>                 page = NULL;
>         }
>         return page;
> }
>
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 403a175a720f..031b98627a02 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1547,6 +1547,9 @@ static unsigned int shrink_page_list(struct list_head *page_list,
> >                  * Lazyfree page could be freed directly
> >                  */
> >                 if (PageAnon(page) && PageSwapBacked(page)) {
> > +                       if (page_count(page) == 1)
> > +                               goto locked;
> > +
> >                         if (!PageSwapCache(page)) {
> >                                 if (!(sc->gfp_mask & __GFP_IO))
> >                                         goto keep_locked;
> >
> > It is an unmapped anonymous page; nobody could see it other than the
> > hwpoison handler AFAICT.
>
> This claim is false but the code works (if we change locked to keep_locked).

Yeah, keep_locked. I meant keep_locked but typed locked.
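
Spelled out, the corrected hunk would look roughly like this (just an
illustrative sketch on top of the example diff above, not a tested
patch; the context lines are copied from that example):

		 * Lazyfree page could be freed directly
		 */
		if (PageAnon(page) && PageSwapBacked(page)) {
+			/*
+			 * The only reference left is the one from
+			 * isolation, i.e. the page was freed under us.
+			 * Leave it on the return list; the final
+			 * put_page_testzero() in move_pages_to_lru()
+			 * will free it.
+			 */
+			if (page_count(page) == 1)
+				goto keep_locked;
+
			if (!PageSwapCache(page)) {
				if (!(sc->gfp_mask & __GFP_IO))
					goto keep_locked;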

>
> When we are here, we have called
> 1) trylock_page()
> 2) page_check_references() -- _costly_
>
> We have to call
> 1) unlock_page()
> 2) lock lru -- _costly_
> 3) put_page_testzero() in move_pages_to_lru()
> 4) unlock lru
> before we reach mem_cgroup_uncharge_list() and free_unref_page_list().
>
> These 6 extra steps are unnecessary. If we want to do it properly in
> shrink_page_list(), we should try to freeze the refcount of each page
> before step 1, and if successful, add this page to the free_pages
> list. But again, the two points I mentioned earlier are still valid.
> We do save a few lines of code though.

You definitely could move the check earlier or freeze the refcount.
There are a couple of different ways to do it; that example code is
just for illustration.
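
For example, freezing the refcount at the top of the scan loop would
look something like this (again only a sketch to illustrate the idea,
mirroring what patch 2/3 does with page_ref_freeze(); the exact
placement and statistics handling are omitted, and free_pages here is
the existing local list that shrink_page_list() already passes to
mem_cgroup_uncharge_list() and free_unref_page_list()):

		page = lru_to_page(page_list);
		list_del(&page->lru);

+		/*
+		 * Only the reference from isolation is left, i.e. the
+		 * page was freed under us (e.g. a zapped tail page from
+		 * a THP split). Freeze the refcount and hand the page
+		 * straight to the local free list, skipping
+		 * trylock_page() and page_check_references().
+		 */
+		if (page_ref_freeze(page, 1)) {
+			ClearPageActive(page);
+			ClearPageUnevictable(page);
+			list_add(&page->lru, &free_pages);
+			continue;
+		}
+
		if (!trylock_page(page))
			goto keep;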

>
> > > I'm not against your idea. But I'd like to hear some clarification
> > > about the points above, that is, whether you think it's still a good
> > > idea to do what you suggested after taking these into account.
> >
> > I personally don't feel very comfortable having the extra page
> > freeing logic in THP split when we could leverage the page reclaim
> > code with acceptable overhead. And the migration code already does so.
>
> I understand. And I agree that what you suggested is better for
> readability. I'm just listing things we may want to consider while
> deciding which option is more favorable.
>
> > > > > > I really don't have any objection to freeing such pages; I'm just
> > > > > > wondering if we could have something simpler.
> > > > >
> > > > > Thanks.
> > > > >
> > > > > > > > > +
> > > > > > > > >                 /*
> > > > > > > > >                  * Subpages may be freed if there wasn't any mapping
> > > > > > > > >                  * like if add_to_swap() is running on a lru page that
> > > > > > > > > @@ -2515,6 +2536,13 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > > > > > > > >                  */
> > > > > > > > >                 put_page(subpage);
> > > > > > > > >         }
> > > > > > > > > +
> > > > > > > > > +       if (!nr_pages_to_free)
> > > > > > > > > +               return;
> > > > > > > > > +
> > > > > > > > > +       mem_cgroup_uncharge_list(&pages_to_free);
> > > > > > > > > +       free_unref_page_list(&pages_to_free);
> > > > > > > > > +       count_vm_events(THP_SPLIT_FREE, nr_pages_to_free);
> > > > > > > > >  }
> > > > > > > > >
> > > > > > > > >  int total_mapcount(struct page *page)
> > > > > > > > > diff --git a/mm/vmstat.c b/mm/vmstat.c
> > > > > > > > > index b0534e068166..f486e5d98d96 100644
> > > > > > > > > --- a/mm/vmstat.c
> > > > > > > > > +++ b/mm/vmstat.c
> > > > > > > > > @@ -1300,6 +1300,7 @@ const char * const vmstat_text[] = {
> > > > > > > > >  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> > > > > > > > >         "thp_split_pud",
> > > > > > > > >  #endif
> > > > > > > > > +       "thp_split_free",
> > > > > > > > >         "thp_zero_page_alloc",
> > > > > > > > >         "thp_zero_page_alloc_failed",
> > > > > > > > >         "thp_swpout",
> > > > > > > > > --
> > > > > > > > > 2.32.0.554.ge1b32706d8-goog
> > > > > > > > >


Thread overview: 22+ messages
2021-07-31  6:39 [PATCH 0/3] mm: optimize thp for reclaim and migration Yu Zhao
2021-07-31  6:39 ` [PATCH 1/3] mm: don't take lru lock when splitting isolated thp Yu Zhao
2021-07-31  6:39 ` [PATCH 2/3] mm: free zapped tail pages " Yu Zhao
2021-08-04 14:22   ` Kirill A. Shutemov
2021-08-08 17:28     ` Yu Zhao
2021-08-05  0:13   ` Yang Shi
2021-08-08 17:49     ` Yu Zhao
2021-08-11 22:25       ` Yang Shi
2021-08-11 23:12         ` Yu Zhao
2021-08-13 23:24           ` Yang Shi
2021-08-13 23:56             ` Yu Zhao
2021-08-14  0:30               ` Yang Shi
2021-08-14  1:49                 ` Yu Zhao
2021-08-14  2:34                   ` Yang Shi [this message]
2021-07-31  6:39 ` [PATCH 3/3] mm: don't remap clean subpages " Yu Zhao
2021-07-31  9:53   ` kernel test robot
2021-07-31 15:45   ` kernel test robot
2021-08-03 11:25   ` Matthew Wilcox
2021-08-03 11:36   ` Matthew Wilcox
2021-08-08 17:21     ` Yu Zhao
2021-08-04 14:27   ` Kirill A. Shutemov
2021-08-08 17:20     ` Yu Zhao
