From: Muchun Song <songmuchun@bytedance.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Oscar Salvador <osalvador@suse.de>,
	Linux Memory Management List <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Subject: Re: [External] Re: [PATCH] mm: hugetlb: fix a race between memory-failure/soft_offline and gather_surplus_pages
Date: Wed, 21 Apr 2021 16:15:00 +0800	[thread overview]
Message-ID: <CAMZfGtWh4tRiMrOTLvv5GHM1JUCt9b+UHf_DwLev32S=+iLW8g@mail.gmail.com> (raw)
In-Reply-To: <YH/cVoUCTKu/UkqB@dhcp22.suse.cz>

On Wed, Apr 21, 2021 at 4:03 PM Michal Hocko <mhocko@suse.com> wrote:
>
> [Cc Naoya]
>
> On Wed 21-04-21 14:02:59, Muchun Song wrote:
> > The possible bad scenario:
> >
> > CPU0:                           CPU1:
> >
> >                                 gather_surplus_pages()
> >                                   page = alloc_surplus_huge_page()
> > memory_failure_hugetlb()
> >   get_hwpoison_page(page)
> >     __get_hwpoison_page(page)
> >       get_page_unless_zero(page)
> >                                   zero = put_page_testzero(page)
> >                                   VM_BUG_ON_PAGE(!zero, page)
> >                                   enqueue_huge_page(h, page)
> >   put_page(page)
> >
> > The refcount can possibly be increased by the memory-failure or
> > soft_offline handlers; if that happens, we trigger the
> > VM_BUG_ON_PAGE and wrongly add a still-referenced page to the
> > hugetlb pool list.
>
> The hwpoison side of this looks really suspicious to me. It shouldn't
> really touch the reference count of hugetlb pages without being very
> careful (and having hugetlb_lock held). What would happen if the
> reference count was increased after the page had been enqueued into the
> pool? This can just blow up later.

If the page has been enqueued into the pool, it can be allocated to
another user. The page reference count is then reset to 1 in
dequeue_huge_page_node_exact(), so the pending put_page() from the
memory-failure side frees the page even though the new user still
holds it. That is clearly wrong.
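
To make that concrete, below is a hypothetical userspace sketch of
the refcount underflow. The struct and helpers are simplified
stand-ins for the real mm/hugetlb.c code, not the kernel
implementation:

#include <stdio.h>

/* Simplified stand-ins for the kernel structures/helpers. */
struct page { int refcount; };

static void free_the_page(struct page *page)
{
	(void)page;	/* unused in this sketch */
	printf("page freed while another user still holds it!\n");
}

/*
 * dequeue_huge_page_node_exact() re-initializes the refcount for the
 * new owner, silently discarding the extra reference the hwpoison
 * side took via get_page_unless_zero().
 */
static void dequeue_huge_page_node_exact(struct page *page)
{
	page->refcount = 1;	/* set_page_refcounted() */
}

static void put_page(struct page *page)
{
	if (--page->refcount == 0)
		free_the_page(page);
}

int main(void)
{
	struct page page = { .refcount = 1 };	/* alloc_surplus_huge_page() */

	page.refcount++;	/* CPU0: get_page_unless_zero() -> 2 */
	page.refcount--;	/* CPU1: put_page_testzero() leaves 1, not 0,
				 * so VM_BUG_ON_PAGE fires; the page is
				 * enqueued anyway */
	dequeue_huge_page_node_exact(&page);	/* new user: refcount = 1 */
	put_page(&page);	/* CPU0: drops to 0, frees under the new user */
	return 0;
}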

>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  mm/hugetlb.c | 11 ++++-------
> >  1 file changed, 4 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 3476aa06da70..6c96332db34b 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2145,17 +2145,14 @@ static int gather_surplus_pages(struct hstate *h, long delta)
> >
> >       /* Free the needed pages to the hugetlb pool */
> >       list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
> > -             int zeroed;
> > -
> >               if ((--needed) < 0)
> >                       break;
> >               /*
> > -              * This page is now managed by the hugetlb allocator and has
> > -              * no users -- drop the buddy allocator's reference.
> > +              * The refcount can possibly be increased by memory-failure or
> > +              * soft_offline handlers.
> >                */
> > -             zeroed = put_page_testzero(page);
> > -             VM_BUG_ON_PAGE(!zeroed, page);
> > -             enqueue_huge_page(h, page);
> > +             if (likely(put_page_testzero(page)))
> > +                     enqueue_huge_page(h, page);
> >       }
> >  free:
> >       spin_unlock_irq(&hugetlb_lock);
> > --
> > 2.11.0
> >
>
> --
> Michal Hocko
> SUSE Labs
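
For completeness, here is my simplified model of the
put_page_testzero() semantics the fix relies on (a paraphrase, not
the kernel implementation):

#include <stdbool.h>
#include <stdatomic.h>

/* Hypothetical model: drop one reference and report whether it was
 * the last one. */
static bool put_page_testzero_model(atomic_int *refcount)
{
	return atomic_fetch_sub(refcount, 1) == 1;
}

With the fix, gather_surplus_pages() only enqueues a page when it
dropped the last reference. Otherwise the memory-failure/soft_offline
side still holds one, and, as I understand it, its final put_page()
routes the page through free_huge_page(), which disposes of it
correctly under hugetlb_lock.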

Thread overview: 24+ messages
2021-04-21  6:02 [PATCH] mm: hugetlb: fix a race between memory-failure/soft_offline and gather_surplus_pages Muchun Song
2021-04-21  8:03 ` Michal Hocko
2021-04-21  8:15   ` Muchun Song [this message]
2021-04-21  8:21     ` Oscar Salvador
2021-04-21  8:41       ` Muchun Song
2021-04-21  8:49         ` Oscar Salvador
2021-04-21  8:58           ` Muchun Song
2021-04-21  8:43       ` Michal Hocko
2021-04-21  8:25     ` Michal Hocko
2021-04-21  8:33   ` HORIGUCHI NAOYA(堀口 直也)
2021-04-21  9:02     ` [External] " Muchun Song
2021-04-21 18:03     ` Mike Kravetz
2021-04-22  8:27       ` HORIGUCHI NAOYA(堀口 直也)
2021-04-23  8:01         ` HORIGUCHI NAOYA(堀口 直也)
2021-04-28  7:46           ` [PATCH] mm,hwpoison: fix race with compound page allocation Naoya Horiguchi
2021-04-28  8:23             ` Oscar Salvador
2021-04-28  9:18               ` HORIGUCHI NAOYA(堀口 直也)
2021-05-06  1:31                 ` [PATCH v2] " Naoya Horiguchi
2021-05-06  8:51                   ` Oscar Salvador
2021-05-07  4:17                     ` HORIGUCHI NAOYA(堀口 直也)
