From: Muchun Song <songmuchun@bytedance.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
Andi Kleen <ak@linux.intel.com>,
mhocko@suse.cz, Linux Memory Management List <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [External] Re: [PATCH 4/6] mm: hugetlb: add return -EAGAIN for dissolve_free_huge_page
Date: Tue, 5 Jan 2021 11:14:46 +0800 [thread overview]
Message-ID: <CAMZfGtXZqbNwb2k5sq29gXSBMO3sVNaATiJnPWSggoAG5mZMqA@mail.gmail.com> (raw)
In-Reply-To: <e043e137-5ca7-d478-248c-9defcecc6ac7@oracle.com>
On Tue, Jan 5, 2021 at 9:33 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 1/3/21 10:58 PM, Muchun Song wrote:
> > When dissolve_free_huge_page() races with __free_huge_page(), we can
> > retry, because the race window is small.
>
> In general, I agree that the race window is small. However, worst case
> would be if the freeing of the page is put on a work queue. Is it acceptable
> to keep retrying in that case? In addition, the 'Free some vmemmap' series
> may slow the free_huge_page path even more.
I also considered the 'Free some vmemmap' series case. In the next
version of this series, I will flush the work before
dissolve_free_huge_page returns when this race is encountered, so the
retry should be acceptable. Right?
Thanks.
>
> In these worst case scenarios, I am not sure we want to just spin retrying.
>
> --
> Mike Kravetz
>
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> > mm/hugetlb.c | 16 +++++++++++-----
> > 1 file changed, 11 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 72608008f8b4..db00ae375d2a 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1763,10 +1763,11 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> > * nothing for in-use hugepages and non-hugepages.
> > * This function returns values like below:
> > *
> > - * -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
> > - * (allocated or reserved.)
> > - * 0: successfully dissolved free hugepages or the page is not a
> > - * hugepage (considered as already dissolved)
> > + * -EAGAIN: race with __free_huge_page() and can do a retry
> > + * -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
> > + * (allocated or reserved.)
> > + * 0: successfully dissolved free hugepages or the page is not a
> > + * hugepage (considered as already dissolved)
> > */
> > int dissolve_free_huge_page(struct page *page)
> > {
> > @@ -1815,8 +1816,10 @@ int dissolve_free_huge_page(struct page *page)
> > * We should make sure that the page is already on the free list
> > * when it is dissolved.
> > */
> > - if (unlikely(!PageHugeFreed(head)))
> > + if (unlikely(!PageHugeFreed(head))) {
> > + rc = -EAGAIN;
> > goto out;
> > + }
> >
> > /*
> > * Move PageHWPoison flag from head page to the raw error page,
> > @@ -1857,7 +1860,10 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> >
> > for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
> > page = pfn_to_page(pfn);
> > +retry:
> > rc = dissolve_free_huge_page(page);
> > + if (rc == -EAGAIN)
> > + goto retry;
> > if (rc)
> > break;
> > }
> >
Thread overview: 31+ messages
2021-01-04 6:58 [PATCH 1/6] mm: migrate: do not migrate HugeTLB page whose refcount is one Muchun Song
2021-01-04 6:58 ` [PATCH 2/6] hugetlbfs: fix cannot migrate the fallocated HugeTLB page Muchun Song
2021-01-04 22:38 ` Mike Kravetz
2021-01-05 2:44 ` [External] " Muchun Song
2021-01-05 22:27 ` Mike Kravetz
2021-01-06 2:57 ` Muchun Song
2021-01-04 6:58 ` [PATCH 3/6] mm: hugetlb: fix a race between freeing and dissolving the page Muchun Song
2021-01-05 0:00 ` Mike Kravetz
2021-01-05 2:55 ` [External] " Muchun Song
2021-01-05 23:22 ` Mike Kravetz
2021-01-06 6:05 ` Muchun Song
2021-01-05 6:12 ` Muchun Song
2021-01-04 6:58 ` [PATCH 4/6] mm: hugetlb: add return -EAGAIN for dissolve_free_huge_page Muchun Song
2021-01-05 1:32 ` Mike Kravetz
2021-01-05 3:14 ` Muchun Song [this message]
2021-01-05 3:46 ` [External] " Muchun Song
2021-01-06 0:07 ` Mike Kravetz
2021-01-05 6:37 ` HORIGUCHI NAOYA(堀口 直也)
2021-01-05 7:10 ` [External] " Muchun Song
2021-01-05 7:30 ` HORIGUCHI NAOYA(堀口 直也)
2021-01-04 6:58 ` [PATCH 5/6] mm: hugetlb: fix a race between isolating and freeing page Muchun Song
2021-01-05 1:42 ` Mike Kravetz
2021-01-04 6:58 ` [PATCH 6/6] mm: hugetlb: remove VM_BUG_ON_PAGE from page_huge_active Muchun Song
2021-01-05 1:50 ` Mike Kravetz
2021-01-04 22:17 ` [PATCH 1/6] mm: migrate: do not migrate HugeTLB page whose refcount is one Mike Kravetz
2021-01-05 16:58 ` David Hildenbrand
2021-01-05 18:04 ` Yang Shi
2021-01-05 18:05 ` David Hildenbrand
2021-01-05 18:04 ` Yang Shi
2021-01-06 16:11 ` Michal Hocko
2021-01-06 16:12 ` Michal Hocko