From: Minchan Kim <minchan@kernel.org>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: John Hubbard <jhubbard@nvidia.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	syzbot <syzbot+acf65ca584991f3cc447@syzkaller.appspotmail.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	llvm@lists.linux.dev, nathan@kernel.org, ndesaulniers@google.com,
	syzkaller-bugs@googlegroups.com, trix@redhat.com,
	Matthew Wilcox <willy@infradead.org>,
	Stephen Rothwell <sfr@canb.auug.org.au>,
	David Hildenbrand <david@redhat.com>
Subject: Re: [syzbot] WARNING in follow_hugetlb_page
Date: Fri, 20 May 2022 15:19:32 -0700	[thread overview]
Message-ID: <YogT9AwVclxAnyvs@google.com> (raw)
In-Reply-To: <6d281052-485c-5e17-4f1c-ef5689831450@oracle.com>

On Mon, May 16, 2022 at 08:37:01PM -0700, Mike Kravetz wrote:

< snip >

> > I need to look at this a little more closely, it is making me wonder
> > whether the is_pinnable_page() check is a problem in this path. The
> > comment in try_grab_folio() indicates that the early return is a hack
> > (it assumes that the caller is in the gup fast path), and maybe the hack
> > is just wrong here--I think we're actually on the slow gup path. Not
> > good.
> > 
> > Mike, any thoughts here?
> > 
> 
> Do you know why try_grab_compound_page(now try_grab_folio) checks for
> pinnable when try_grab_page does not?
> 
> Then I guess the next question is 'Should we allow pinning of hugetlb pages
> in these areas?'.  My first thought would be no.  But, recall it was 'allowed'
> until that commit which changed try_grab_page to try_grab_compound_page.

The reason we don't allow longterm pinning in a CMA area is to improve
the success ratio of big contiguous memory allocations when someone
claims that memory space. Thus, any pages in a CMA area that are mapped
into userspace shouldn't be longterm pinned; otherwise, cma_alloc will
fail due to migration failure.
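
For reference, the pinnable check at issue is roughly the following (a
simplified sketch from memory, under a hypothetical helper name; not
the exact is_pinnable_page() code in any particular tree):

static inline bool page_is_longterm_pinnable(struct page *page)
{
	/* pages in ZONE_MOVABLE or in CMA pageblocks must stay migratable */
	return !(zone_idx(page_zone(page)) == ZONE_MOVABLE ||
		 is_migrate_cma_page(page));
}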

In the hugetlb case (I might be missing something..), the CMA memory was
already claimed by hugeTLB and the big contiguous memory was mapped into
userspace, so there is no reason to prevent longterm pinning: hugeTLB
will never reclaim that CMA memory until the user releases the memory
and hugeTLB frees the range using cma_release.
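
That lifecycle, roughly sketched (simplified from the CMA path in
mm/hugetlb.c; the wrapper names here are mine, only cma_alloc/cma_release
and hugetlb_cma are real):

static struct page *hugetlb_claim_cma(int nid, unsigned int order)
{
	/* hugeTLB claims the whole contiguous range from its CMA area up front */
	return cma_alloc(hugetlb_cma[nid], 1UL << order, order, true);
}

static void hugetlb_release_cma(int nid, struct page *page, unsigned int order)
{
	/* the range goes back to CMA only once the gigantic page is freed */
	cma_release(hugetlb_cma[nid], page, 1UL << order);
}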

> In the 'common' case of compaction, we do not attempt to migrate/move hugetlb
> pages (last time I looked), so long term pinning should not be an issue.
> However, for things like memory offline or alloc_contig_range() we want to

Memory offline would be an issue, though, so we shouldn't allow pinning
of any pages in the *movable zone*.

Isn't alloc_contig_range just best effort? Then it wouldn't be a big
problem to allow pinning in those areas. What matters is whether the
target range of alloc_contig_range is backed by CMA or by the movable
zone, and what the use cases are.

IOW, pinning in the movable zone should never be allowed. In the CMA
case, if the pages are used as normal process memory instead of by
hugeTLB, we shouldn't allow longterm pinning since someone could claim
that memory at any time. However, we are fine with allowing longterm
pinning if the CMA memory has already been claimed and mapped into
userspace (the hugeTLB case, IIUC).
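
A purely hypothetical sketch of the policy I'm arguing for, just to
illustrate (may_longterm_pin() doesn't exist anywhere):

static bool may_longterm_pin(struct page *page)
{
	/* movable zone: never allow longterm pins (breaks memory offline) */
	if (zone_idx(page_zone(page)) == ZONE_MOVABLE)
		return false;
	/* CMA: allow only if the area is already claimed, e.g. by hugeTLB */
	if (is_migrate_cma_page(page))
		return PageHuge(page);
	return true;
}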

Please correct me if I'm missing something.

Thanks.

Thread overview: 26+ messages
2022-05-13  9:03 [syzbot] WARNING in follow_hugetlb_page syzbot
2022-05-13 16:43 ` syzbot
2022-05-13 17:26   ` Andrew Morton
2022-05-13 18:09     ` Mike Kravetz
2022-05-13 22:48       ` Mike Kravetz
2022-05-13 23:19         ` Andrew Morton
2022-05-13 23:54           ` Minchan Kim
2022-05-14  0:09             ` John Hubbard
2022-05-14  0:26               ` Minchan Kim
2022-05-14  0:56                 ` John Hubbard
2022-05-14  1:16                   ` John Hubbard
2022-05-17  3:37                   ` Mike Kravetz
2022-05-18  7:12                     ` John Hubbard
2022-05-20 22:19                     ` Minchan Kim [this message]
2022-05-20 22:56                       ` John Hubbard
2022-05-20 23:25                         ` Minchan Kim
2022-05-20 23:31                         ` Mike Kravetz
2022-05-20 23:43                           ` Minchan Kim
2022-05-21  0:04                             ` Mike Kravetz
2022-05-21 15:24                               ` Minchan Kim
2022-05-21 15:51                                 ` David Hildenbrand
2022-05-21 16:36                                   ` Minchan Kim
2022-05-21 16:46                                     ` David Hildenbrand
2022-05-21 18:25                                       ` Minchan Kim
2022-05-21 23:50                                         ` Mike Kravetz
2022-05-14  0:18             ` Andrew Morton
