From: Matthew Wilcox <willy@infradead.org>
To: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>,
	liam.howlett@oracle.com, walken.cr@gmail.com, hannes@cmpxchg.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: Memory allocation on speculative fastpaths
Date: Tue, 3 May 2022 19:28:23 +0100
Message-ID: <YnF0RyBaBSC1mdKo@casper.infradead.org>
In-Reply-To: <20220503163905.GM1790663@paulmck-ThinkPad-P17-Gen-1>

On Tue, May 03, 2022 at 09:39:05AM -0700, Paul E. McKenney wrote:
> On Tue, May 03, 2022 at 06:04:13PM +0200, Michal Hocko wrote:
> > On Tue 03-05-22 08:59:13, Paul E. McKenney wrote:
> > > Hello!
> > > 
> > > Just following up from off-list discussions yesterday.
> > > 
> > > The requirements to allocate on an RCU-protected speculative fastpath
> > > seem to be as follows:
> > > 
> > > 1.	Never sleep.
> > > 2.	Never reclaim.
> > > 3.	Leave emergency pools alone.
> > > 
> > > Any others?
> > > 
> > > If those rules suffice, and if my understanding of the GFP flags is
> > > correct (ha!!!), then the following GFP flags should cover this:
> > > 
> > > 	__GFP_NOMEMALLOC | __GFP_NOWARN
> > 
> > GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN
> 
> Ah, good point on GFP_NOWAIT, thank you!
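
For concreteness, an allocation with exactly that flag combination would
look something like the sketch below (the helper name
spec_alloc_pte_page() is invented purely for illustration; only the flag
combination comes from this thread):

	/* Non-sleeping allocation on an RCU-protected speculative
	 * fastpath: GFP_NOWAIT rules out sleeping and direct reclaim
	 * (it may still wake kswapd), __GFP_NOMEMALLOC keeps us out of
	 * the emergency reserves, and __GFP_NOWARN suppresses the
	 * allocation-failure warning, since failure here just means
	 * "take the slow path". */
	static struct page *spec_alloc_pte_page(void)
	{
		return alloc_page(GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
	}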

Johannes (I think it was?) made the point to me that if we have another
task very slowly freeing memory, a task in this path can take advantage
of that other task's hard work and never go into reclaim.  So the
approach we should take is:

/* Allocate each page-table level without sleeping, without entering
 * direct reclaim, and without touching the emergency reserves; if
 * another task is freeing memory, these can succeed without us ever
 * reclaiming ourselves. */
p4d_alloc(GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
pud_alloc(GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
pmd_alloc(GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);

if (failure) {
  /* Leave the RCU read-side critical section, reclaim from a context
   * that may sleep, and tell the caller to retry the fault. */
  rcu_read_unlock();
  do_reclaim();
  return VM_FAULT_RETRY;
}

... but all this is now moot since the approach we agreed to yesterday
is:

rcu_read_lock();
vma = vma_lookup();
if (down_read_trylock(&vma->sem)) {
	/* The per-VMA lock now keeps the VMA alive, so we can drop
	 * RCU protection and handle the fault on the fast path. */
	rcu_read_unlock();
} else {
	/* Lock contended: fall back to the mmap_lock slow path and
	 * look the VMA up again under it. */
	rcu_read_unlock();
	mmap_read_lock(mm);
	vma = vma_lookup();
	down_read(&vma->sem);
}

... and we then execute the page table allocation under the protection of
the vma->sem.
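
Spelled out a bit further, the tail of the fastpath might then look
roughly like this (illustrative only; vma->sem is the per-VMA lock from
the pseudocode above, and __handle_mm_fault() merely stands in for
whatever function ends up doing the actual work):

	/* The VMA is now pinned by vma->sem rather than by RCU, so the
	 * page table allocations below can sleep and reclaim normally
	 * and need no special GFP flags. */
	ret = __handle_mm_fault(vma, address, flags);
	up_read(&vma->sem);
	return ret;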

At least, that's what I think we agreed to yesterday.


Thread overview: 12+ messages
2022-05-03 15:59 Memory allocation on speculative fastpaths Paul E. McKenney
2022-05-03 16:04 ` Michal Hocko
2022-05-03 16:39   ` Paul E. McKenney
2022-05-03 18:28     ` Matthew Wilcox [this message]
2022-05-03 23:15       ` Suren Baghdasaryan
2022-05-03 23:45         ` Michal Hocko
2022-05-04  0:22           ` Matthew Wilcox
2022-05-04 16:23         ` Johannes Weiner
2022-05-24 20:37           ` Vlastimil Babka
2022-05-25 13:12             ` Johannes Weiner
2022-05-04  8:20       ` Michel Lespinasse
2022-05-04 16:52         ` Matthew Wilcox
