From: "Paul E. McKenney" <paulmck@kernel.org>
To: Uladzislau Rezki <urezki@gmail.com>
Cc: Joel Fernandes <joel@joelfernandes.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	rcu@vger.kernel.org, willy@infradead.org, peterz@infradead.org,
	neilb@suse.com, vbabka@suse.cz, mgorman@suse.de,
	Andrew Morton <akpm@linux-foundation.org>,
	Josh Triplett <josh@joshtriplett.org>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Steven Rostedt <rostedt@goodmis.org>
Subject: Re: [PATCH RFC] rcu/tree: Use GFP_MEMALLOC for alloc memory to free memory pattern
Date: Wed, 1 Apr 2020 12:34:05 -0700
Message-ID: <20200401193405.GH19865@paulmck-ThinkPad-P72>
In-Reply-To: <20200401190548.GA6360@pc636>

On Wed, Apr 01, 2020 at 09:05:48PM +0200, Uladzislau Rezki wrote:
> On Wed, Apr 01, 2020 at 11:54:39AM -0700, Paul E. McKenney wrote:
> > On Wed, Apr 01, 2020 at 08:37:45PM +0200, Uladzislau Rezki wrote:
> > > On Wed, Apr 01, 2020 at 11:26:15AM -0700, Paul E. McKenney wrote:
> > > > On Wed, Apr 01, 2020 at 08:16:01PM +0200, Uladzislau Rezki wrote:
> > > > > > > > 
> > > > > > > > Right. In discussion with Paul, we agreed that it is better if we
> > > > > > > > pre-allocate N array blocks per CPU and use them for the cache, with
> > > > > > > > N defaulting to 1 and tunable via a boot parameter. I agree with this.
> > > > > > > > 
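
A boot-time tunable of that kind could be as simple as a module parameter on
the built-in RCU code. The name below is an assumption for illustration, not
the parameter that eventually landed upstream:

<snip>
#include <linux/moduleparam.h>

/*
 * Assumed name, for illustration only.  If this lived in kernel/rcu/tree.c,
 * the built-in module parameter would be set at boot time as
 * "rcutree.rcu_nr_cached_blocks=".
 */
static int rcu_nr_cached_blocks = 1;
module_param(rcu_nr_cached_blocks, int, 0444);
<snip>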
> > > > > > > As discussed before, we can make use of the memory pool API for this
> > > > > > > purpose. But I am not sure whether it should be one pool per CPU or
> > > > > > > a single shared pool that would contain NR_CPUS * N pre-allocated
> > > > > > > blocks.
> > > > > > 
> > > > > > There are advantages and disadvantages either way.  The advantage of the
> > > > > > per-CPU pool is that you don't have to worry about something like lock
> > > > > > contention causing even more pain during an OOM event.  One potential
> > > > > > problem with the per-CPU pool can arise when callbacks are offloaded,
> > > > > > in which case the CPUs needing the memory might never get it, because
> > > > > > in the offloaded case (RCU_NOCB_CPU=y) the CPU posting callbacks might
> > > > > > never invoke them.
> > > > > > 
> > > > > > But from what I know now, systems built with CONFIG_RCU_NOCB_CPU=y
> > > > > > either don't have heavy callback loads (HPC systems) or are carefully
> > > > > > configured (real-time systems).  Plus large systems would probably end
> > > > > > up needing something pretty close to a slab allocator to keep from dying
> > > > > > from lock contention, and it is hard to justify that level of complexity
> > > > > > at this point.
> > > > > > 
> > > > > > Or is there some way to mark a specific slab allocator instance as being
> > > > > > able to keep some amount of memory no matter what the OOM conditions are?
> > > > > > If not, the current per-CPU pre-allocated cache is a better choice in the
> > > > > > near term.
> > > > > > 
> > > > > As for mempool API:
> > > > > 
> > > > > mempool_alloc() first tries a regular allocation, taking into account
> > > > > the passed gfp_t bitmask. If that fails due to memory pressure, it
> > > > > falls back to the reserved pre-allocated pool, which consists of the
> > > > > desired number of elements (pre-allocated when the pool is created).
> > > > > 
> > > > > mempool_free() returns an element to the pool: if it detects that the
> > > > > number of reserved elements is lower than the minimum allowed, it adds
> > > > > the element back to the reserved pool, i.e. refills it. Otherwise it
> > > > > just calls kfree(), or whatever we define as the element-freeing
> > > > > function.
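
A minimal sketch of the behavior described above, using the in-tree mempool
helpers; the element size, reserve count, and function names below are
illustrative assumptions, not values from the patch under discussion:

<snip>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/slab.h>

#define OBJ_SIZE		512	/* assumed element size */
#define MIN_RESERVED_OBJS	4	/* elements pre-allocated at pool creation */

static mempool_t *obj_pool;

static int __init obj_pool_init(void)
{
	/* Pre-allocates MIN_RESERVED_OBJS elements of OBJ_SIZE via kmalloc(). */
	obj_pool = mempool_create_kmalloc_pool(MIN_RESERVED_OBJS, OBJ_SIZE);
	return obj_pool ? 0 : -ENOMEM;
}

static void *obj_get(void)
{
	/* Regular kmalloc() first; the reserve is used only if that fails. */
	return mempool_alloc(obj_pool, GFP_NOWAIT);
}

static void obj_put(void *obj)
{
	/* Refills the reserve if below the minimum, otherwise kfree()s. */
	mempool_free(obj, obj_pool);
}
<snip>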
> > > > 
> > > > Unless I am missing something, mempool_alloc() acquires a per-mempool
> > > > lock on each invocation under OOM conditions.  For our purposes, this
> > > > is essentially a global lock.  This will not be at all acceptable on a
> > > > large system.
> > > > 
> > > It uses pool->lock to access the reserved objects, so if we had one
> > > memory pool per CPU, the serialization would be per-CPU only.
> > 
> > I am having difficulty parsing your sentence.  It looks like your thought
> > is to invoke mempool_create() for each CPU, so that the locking would be
> > on a per-CPU basis, as in 128 invocations of mempool_init() on a system
> > having 128 hardware threads.  Is that your intent?
> > 
> In order to keep that serialization per-CPU, you need one pool per CPU.
> So if you have 128 CPUs, it means:
> 
> <snip>
> for_each_possible_cpu(...)
>     cpu_pool = mempool_create();
> <snip>
> 
> but please keep in mind that this is not my proposal; I only had a thought
> about the mempool API because it has pre-reserve logic inside.
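
Spelled out with the actual mempool API, the per-CPU arrangement sketched in
the loop above might look roughly like this; the element size and per-CPU
reserve N are assumptions for illustration:

<snip>
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/percpu.h>

#define OBJ_SIZE	512	/* assumed element size */
#define N		2	/* assumed per-CPU reserve */

static DEFINE_PER_CPU(mempool_t *, cpu_pool);

static int __init cpu_pools_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		/* Each pool pre-allocates N elements for its CPU. */
		mempool_t *p = mempool_create_kmalloc_pool(N, OBJ_SIZE);

		if (!p)
			return -ENOMEM;
		per_cpu(cpu_pool, cpu) = p;
	}
	return 0;
}
<snip>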

OK, fair point on use of mempool API, but my guess is that extending
the current kfree_rcu() logic will be simpler.
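
For illustration, a per-CPU pre-allocated cache along the lines discussed
earlier in the thread might look roughly as follows; the names and cache
depth are assumptions, not the code that exists in kernel/rcu/tree.c:

<snip>
#include <linux/gfp.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

#define KRC_CACHE_MAX	2	/* assumed number of cached pages per CPU */

struct krc_cache {
	raw_spinlock_t lock;
	void *pages[KRC_CACHE_MAX];	/* pre-allocated backing pages */
	int nr;
};

static DEFINE_PER_CPU(struct krc_cache, krc_cache) = {
	.lock = __RAW_SPIN_LOCK_UNLOCKED(krc_cache.lock),
};

static void *krc_cache_get(void)
{
	struct krc_cache *krc = raw_cpu_ptr(&krc_cache);
	void *page = NULL;

	/* Only this CPU's lock is taken, so contention stays local. */
	raw_spin_lock(&krc->lock);
	if (krc->nr)
		page = krc->pages[--krc->nr];
	raw_spin_unlock(&krc->lock);

	/* Fall back to the page allocator when the cache is empty. */
	if (!page)
		page = (void *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
	return page;
}
<snip>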

							Thanx, Paul
