From: "Paul E. McKenney" <paulmck@kernel.org>
To: Joel Fernandes <joel@joelfernandes.org>
Cc: Uladzislau Rezki <urezki@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>, RCU <rcu@vger.kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()
Date: Fri, 17 Jan 2020 13:37:21 -0800	[thread overview]
Message-ID: <20200117213721.GN2935@paulmck-ThinkPad-P72> (raw)
In-Reply-To: <20200117185732.GH246464@google.com>

On Fri, Jan 17, 2020 at 01:57:32PM -0500, Joel Fernandes wrote:
> On Fri, Jan 17, 2020 at 06:52:17PM +0100, Uladzislau Rezki wrote:
> > > > > > But rcuperf uses a single block size, which turns into kfree_bulk() using
> > > > > > a single slab, which results in good locality of reference.  So I have to
> > > > > 
> > > > > Did you mean a single cache when you say "single slab"? Just to
> > > > > mention, the number of slabs (in a single cache) when a large number
> > > > > of objects are allocated is more than one. With the current rcuperf, I
> > > > > see hundreds of slabs (each slab being one page) in the kmalloc-32
> > > > > cache. Each slab contains around 128 objects of the kfree_rcu test
> > > > > type (24-byte objects rounded up to the 32-byte slab object size).
> > > > > 
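A quick back-of-the-envelope check of those kmalloc-32 numbers, as a
standalone userspace sketch (the 4096-byte slab page and the object sizes
are taken from the discussion above):

#include <stdio.h>

int main(void)
{
	const unsigned int page_size = 4096;	/* one slab == one page here */
	const unsigned int obj_size = 24;	/* rcuperf test object */
	const unsigned int slab_obj = 32;	/* kmalloc-32 object size */

	/* 4096 / 32 == 128 objects per slab page, as quoted above. */
	printf("objects per slab page: %u\n", page_size / slab_obj);
	/* Rounding 24 up to 32 wastes 8 bytes per object. */
	printf("bytes lost per object: %u\n", slab_obj - obj_size);
	return 0;
}
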
> > > > I think that is about using different slab caches to break locality. It
> > > > makes sense, IMHO, because usually the system makes use of different
> > > > slab caches for different object sizes. On the other hand, I guess there
> > > > are test cases where only one slab cache gets used.
> > > 
> > > I was wondering about "locality". A cache can be split into many slabs.
> > > Only the data within a page is local (contiguous). If there are a large
> > > number of objects, allocation moves on to a new slab (in the same cache).
> > > At least for the kmalloc caches, there is only one slab per page. So, for
> > > example, once the kmalloc-32 cache holds more than 128 objects, allocation
> > > goes to a different slab / page. So how is there still locality?
> > > 
> > Hmm.. At a high level:
> > 
> > one slab cache manages a specific object size, i.e. the slab memory consists
> > of pages (4096 bytes or so; contiguous at first, though probably not once the
> > slab grows) divided into equal-sized objects. For example, when kmalloc() gets
> > called, the appropriate cache (the slab cache that serves only that specific
> > size) is selected and an object from it is returned.
> > 
> > But that is theory; I have not deeply analyzed how the SLAB allocator works
> > internally, so I can be wrong :)
> > 
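That size-class selection can be modeled in a few lines. A simplified
userspace illustration, not the kernel's actual lookup; the class table is
just a subset of the usual kmalloc caches:

#include <stdio.h>

/* A subset of the usual kmalloc size classes. */
static const unsigned int classes[] = { 8, 16, 32, 64, 96, 128, 192, 256 };

/* Pick the smallest class whose object size fits the request. */
static unsigned int size_class(unsigned int size)
{
	unsigned int i;

	for (i = 0; i < sizeof(classes) / sizeof(classes[0]); i++)
		if (size <= classes[i])
			return classes[i];
	return 0;	/* larger requests are handled elsewhere */
}

int main(void)
{
	/* A 24-byte request lands in kmalloc-32, as discussed above. */
	printf("kmalloc(24) -> kmalloc-%u\n", size_class(24));
	return 0;
}
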
> > You mentioned 128 objects per slab in the kmalloc-32 cache. But all of
> > them follow each other; I mean the layout is sequential, like a regular array.
> 
> Yes, for those 128 objects it is sequential. But the next 128 could be on
> some other page, is what I was saying. And we are allocating tens of thousands
> of objects in this test. (I believe pages are contiguous only within a slab,
> not across different slabs in the same cache.)
> 
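To see that page crossing directly, here is a hypothetical kernel-module
fragment. It is illustrative only: where kmalloc() places objects is entirely
up to the allocator, and error handling is omitted for brevity:

#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/printk.h>

static void *objs[256];

static void show_kmalloc_pages(void)
{
	int i;

	/* 24-byte requests land in kmalloc-32: 128 objects per slab page. */
	for (i = 0; i < 256; i++)
		objs[i] = kmalloc(24, GFP_KERNEL);

	/*
	 * Objects 0..127 typically share one slab page; object 128 starts
	 * a new slab on a different, not necessarily adjacent, page.
	 */
	for (i = 0; i < 256; i++)
		pr_info("objs[%3d] virtual page index %lx\n", i,
			(unsigned long)objs[i] >> PAGE_SHIFT);

	for (i = 0; i < 256; i++)
		kfree(objs[i]);
}
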
> > In that sense, freeing can be beneficial because when any object is accessed,
> > the whole CPU cache line is fetched (if it was not already), usually 64K.
> 
> You mean the size of the whole L1 cache, right? Cache lines are on the order of bytes.
> 
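For the record, a typical x86 cache line is 64 bytes (64K is a plausible L1
cache size), so two 32-byte objects share a line, and freeing a whole slab
page sequentially touches only 64 lines. In numbers, as a userspace sketch:

#include <stdio.h>

int main(void)
{
	const unsigned int line_size = 64;	/* typical x86 cache line, bytes */
	const unsigned int obj_size = 32;	/* kmalloc-32 object */
	const unsigned int per_slab = 4096 / obj_size;	/* 128 objects/page */

	/* Two 32-byte objects per 64-byte line... */
	printf("objects per cache line: %u\n", line_size / obj_size);
	/* ...so a whole slab page is covered by 64 line fetches. */
	printf("lines touched per slab page: %u\n",
	       per_slab * obj_size / line_size);
	return 0;
}
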
> > That is what I meant by "locality". In order to "break" it, I meant to allocate
> > from different slab caches to see how kfree_bulk() behaves in that case, which
> > is a more realistic scenario and workload, I think.
> 
> Ok, agreed.
> (BTW I do agree your patch is beneficial, just wanted to get the slab
> discussion right).
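
One way to build such a mixed workload is sketched below. This is a
hypothetical fragment, not the rcuperf patch itself; the size table is made
up for illustration and error handling is omitted:

#include <linux/kernel.h>
#include <linux/slab.h>

/* Sizes chosen to land in several different kmalloc caches. */
static const size_t mixed_sizes[] = { 8, 24, 64, 200, 1024 };

static void stress_mixed_caches(void)
{
	void *objs[64];
	size_t i;

	for (i = 0; i < ARRAY_SIZE(objs); i++)
		objs[i] = kmalloc(mixed_sizes[i % ARRAY_SIZE(mixed_sizes)],
				  GFP_KERNEL);

	/*
	 * kfree_bulk() now sees objects from several caches, so it cannot
	 * stay on one cache's fast path the way a single-size test can.
	 */
	kfree_bulk(ARRAY_SIZE(objs), objs);
}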

Thank you both!

Then should I be looking for an updated version of the patch with an upgraded
commit log?  Or is there more investigation/testing/review in progress?

							Thanx, Paul
