RCU Archive on lore.kernel.org
From: Uladzislau Rezki <urezki@gmail.com>
To: Joel Fernandes <joel@joelfernandes.org>
Cc: Uladzislau Rezki <urezki@gmail.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>, RCU <rcu@vger.kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in kfree_rcu()
Date: Fri, 17 Jan 2020 18:52:17 +0100
Message-ID: <20200117175217.GA23622@pc636>
In-Reply-To: <20200115225350.GA246464@google.com>

> > > > But rcuperf uses a single block size, which turns into kfree_bulk() using
> > > > a single slab, which results in good locality of reference.  So I have to
> > > 
> > > Did you mean a "single cache" when you said "single slab"? Just to
> > > mention, the number of slabs (in a single cache) is more than one (not
> > > single) when a large number of objects are allocated. With current rcuperf,
> > > I see 100s of slabs (each slab being one page) in the kmalloc-32 cache. Each
> > > slab contains around 128 objects of type kfree_rcu (a 24-byte object aligned
> > > to a 32-byte slab object).
> > > 
> > I think that is about using different slab caches to break locality. It
> > makes sense, IMHO, because the system usually makes use of different slab
> > caches due to different object sizes. On the other hand, I guess there are
> > test cases where only one slab cache gets used.
> 
> I was wondering about "locality". A cache can be split into many slabs. Only
> the data on a page is local (contiguous). If a large number of objects are
> allocated, allocation spills over to a new slab (in the same cache). At least
> for the kmalloc caches, there is only 1 slab per page. So for example, if the
> kmalloc-32 cache holds more than 128 objects, the extras go to a different
> slab / page. So how is there still locality?
> 
Hmm.. At a high level:

one slab cache manages a single object size, i.e. the slab memory consists of
pages (4096 bytes each; the pages within one slab are contiguous, though
separate slabs of the same cache generally are not) divided into equal-sized
objects. For example, when kmalloc() gets called, the appropriate cache (the
one that serves only that specific size) is selected and a free object from it
is returned.
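
As a rough sketch of that flow (illustrative kernel-style C, not the actual
slab internals; the struct and its 24-byte size are assumptions based on the
kfree_rcu numbers you mention):

  #include <linux/slab.h>

  /* 24 bytes on a 64-bit kernel, similar to the kfree_rcu case above. */
  struct demo_obj {
          struct demo_obj *next;
          void *ptr;
          unsigned long flags;
  };

  static struct demo_obj *demo_alloc(void)
  {
          /*
           * kmalloc() rounds the 24-byte request up to the nearest
           * size class, so the object is served from the kmalloc-32
           * cache, i.e. from a slab carved into 32-byte objects.
           */
          return kmalloc(sizeof(struct demo_obj), GFP_KERNEL);
  }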

But that is theory; I have not deeply analyzed how SLAB works internally, so I
can be wrong :)

You mentioned 128 objects per slab in the kmalloc-32 slab cache. But all of
them follow each other in memory, i.e. the layout is sequential, like a regular
array. In that sense freeing can be beneficial, because an access to any object
fetches a whole CPU cache line (if it was not fetched before), usually 64 bytes.
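
To illustrate (a hypothetical module-style snippet; whether two allocations
really land in the same slab page depends on allocator state, so the printed
delta is not guaranteed):

  void *a = kmalloc(32, GFP_KERNEL);
  void *b = kmalloc(32, GFP_KERNEL);

  if (a && b)
          /*
           * When both objects come from the same partial slab they are
           * typically adjacent, so the delta is the object size (32),
           * and two such neighbours fit in one 64-byte cache line that
           * is fetched on the first access.
           */
          pr_info("a=%px b=%px delta=%ld\n", a, b, (long)(b - a));

  kfree(a);
  kfree(b);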

That is what I meant by "locality". To "break" it, I meant allocating from
different slab caches to see how kfree_bulk() behaves in that case, which I
think is a more realistic scenario and workload.
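
Something like the following is what I have in mind (only a sketch with
made-up sizes, error handling omitted; kfree_bulk() accepts such a mixed
array because it resolves each object's cache from its page):

  void *objs[16];
  static const size_t sizes[] = { 32, 64, 128, 256 };
  int i;

  for (i = 0; i < 16; i++)
          objs[i] = kmalloc(sizes[i % ARRAY_SIZE(sizes)], GFP_KERNEL);

  /*
   * The objects now come from four different kmalloc caches, so the
   * bulk free walks unrelated slab pages instead of marching through
   * one sequential, array-like slab.
   */
  kfree_bulk(ARRAY_SIZE(objs), objs);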

--
Vlad Rezki

Thread overview: 18+ messages
2019-12-31 12:22 Uladzislau Rezki (Sony)
2020-01-13 19:03 ` Paul E. McKenney
2020-01-14 16:49   ` Joel Fernandes
2020-01-15 13:14     ` Uladzislau Rezki
2020-01-15 22:53       ` Joel Fernandes
2020-01-17 17:52         ` Uladzislau Rezki [this message]
2020-01-17 18:57           ` Joel Fernandes
2020-01-17 21:37             ` Paul E. McKenney
2020-01-17 21:59               ` Joel Fernandes
2020-01-19 13:03                 ` Uladzislau Rezki
2020-01-16  1:14 ` Joel Fernandes
2020-01-16  2:41   ` Paul E. McKenney
2020-01-16 17:27     ` Uladzislau Rezki
2020-01-16 17:44       ` Paul E. McKenney
2020-01-16 17:24   ` Uladzislau Rezki