From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Greg Thelen <gthelen@google.com>,
Aruna Ramakrishna <aruna.ramakrishna@oracle.com>,
Christoph Lameter <cl@linux.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch] mm, slab: faster active and free stats
Date: Fri, 11 Nov 2016 14:53:26 +0900 [thread overview]
Message-ID: <20161111055326.GA16336@js1304-P5Q-DELUXE> (raw)
In-Reply-To: <alpine.DEB.2.10.1611091637460.125130@chino.kir.corp.google.com>
On Wed, Nov 09, 2016 at 04:38:08PM -0800, David Rientjes wrote:
> On Tue, 8 Nov 2016, Andrew Morton wrote:
>
> > > Reading /proc/slabinfo or monitoring slabtop(1) can become very expensive
> > > if there are many slab caches and if there are very lengthy per-node
> > > partial and/or free lists.
> > >
> > > Commit 07a63c41fa1f ("mm/slab: improve performance of gathering slabinfo
> > > stats") addressed the per-node full lists which showed a significant
> > > improvement when no objects were freed. This patch has the same
> > > motivation and optimizes the remainder of the usecases where there are
> > > very lengthy partial and free lists.
> > >
> > > This patch maintains per-node active_slabs (full and partial) and
> > > free_slabs rather than iterating the lists at runtime when reading
> > > /proc/slabinfo.
> >
> > Are there any nice numbers you can share?
> >
>
> Yes, please add this to the description:
>
>
> When allocating 100GB of slab from a test cache where every slab page is
> on the partial list, reading /proc/slabinfo (includes all other slab
> caches on the system) takes ~247ms on average with 48 samples.
>
> As a result of this patch, the same read takes ~0.856ms on average.
Hello, David.
Maintaining active/free_slab counters looks quite complex, and I don't
think we need to maintain these counters to make slabinfo faster.
The key point is to avoid iterating the n->slabs_partial list.
We can calculate active slabs/objects with the following equations, as
you did in this patch:
active_slab(n) = n->num_slab - the number of free_slab
active_object(n) = n->num_slab * cachep->num - n->free_objects
To get the number of free slabs, we would need to iterate the
n->slabs_free list, but I guess that list is small enough.
If you don't want to iterate n->slabs_free when reading slabinfo, just
maintaining a count of free slabs would be enough.
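The arithmetic above can be sketched as follows. This is a simplified, hypothetical mirror of the relevant kmem_cache_node fields (num_slabs, free_slabs, free_objects) and of cachep->num as objs_per_slab, not the actual kernel code:

```c
/* Hypothetical, simplified stand-in for the relevant
 * kmem_cache_node fields; not the real kernel structures. */
struct node_stats {
	unsigned long num_slabs;    /* total slabs on this node */
	unsigned long free_slabs;   /* slabs on n->slabs_free */
	unsigned long free_objects; /* n->free_objects */
};

/* active_slab(n) = n->num_slab - the number of free slabs */
static unsigned long active_slabs(const struct node_stats *n)
{
	return n->num_slabs - n->free_slabs;
}

/* active_object(n) = n->num_slab * cachep->num - n->free_objects,
 * where objs_per_slab corresponds to cachep->num. */
static unsigned long active_objects(const struct node_stats *n,
				    unsigned int objs_per_slab)
{
	return n->num_slabs * objs_per_slab - n->free_objects;
}
```

With counters like these, reading slabinfo is O(1) per node instead of walking the partial list.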
Thanks.
Thread overview: 9+ messages
2016-11-08 23:06 [patch] mm, slab: faster active and free stats David Rientjes
2016-11-08 23:17 ` Andrew Morton
2016-11-10 0:38 ` David Rientjes
2016-11-11 5:53 ` Joonsoo Kim [this message]
2016-11-11 10:30 ` David Rientjes
2016-11-28 7:40 ` Joonsoo Kim
2016-11-30 0:56 ` David Rientjes
2016-12-02 7:58 ` Joonsoo Kim / Senior Researcher / SW Platform Lab, AOT Team (iamjoonsoo.kim@lge.com)
2016-12-05 4:23 ` [patch -mm] mm, slab: maintain total slab count instead of active count David Rientjes