From: Roman Gushchin <guro@fb.com>
To: Christopher Lameter <cl@linux.com>
Cc: Wen Yang <wenyang@linux.alibaba.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Xunlei Pang <xlpang@linux.alibaba.com>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm/slub: improve count_partial() for CONFIG_SLUB_CPU_PARTIAL
Date: Mon, 24 Feb 2020 08:57:50 -0800
Message-ID: <20200224165750.GA478187@carbon.dhcp.thefacebook.com>
In-Reply-To: <alpine.DEB.2.21.2002240126190.13486@www.lameter.com>

On Mon, Feb 24, 2020 at 01:29:09AM +0000, Christoph Lameter wrote:
> On Sat, 22 Feb 2020, Wen Yang wrote:

Hello, Christopher!

> 
> > We also observed that in this scenario, CONFIG_SLUB_CPU_PARTIAL is turned
> > on by default, and count_partial() is useless because the returned number
> > is far from the reality.
> 
> Well, it's not useless. It's just not counting the partial objects in per-cpu
> partial slabs. Those are counted by a different counter.

Do you mean CPU_PARTIAL_ALLOC or something else?

"useless" isn't the most accurate wording, sorry for that.

The point is that the number of active objects displayed in /proc/slabinfo
is misleading if percpu partial lists are used, so it's strange to pay for it
by potentially slowing down concurrent allocations every time slabinfo is read.
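
For reference, count_partial() has to walk the whole per-node partial list
under n->list_lock (roughly the code below, as in current mm/slub.c), so a
single read of /proc/slabinfo can hold that lock for a long time on machines
with long partial lists and stall concurrent allocations/frees that need it:

static unsigned long count_partial(struct kmem_cache_node *n,
				   int (*get_count)(struct page *))
{
	unsigned long flags;
	unsigned long x = 0;
	struct page *page;

	/* every slabinfo read takes the per-node list_lock */
	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(page, &n->partial, slab_list)
		x += get_count(page);
	spin_unlock_irqrestore(&n->list_lock, flags);
	return x;
}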

> 
> > Therefore, we can simply return 0, then nr_free is also 0, and eventually
> > active_objects == total_objects. We do not introduce any regression, and
> > it's preferable to show the unrealistic uniform 100% slab utilization
> > rather than some very high but incorrect value.
> 
> I suggest that you simply use the number of partial slabs and multiply
> them by the number of objects in a slab and use that as a value. Both
> values are readily available via /sys/kernel/slab/<...>/

So maybe something like this?

@@ -5907,7 +5907,9 @@ void get_slabinfo(struct kmem_cache *s, struct slabinfo *sinfo)
 	for_each_kmem_cache_node(s, node, n) {
 		nr_slabs += node_nr_slabs(n);
 		nr_objs += node_nr_objs(n);
+#ifndef CONFIG_SLUB_CPU_PARTIAL
 		nr_free += count_partial(n, count_free);
+#endif
 	}
 
 	sinfo->active_objs = nr_objs - nr_free;
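
Or, if we want to stay closer to your nr_partial * objects-per-slab idea, a
rough in-kernel approximation could look like the sketch below (untested, the
helper name is made up; oo_objects() and kmem_cache_node::nr_partial are the
existing SLUB fields):

/* Hypothetical sketch: estimate free objects without walking the partial list. */
static unsigned long count_partial_approx(struct kmem_cache *s,
					  struct kmem_cache_node *n)
{
	/* upper bound: assume every object on a partial slab is free */
	return n->nr_partial * oo_objects(s->oo);
}

It over-reports nr_free (and so under-reports active_objs), but it's O(1) and
doesn't touch list_lock.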


Thank you!


Thread overview: 7+ messages
2020-02-22  9:24 [PATCH] mm/slub: improve count_partial() for CONFIG_SLUB_CPU_PARTIAL Wen Yang
2020-02-24  1:29 ` Christopher Lameter
2020-02-24 16:57   ` Roman Gushchin [this message]
2020-02-25 15:49     ` Vlastimil Babka
2020-02-26 18:31     ` Christopher Lameter
2020-02-27 18:35       ` Roman Gushchin
2020-03-03 16:05         ` Christopher Lameter
