From: Jann Horn <jannh@google.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	 David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	 Andrew Morton <akpm@linux-foundation.org>,
	Linux-MM <linux-mm@kvack.org>,
	 kernel list <linux-kernel@vger.kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	 Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Roman Gushchin <guro@fb.com>,
	 Johannes Weiner <hannes@cmpxchg.org>,
	Shakeel Butt <shakeelb@google.com>,
	 Suren Baghdasaryan <surenb@google.com>,
	Minchan Kim <minchan@kernel.org>,
	Michal Hocko <mhocko@kernel.org>
Subject: Re: SLUB: percpu partial object count is highly inaccurate, causing some memory wastage and maybe also worse tail latencies?
Date: Wed, 13 Jan 2021 23:37:34 +0100	[thread overview]
Message-ID: <CAG48ez2-DzC4qf_vzqj+VoGu4nM7275WmyxiPMT7dAp+MHDtOg@mail.gmail.com> (raw)
In-Reply-To: <2f0f46e8-2535-410a-1859-e9cfa4e57c18@suse.cz>

On Wed, Jan 13, 2021 at 8:14 PM Vlastimil Babka <vbabka@suse.cz> wrote:
> On 1/12/21 12:12 AM, Jann Horn wrote:
> It doesn't help that slabinfo (global or per-memcg) is also
> inaccurate as it cannot count free objects on per-cpu partial slabs and thus
> reports them as active.

Maybe SLUB could be taught to track how many objects are in the percpu
machinery, and then print that number separately so that you can at
least know how much data you're missing without having to collect data
with IPIs...
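
Very roughly, something like this (untested sketch, function name made
up; it just sums the existing per-cpu approximations, including the
pobjects estimate whose inaccuracy started this thread, so it's an
upper bound rather than an exact count):

/*
 * Estimate how many objects are sitting in a cache's per-cpu machinery,
 * so slabinfo could report it as a separate column instead of silently
 * counting those objects as active. Racy and approximate by design.
 */
static unsigned long count_percpu_cached_objects(struct kmem_cache *s)
{
        unsigned long cached = 0;
        int cpu;

        for_each_online_cpu(cpu) {
                struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
                struct page *page;

                /*
                 * The current cpu slab: we can't tell how many of its
                 * objects are free, so count all of them (upper bound).
                 */
                page = READ_ONCE(c->page);
                if (page)
                        cached += page->objects;

#ifdef CONFIG_SLUB_CPU_PARTIAL
                /*
                 * The head page carries the cumulative pobjects estimate
                 * for the whole per-cpu partial list.
                 */
                page = READ_ONCE(c->partial);
                if (page)
                        cached += page->pobjects;
#endif
        }
        return cached;
}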

> > It might be a good idea to figure out whether it is possible to
> > efficiently keep track of a more accurate count of the free objects on
>
> As long as there are some inuse objects, it shouldn't matter much whether the slab
> is sitting on a per-cpu partial list or a per-node list, as it can't be freed anyway.
> It becomes a real problem only after the slab becomes fully free. If we detected
> that in __slab_free() also for already-frozen slabs, we would need to know which
> CPU this slab belongs to (currently that's not tracked afaik),

Yeah, but at least on 64-bit systems we still have 32 completely
unused bits in the counters field that's updated via cmpxchg_double on
struct page. (On 32-bit systems the bitfields are also wider than they
strictly need to be, I think, at least if the system has 4K page
size.) So at least on 64-bit systems, we could squeeze a CPU number in
there, and then you'd know to which CPU the page belonged at the time
the object was freed.
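
For reference, the SLUB part of struct page currently looks roughly
like this; the owner_cpu field is purely made up to illustrate where a
CPU number could go:

        union {
                unsigned long counters;         /* updated via cmpxchg_double */
                struct {
                        unsigned inuse:16;      /* objects in use */
                        unsigned objects:15;    /* total objects in the slab */
                        unsigned frozen:1;      /* slab is owned by a CPU */
                        /*
                         * The upper 32 bits of 'counters' are unused on
                         * 64-bit; hypothetically a CPU number could live
                         * here (exact bit placement depends on endianness
                         * and bitfield layout):
                         */
                        unsigned owner_cpu:32;
                };
        };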

> and send it an
> IPI to do some light version of unfreeze_partials() that would only remove empty
> slabs. The trick would be not to cause too many IPI's by this, obviously :/

Some brainstorming:

Maybe you could have an atomic counter in kmem_cache_cpu that tracks
the number of empty frozen pages that are associated with a specific
CPU? So the freeing slowpath would do its cmpxchg_double as usual, and
if the resulting state is "inuse == 0 && frozen == 1" with a valid CPU
number, it would afterwards do
"atomic_long_inc(&per_cpu_ptr(cache->cpu_slab,
cpu)->empty_partial_pages)". I think it should be possible to
implement that such that the empty_partial_pages count, while not
immediately completely accurate, would be eventually consistent; and
readers on the CPU owning the kmem_cache_cpu should never see a number
that is too large, only one that is too small.
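
In code, roughly (untested, names invented; real code would check the
local 'new' counters from the cmpxchg loop rather than re-reading the
page):

struct kmem_cache_cpu {
        /* ... existing fields (freelist, tid, page, partial, ...) ... */

        /* hypothetical: empty frozen slabs believed to belong to this CPU */
        atomic_long_t empty_partial_pages;
};

/* called from the __slab_free() slowpath after a successful cmpxchg_double */
static void note_empty_frozen_slab(struct kmem_cache *s, struct page *page,
                                   unsigned int owner_cpu)
{
        /* the new state is "frozen, but no objects in use anymore" */
        if (page->frozen && !page->inuse && owner_cpu < nr_cpu_ids)
                atomic_long_inc(&per_cpu_ptr(s->cpu_slab, owner_cpu)
                                        ->empty_partial_pages);
}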

You could additionally have a plain percpu counter, not tied to any
particular kmem_cache, and increment it by 1<<page_order; that would
track how many pages you could reclaim by sending an IPI to a given
CPU core. Comparing that counter against a threshold could then help
decide whether it's worth sending IPIs from SLUB and/or the shrinker?
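
Something like this, maybe (again untested, all names made up),
charged to the slab's owning CPU at the same spot as above:

/* hypothetical global per-cpu counter of reclaimable page frames */
static DEFINE_PER_CPU(atomic_long_t, slub_reclaimable_pages);

/* when a frozen slab becomes empty, charge its pages to the owning CPU */
static void account_reclaimable_slab(struct page *page, unsigned int owner_cpu)
{
        atomic_long_add(1 << compound_order(page),
                        per_cpu_ptr(&slub_reclaimable_pages, owner_cpu));
}

/* SLUB and/or the shrinker could then compare that against a threshold */
static bool worth_sending_ipi(int cpu, long threshold_pages)
{
        return atomic_long_read(per_cpu_ptr(&slub_reclaimable_pages, cpu))
                        >= threshold_pages;
}

The counter would of course have to be decremented again when
unfreeze_partials() (or whatever flushes the empty slabs) actually
releases them.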

