From: Vlastimil Babka <vbabka@suse.cz>
To: Binder Makin <merimus@google.com>
Cc: lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-block@vger.kernel.org,
	bpf@vger.kernel.org, linux-xfs@vger.kernel.org,
	David Rientjes <rientjes@google.com>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Roman Gushchin <roman.gushchin@linux.dev>
Subject: Re: [LSF/MM/BPF TOPIC] SLOB+SLAB allocators removal and future SLUB improvements
Date: Tue, 4 Apr 2023 18:03:07 +0200
Message-ID: <951d364a-05c0-b290-8abe-7cbfcaeb2df7@suse.cz>
In-Reply-To: <CAANmLtwGS75WJ9AXfmqZv73pNdHJn6zfrrCCWjKK_6jPk9pWRg@mail.gmail.com>

On 3/22/23 13:30, Binder Makin wrote:
> Was looking at SLAB removal and started by running A/B tests of SLAB
> vs SLUB.  Please note these are only preliminary results.

Thanks, that's very useful.

> These were run using 6.1.13 configured for SLAB/SLUB.
> Machines were standard datacenter servers.
> 
> Hackbench shows completion time, so smaller is better.
> On all others larger is better.
> https://docs.google.com/spreadsheets/d/e/2PACX-1vQ47Mekl8BOp3ekCefwL6wL8SQiv6Qvp5avkU2ssQSh41gntjivE-aKM4PkwzkC4N_s_MxUdcsokhhz/pubhtml
> 
> Some notes:
> SUnreclaim and SReclaimable show unreclaimable and reclaimable slab memory.
> Substantially higher with SLUB, but I believe that is to be expected.
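
For reference, both counters come straight from /proc/meminfo; a minimal
userspace sketch (an illustration only, not part of the benchmark harness)
that prints them:

#include <stdio.h>
#include <string.h>

/* Print the slab counters from /proc/meminfo that the results refer to. */
int main(void)
{
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f)
                return 1;

        while (fgets(line, sizeof(line), f)) {
                if (!strncmp(line, "SReclaimable:", 13) ||
                    !strncmp(line, "SUnreclaim:", 11))
                        fputs(line, stdout);
        }
        fclose(f);
        return 0;
}

SReclaimable counts slab memory the kernel can reclaim under memory pressure
(e.g. dentries), SUnreclaim the rest.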
> 
> Various results show a 5-10% degradation with SLUB.  That feels
> concerning to me, but I'm not sure what others' tolerance would be.
> 
> redis results on AMD show some pretty bad degradations, in the 10-20% range.
> netpipe on Intel also has issues: 10-17%.

I guess one question is which of these are genuine SLAB/SLUB differences and
which are e.g. an artifact of different cache layout. For example, it seems
suspicious when results differ widely between architectures.

E.g. will-it-scale writeseek3_scalability regresses on arm64 and amd, but
improves on intel? Or is something wrong with the data? All columns for that
whole benchmark suite are identical.

hackbench ("smaller is better") seems drastically better on arm64 (30%
median time reduction?) and amd (80% reduction?!?), but 10% slower intel?

redis seems a bit improved on arm64, slightly worse on intel, but much worse
on amd.

specjbb is a similar story. Also, I thought it was a java-focused benchmark -
should it really be exercising the kernel slab allocators in such a notable way?

I guess netpipe is the least surprising, as networking was always mentioned
in SLAB vs SLUB discussions.

> On Tue, Mar 14, 2023 at 4:05 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>>
>> As you're probably aware, my plan is to get rid of SLOB and SLAB, leaving
>> only SLUB going forward. The removal of SLOB seems to be going well; there
>> were no objections to the deprecation, and I've posted v1 of the removal
>> itself [1], so it could be in -next soon.
>>
>> The immediate benefit of that is that we can allow kfree() (and kfree_rcu())
>> to free objects allocated with kmem_cache_alloc() - something that IIRC at
>> least the xfs people wanted in the past, and which SLOB was incompatible with.
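
To make that concrete, here is a minimal kernel-style sketch of the pattern
this enables (hypothetical foo_cache and struct foo, illustration only, not
code from the patch series):

#include <linux/slab.h>

struct foo {
        int a;
};

static struct kmem_cache *foo_cache;

static int example(void)
{
        struct foo *p;

        foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
                                      0, 0, NULL);
        if (!foo_cache)
                return -ENOMEM;

        p = kmem_cache_alloc(foo_cache, GFP_KERNEL);
        if (!p) {
                kmem_cache_destroy(foo_cache);
                return -ENOMEM;
        }

        /*
         * With SLOB gone, an object from kmem_cache_alloc() may be
         * freed with plain kfree(), not only kmem_cache_free().
         */
        kfree(p);

        kmem_cache_destroy(foo_cache);
        return 0;
}

With SLOB, that kfree() would have to be kmem_cache_free(foo_cache, p).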
>>
>> For SLAB removal I haven't heard any objections yet (but I also haven't
>> deprecated it yet). If there are users whose particular workloads do better
>> with SLAB than SLUB, we can discuss why those would regress and what can be
>> done about that in SLUB.
>>
>> Once we have just one slab allocator in the kernel, we can take a closer
>> look at what users are missing from it that forces them to create their own
>> allocators (e.g. BPF), and consider adding that as a generic implementation
>> in SLUB.
>>
>> Thanks,
>> Vlastimil
>>
>> [1] https://lore.kernel.org/all/20230310103210.22372-1-vbabka@suse.cz/
>>

