From: David Rientjes <rientjes@google.com>
To: Julian Pidancet <julian.pidancet@oracle.com>
Cc: Christoph Lameter <cl@linux.com>,
	"Lameter, Christopher" <cl@os.amperecomputing.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	linux-mm@kvack.org,
	Jonathan Corbet <corbet@lwn.net>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Matthew Wilcox <willy@infradead.org>,
	Kees Cook <keescook@chromium.org>,
	Rafael Aquini <aquini@redhat.com>
Subject: Re: [PATCH v2] mm/slub: disable slab merging in the default configuration
Date: Mon, 3 Jul 2023 13:17:53 -0700 (PDT)
Message-ID: <8813897d-4a52-37a0-fe44-a9157716be9b@google.com>
In-Reply-To: <CTSGWINSM18Q.3HQ1DN27GNA1R@imme>

On Mon, 3 Jul 2023, Julian Pidancet wrote:

> On Mon Jul 3, 2023 at 02:09, David Rientjes wrote:
> > I think we need more data beyond just kernbench.  Christoph's point about 
> > different page sizes is interesting.  In the above results, I don't know 
> > the page orders for the various slab caches that this workload will 
> > stress.  I think the memory overhead data may be different depending on 
> > how slab_max_order is being used, if at all.
> >
> > We should be able to run this through a variety of different benchmarks 
> > and measure peak slab usage at the same time for due diligence.  I support 
> > the change in the default; I would just prefer to know what its 
> > implications are.
> >
> > Is it possible to collect data for other microbenchmarks and real-world 
> > workloads?  And perhaps also with different page sizes where this will 
> > impact memory overhead more?  I can help run more workloads once we 
> > have the next set of data.
> >
> 
> David,
> 
> I agree about the need to perform those tests on hardware using larger
> pages. I will collect data if I have the chance to get my hands on one
> of these systems.
> 

Thanks.  I think arm64 should suffice for testing things like the 64KB 
pages that Christoph was referring to.

We may also want to play around with slub_min_order on the kernel command 
line, since that inflates the size of slab pages and we may see different 
results because of the larger slabs.
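
As a concrete example of what I mean (only a sketch; the order value is
arbitrary and the script just assumes a SLUB kernel with sysfs mounted):
boot with something like slub_min_order=3, which forces every cache onto
at least order-3 slabs (32KB with 4KB base pages), and read the resulting
orders back from SLUB's sysfs directory:

#!/usr/bin/env python3
# Print the page order and objects-per-slab of every SLUB cache, as
# exposed under /sys/kernel/slab (SLUB only, sysfs must be mounted).
import glob
import os

for cache in sorted(glob.glob("/sys/kernel/slab/*")):
    try:
        with open(os.path.join(cache, "order")) as f:
            order = f.read().strip()
        with open(os.path.join(cache, "objs_per_slab")) as f:
            objs = f.read().strip()
    except OSError:
        continue  # skip entries that don't expose these attributes
    print(f"{os.path.basename(cache):30} order={order} objs_per_slab={objs}")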

> Do you have specific tests or workloads in mind? Compiling the kernel
> with files sitting on an XFS partition is not exhaustive, but it is the
> only test I could think of that is both easy to set up and reproducible
> while keeping external interference to a minimum.
> 

The ones that Binder, cc'd, used to evaluate SLAB vs SLUB memory overhead:

hackbench
netperf
redis
specjbb2015
unixbench
will-it-scale

And Vlastimil had also suggested a few XFS specific benchmarks.

I can try to help run benchmarks that you're not able to run or if you 
can't get your hands on an arm64 system.
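
On the "measure peak slab usage at the same time" point, one rough way 
to do it (just a sketch; the one-second interval and looking only at the 
Slab: total are arbitrary choices) is to sample /proc/meminfo alongside 
the benchmark and keep the maximum:

#!/usr/bin/env python3
# Sample the Slab: line of /proc/meminfo once per second and report
# the peak value observed, in kB.  Interrupt with Ctrl-C when done.
import time

peak = 0
try:
    while True:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("Slab:"):
                    peak = max(peak, int(line.split()[1]))
        time.sleep(1)
except KeyboardInterrupt:
    pass
print(f"peak Slab usage: {peak} kB")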

Additionally, I wouldn't consider this to be super urgent: slab cache 
merging has worked this way for several years, so we have some time to 
assess the implications of changing an important aspect of kernel memory 
allocation that will affect everybody.  I agree with the patch if we can 
make it work; I'd just like to study its effects more fully than a few 
kernbench runs.
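
FWIW, for anyone who wants to see what merging does on their current 
kernel before we change the default: merged caches show up as sysfs 
aliases, so a quick sketch like the following lists them (SLUB only; 
the in-tree slabinfo tool's -a option reports the same information):

#!/usr/bin/env python3
# List SLUB cache names that are currently merged (aliased): merged
# entries appear as symlinks under /sys/kernel/slab that point at the
# shared cache they were folded into.
import os

base = "/sys/kernel/slab"
for name in sorted(os.listdir(base)):
    path = os.path.join(base, name)
    if os.path.islink(path):
        print(f"{name} -> {os.path.basename(os.readlink(path))}")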


Thread overview: 12+ messages
2023-06-29 22:19 [PATCH v2] mm/slub: disable slab merging in the default configuration Julian Pidancet
2023-07-03  0:09 ` David Rientjes
2023-07-03 10:33   ` Julian Pidancet
2023-07-03 18:38     ` Kees Cook
2023-07-03 20:17     ` David Rientjes [this message]
2023-07-06  7:38       ` David Rientjes
2023-07-09  8:55         ` David Rientjes
2023-07-10  2:40           ` David Rientjes
2023-07-18 12:08             ` Julian Pidancet
2023-07-25 23:25               ` David Rientjes
2023-07-26  8:34                 ` Vlastimil Babka
2023-07-10 14:56       ` Vlastimil Babka
