linux-mm.kvack.org archive mirror
From: Jay Patel <jaypatel@linux.ibm.com>
To: Vlastimil Babka <vbabka@suse.cz>, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, cl@linux.com, penberg@kernel.org,
	rientjes@google.com, iamjoonsoo.kim@lge.com,
	akpm@linux-foundation.org, aneesh.kumar@linux.ibm.com,
	tsahu@linux.ibm.com, piyushs@linux.ibm.com
Subject: Re: [RFC PATCH v4] mm/slub: Optimize slub memory usage
Date: Thu, 24 Aug 2023 16:22:33 +0530	[thread overview]
Message-ID: <7fdf3f5dfd9fa1b5e210cc4176cac58a9992ecc0.camel@linux.ibm.com> (raw)
In-Reply-To: <d22badc1-27bc-51f7-5312-a07ec63c1144@suse.cz>

On Fri, 2023-08-11 at 17:43 +0200, Vlastimil Babka wrote:
> On 8/10/23 19:54, Hyeonggon Yoo wrote:
> > >                         order = calc_slab_order(size, min_objects,
> > >                                         slub_max_order, fraction);
> > > @@ -4159,14 +4164,6 @@ static inline int calculate_order(unsigned int size)
> > >                 min_objects--;
> > >         }
> > > -       /*
> > > -        * We were unable to place multiple objects in a slab. Now
> > > -        * lets see if we can place a single object there.
> > > -        */
> > > -       order = calc_slab_order(size, 1, slub_max_order, 1);
> > > -       if (order <= slub_max_order)
> > > -               return order;
> > 
> > I'm not sure if it's okay to remove this?
> > It was fine in v2 because the least wasteful order was chosen
> > regardless of fraction but that's not true anymore.
> > 
> > Otherwise, everything looks fine to me. I'm too dumb to anticipate
> > the outcome of increasing the slab order :P but this patch does not
> > sound crazy to me.
> 
> I wanted to have a better idea how the orders change so I hacked up a patch
> to print them for all sizes up to 1MB (unnecessarily large I guess) and also
> for various page sizes and nr_cpus (that's however rather invasive and prone
> to me missing some helper being used that still relies on real PAGE_SHIFT),
> then I applied v4 (needed some conflict fixups with my hack) on top:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-orders
> 
> As expected, things didn't change with 4k PAGE_SIZE. With 64k PAGE_SIZE, I
> thought the patch in v4 form would result in lower orders, but seems not always?
> 
> I.e. I can see before the patch:
> 
>  Calculated slab orders for page_shift 16 nr_cpus 1:
>           8       0
>        4376       1
> 
> (so until 4368 bytes it keeps order at 0)
> 
> And after:
>           8       0
>        2264       1
>        2272       0
>        2344       1
>        2352       0
>        2432       1
> 
> Not sure this kind of "oscillation" is helpful with a small machine (1 CPU),
> and 64kB pages so the unused part of page is quite small.
> 
Hi Vlastimil,

With the patch, fraction_size rises to 32 when a 64k page size is used.
As a result, the maximum wastage cap for each slab cache is 2k (64k
divided by 32), and any object size whose order-0 waste exceeds that cap
is pushed to order 1 or beyond, which is why this oscillation is seen.
 
> With 16 cpus, AFAICS the orders are also larger for some sizes.
> Hm but you reported reduction of total slab memory which suggests lower
> orders were selected somewhere, so maybe I did some mistake.

AFAIK total slab memory is reduced for two reasons (with this patch, on
larger page sizes):
1) The order of some slab caches is reduced (by increasing fraction_size).
2) I have also seen a reduction in the overall number of slabs, because
of the increased page order.

> 
> Anyway my point here is that this evaluation approach might be useful, even
> if it's a non-upstreamable hack, and some postprocessing of the output is
> needed for easier comparison of before/after, so feel free to try that out.

Thank you for this detailed test :)
> 
> BTW I'll be away for 2 weeks from now, so further feedback will have to come
> from others in that time...
> 
Do we have any additional feedback from others on this?

Thanks,

Jay Patel
> > Thanks!
> > --
> > Hyeonggon



Thread overview: 11+ messages
2023-07-20 10:23 [RFC PATCH v4] mm/slub: Optimize slub memory usage Jay Patel
2023-08-10 17:54 ` Hyeonggon Yoo
2023-08-11  6:52   ` Jay Patel
2023-08-18  5:11     ` Hyeonggon Yoo
2023-08-18  6:41       ` Jay Patel
2023-08-11 15:43   ` Vlastimil Babka
2023-08-24 10:52     ` Jay Patel [this message]
2023-09-07 13:42       ` Vlastimil Babka
2023-09-14  5:40         ` Jay Patel
2023-09-14  6:38           ` Vlastimil Babka
2023-09-14 12:43             ` Jay Patel
