Subject: [merged] mm-slub-let-number-of-online-cpus-determine-the-slub-page-order.patch removed from -mm tree
From: akpm @ 2020-12-15 23:16 UTC
  To: aneesh.kumar, bharata, cl, guro, hannes, iamjoonsoo.kim,
	mm-commits, rientjes, shakeelb, vbabka


The patch titled
     Subject: mm/slub: let number of online CPUs determine the slub page order
has been removed from the -mm tree.  Its filename was
     mm-slub-let-number-of-online-cpus-determine-the-slub-page-order.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Bharata B Rao <bharata@linux.ibm.com>
Subject: mm/slub: let number of online CPUs determine the slub page order

The page order of the slab that gets chosen for a given slab cache
depends on the number of objects that can fit in the slab while meeting
other requirements.  We start with a minimum-objects value derived from
nr_cpu_ids, which is driven by the number of possible CPUs and hence can
be higher than the number of CPUs actually present in the system.  This
leads calculate_order() to choose a page order on the higher side,
increasing slab memory consumption on systems that have bigger page
sizes.

Hence rely on the number of online CPUs when determining the minimum
objects, thereby increasing the chances of choosing a lower, more
conservative page order for the slab.
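
To see the effect of the change, consider a minimal userspace sketch of
the heuristic (the CPU counts are hypothetical, and fls() is
reimplemented here since the kernel bitop is not available in
userspace):

#include <stdio.h>

/* fls(): index of the most significant set bit, counting from 1 (0 for 0). */
static int fls(unsigned int x)
{
	int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

int main(void)
{
	unsigned int possible_cpus = 1024;	/* hypothetical nr_cpu_ids */
	unsigned int online_cpus = 16;		/* hypothetical num_online_cpus() */

	/* Old heuristic: 4 * (fls(1024) + 1) = 4 * 12 = 48 objects minimum. */
	printf("min_objects (possible): %d\n", 4 * (fls(possible_cpus) + 1));
	/* New heuristic: 4 * (fls(16) + 1) = 4 * 6 = 24 objects minimum. */
	printf("min_objects (online):   %d\n", 4 * (fls(online_cpus) + 1));
	return 0;
}

A lower min_objects lets calculate_order() settle on a smaller page
order, which matters most on systems with larger base page sizes.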

Vlastimil:

: Ideally, we would react to hotplug events and update existing caches 
: accordingly. But for that, recalculation of order for existing caches 
: would have to be made safe, while not affecting hot paths. We have 
: removed the sysfs interface with 32a6f409b693 ("mm, slub: remove runtime 
: allocation order changes") as it didn't seem easy and worth the trouble.
: 
: In case somebody wants to start with a large order right from the boot 
: because they know they will hotplug lots of cpus later, they can use 
: slub_min_objects= boot param to override this heuristic. So in case this 
: change regresses somebody's performance, there's a way around it and 
: thus the risk is low IMHO.
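
For example (the value here is hypothetical), booting with

	slub_min_objects=64

on the kernel command line sets min_objects directly, bypassing the
online-CPU heuristic and hence fixing the resulting page order
independently of how many CPUs are online at boot.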

Link: https://lkml.kernel.org/r/20201118082759.1413056-1-bharata@linux.ibm.com
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Roman Gushchin <guro@fb.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/slub.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/slub.c~mm-slub-let-number-of-online-cpus-determine-the-slub-page-order
+++ a/mm/slub.c
@@ -3431,7 +3431,7 @@ static inline int calculate_order(unsign
 	 */
 	min_objects = slub_min_objects;
 	if (!min_objects)
-		min_objects = 4 * (fls(nr_cpu_ids) + 1);
+		min_objects = 4 * (fls(num_online_cpus()) + 1);
 	max_objects = order_objects(slub_max_order, size);
 	min_objects = min(min_objects, max_objects);
 
_

Patches currently in -mm which might be from bharata@linux.ibm.com are


