[patch] mm, slab: avoid high-order slab pages when it does not reduce waste
From: David Rientjes @ 2018-10-12 21:24 UTC
  To: Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton
  Cc: linux-mm, linux-kernel

The slab allocator has a heuristic that checks whether the internal
fragmentation is satisfactory and, if not, increases cachep->gfporder to
try to improve this.
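
For reference, the heuristic lives in calculate_slab_order(); a simplified
sketch of that loop (bookkeeping and OFF_SLAB details omitted) looks roughly
like this:

	for (gfporder = 0; gfporder <= KMALLOC_MAX_ORDER; gfporder++) {
		num = cache_estimate(gfporder, size, flags, &remainder);
		if (!num)
			continue;
		...
		/* Found something acceptable, remember it. */
		cachep->num = num;
		cachep->gfporder = gfporder;
		left_over = remainder;

		if (gfporder >= slab_max_order)
			break;

		/* Acceptable internal fragmentation? */
		if (left_over * 8 <= (PAGE_SIZE << gfporder))
			break;
	}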

If the amount of waste is the same at higher cachep->gfporder values,
there is no significant benefit to allocating higher-order memory.  There
will be fewer calls to the page allocator, but each call will require
taking zone->lock and finding the page of best fit from the per-zone free
areas.

Instead, it is better to allocate order-0 memory if possible so that pages
can be returned from the per-cpu pagesets (pcp).
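
As a rough worked example, take a hypothetical 1184-byte object on 4KB
pages with slab_max_order == 1 (the usual default), ignoring the on-slab
freelist overhead that the real cache_estimate() also accounts for:

	order 0: 4096 / 1184 = 3 objects,  544 bytes left over
	order 1: 8192 / 1184 = 6 objects, 1088 bytes left over

Since 544 * 8 > 4096, the existing 1/8 heuristic alone would move on to
gfporder 1; but because the higher order does not reduce the absolute
waste, the check added below keeps gfporder at 0 so the cache can be grown
from order-0 pages on the pcp.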

There are two reasons to prefer this over allocating high-order memory:

 - allocating from the pcp lists does not require a per-zone lock, and

 - this reduces stranding of MIGRATE_UNMOVABLE pageblocks on pcp lists;
   such stranding increases slab fragmentation across a zone.

We are particularly interested in the second point: we want to eliminate
cases where all other pages in a pageblock are movable (or free), and
falling back to pageblocks of other migratetypes from the per-zone free
areas causes high-order slab memory to be allocated from them rather than
from the free MIGRATE_UNMOVABLE pages sitting on the pcp.

Signed-off-by: David Rientjes <rientjes@google.com>
---
 mm/slab.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/mm/slab.c b/mm/slab.c
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1748,6 +1748,7 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
 	for (gfporder = 0; gfporder <= KMALLOC_MAX_ORDER; gfporder++) {
 		unsigned int num;
 		size_t remainder;
+		int order;
 
 		num = cache_estimate(gfporder, size, flags, &remainder);
 		if (!num)
@@ -1803,6 +1804,20 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
 		 */
 		if (left_over * 8 <= (PAGE_SIZE << gfporder))
 			break;
+
+		/*
+		 * If a higher gfporder would not reduce internal fragmentation,
+		 * no need to continue.  The preference is to keep gfporder as
+		 * small as possible so slab allocations can be served from
+		 * MIGRATE_UNMOVABLE pcp lists to avoid stranding.
+		 */
+		for (order = gfporder + 1; order <= slab_max_order; order++) {
+			cache_estimate(order, size, flags, &remainder);
+			if (remainder < left_over)
+				break;
+		}
+		if (order > slab_max_order)
+			break;
 	}
 	return left_over;
 }


Thread overview: 10+ messages
2018-10-12 21:24 [patch] mm, slab: avoid high-order slab pages when it does not reduce waste David Rientjes
2018-10-12 22:13 ` Andrew Morton
2018-10-12 23:09   ` David Rientjes
2018-10-15 22:41   ` Christopher Lameter
2018-10-16  0:39     ` David Rientjes
2018-10-16 15:17       ` Christopher Lameter
2018-10-17  9:09         ` Vlastimil Babka
2018-10-17 15:38           ` Christopher Lameter
2018-10-15 22:42 ` Christopher Lameter
2018-10-16  8:42 ` Vlastimil Babka
