* [PATCH] mm/slub: improve count_partial() for CONFIG_SLUB_CPU_PARTIAL
@ 2020-02-22  9:24 Wen Yang
From: Wen Yang @ 2020-02-22  9:24 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Wen Yang, Roman Gushchin, Xunlei Pang, linux-mm, linux-kernel

In cloud server scenarios, reading "/proc/slabinfo" can possibly block
slab allocation on another CPU for a while, up to 200ms in extreme
cases. If the slab objects carry network packets targeting a far-end
disk array, this causes block I/O jitter issues.

This is because the list_lock, which protects the node partial list,
is taken while counting the free objects resident in that list. This
introduces lock contention whenever pages are moved between the CPU and
node partial lists in the allocation path on another CPU.
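
For illustration, both paths serialize on the same per-node lock. A
simplified sketch (paraphrased, not the exact mm/slub.c code):

	/* slabinfo reader: count_partial() walks the whole node partial list */
	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(page, &n->partial, slab_list)
		x += get_count(page);	/* O(list length), can be very long */
	spin_unlock_irqrestore(&n->list_lock, flags);

	/*
	 * The allocation slow path on another CPU (e.g. get_partial_node())
	 * also needs n->list_lock to take pages off the node partial list,
	 * so it spins for the entire walk above.
	 */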

We also observed that in this scenario CONFIG_SLUB_CPU_PARTIAL is turned
on by default, and count_partial() is of little use anyway: free objects
sitting on the per-CPU partial lists are never visited, so the returned
number is far from reality.

Therefore, we can simply return 0: nr_free is then also 0, and
eventually active_objs == num_objs. This does not introduce any
regression, and it is preferable to show an admittedly unrealistic
uniform 100% slab utilization rather than a very high but still
incorrect value.
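
For reference, this is roughly how get_slabinfo() in mm/slub.c consumes
the value (paraphrased, not the verbatim code of this kernel version):

	for_each_kmem_cache_node(s, node, n) {
		nr_slabs += node_nr_slabs(n);
		nr_objs  += node_nr_objs(n);
		nr_free  += count_partial(n, count_free); /* 0 after this patch */
	}
	sinfo->active_objs = nr_objs - nr_free;	/* == nr_objs when nr_free is 0 */
	sinfo->num_objs    = nr_objs;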

Co-developed-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Wen Yang <wenyang@linux.alibaba.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Xunlei Pang <xlpang@linux.alibaba.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 17dc00e..d5b7230 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2411,14 +2411,16 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
 static unsigned long count_partial(struct kmem_cache_node *n,
 					int (*get_count)(struct page *))
 {
-	unsigned long flags;
 	unsigned long x = 0;
+#ifndef CONFIG_SLUB_CPU_PARTIAL
+	unsigned long flags;
 	struct page *page;
 
 	spin_lock_irqsave(&n->list_lock, flags);
 	list_for_each_entry(page, &n->partial, slab_list)
 		x += get_count(page);
 	spin_unlock_irqrestore(&n->list_lock, flags);
+#endif
 	return x;
 }
 #endif /* CONFIG_SLUB_DEBUG || CONFIG_SYSFS */
-- 
1.8.3.1

