From: Muchun Song <songmuchun@bytedance.com>
To: cl@linux.com, penberg@kernel.org, rientjes@google.com,
	iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
	vbabka@suse.cz
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH] mm: slub: remove preemption disabling from put_cpu_partial
Date: Wed, 11 Aug 2021 19:19:21 +0800
Message-ID: <20210811111921.85999-1-songmuchun@bytedance.com>

Commit d6e0b7fa1186 ("slub: make dead caches discard free slabs
immediately") introduced this logic to speed up the destruction of
per-memcg kmem caches, because at that time a kmem cache created for
a memory cgroup was only destroyed after the last page charged to the
cgroup had been freed. But since commit 9855609bde03 ("mm: memcg/slab:
use a single set of kmem_caches for all accounted allocations"), we no
longer have per-memcg kmem caches. Is this code pointless now? Not
entirely: kmem_cache->cpu_partial can still be set to zero via
'echo 0 > /sys/kernel/slab/*/cpu_partial'. In that case the slab page
is first put onto the cpu partial list and then immediately moved to
the node list (because slub_cpu_partial() returns zero). However, we
can skip the cpu partial list and move the page to the node list
directly. Change the condition in __slab_free() from
kmem_cache_has_cpu_partial() to slub_cpu_partial() and remove the
corresponding code from put_cpu_partial() as a simplification. With
the unfreeze_partials() fallback gone, the only per-cpu access left in
put_cpu_partial() is this_cpu_cmpxchg(), which is already safe with
respect to preemption, so the explicit preempt_disable()/
preempt_enable() pair can be dropped as well.
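
For context (not part of the patch itself), the two predicates being
swapped differ in what they reflect: kmem_cache_has_cpu_partial()
only says whether the cache may ever use per-cpu partial lists, while
slub_cpu_partial() also follows the runtime value that sysfs can set
to zero. A rough sketch of their definitions in this kernel era,
paraphrased from mm/slub.c and include/linux/slub_def.h and meant as
illustration rather than as part of this change:

    /* mm/slub.c: compile-time/debug property only (sketch) */
    static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
    {
    #ifdef CONFIG_SLUB_CPU_PARTIAL
        return !kmem_cache_debug(s);
    #else
        return false;
    #endif
    }

    /* include/linux/slub_def.h: also reflects the runtime tunable (sketch) */
    #ifdef CONFIG_SLUB_CPU_PARTIAL
    #define slub_cpu_partial(s)    ((s)->cpu_partial)
    #else
    #define slub_cpu_partial(s)    (0)
    #endif

With cpu_partial set to zero, slub_cpu_partial(s) is zero and
__slab_free() now takes the "needs to be taken off a list" branch
directly instead of freezing the page and handing it to
put_cpu_partial().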

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/slub.c | 23 +++--------------------
 1 file changed, 3 insertions(+), 20 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index b6c5205252eb..69c8ada322a0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2438,7 +2438,6 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 	int pages;
 	int pobjects;
 
-	preempt_disable();
 	do {
 		pages = 0;
 		pobjects = 0;
@@ -2470,16 +2469,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		page->pobjects = pobjects;
 		page->next = oldpage;
 
-	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
-								!= oldpage);
-	if (unlikely(!slub_cpu_partial(s))) {
-		unsigned long flags;
-
-		local_irq_save(flags);
-		unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
-		local_irq_restore(flags);
-	}
-	preempt_enable();
+	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
 #endif	/* CONFIG_SLUB_CPU_PARTIAL */
 }
 
@@ -3059,9 +3049,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 		was_frozen = new.frozen;
 		new.inuse -= cnt;
 		if ((!new.inuse || !prior) && !was_frozen) {
-
-			if (kmem_cache_has_cpu_partial(s) && !prior) {
-
+			if (slub_cpu_partial(s) && !prior) {
 				/*
 				 * Slab was on no list before and will be
 				 * partially empty
@@ -3069,9 +3057,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 				 * freeze it.
 				 */
 				new.frozen = 1;
-
 			} else { /* Needs to be taken off a list */
-
 				n = get_node(s, page_to_nid(page));
 				/*
 				 * Speculatively acquire the list_lock.
@@ -3082,17 +3068,14 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 				 * other processors updating the list of slabs.
 				 */
 				spin_lock_irqsave(&n->list_lock, flags);
-
 			}
 		}
-
 	} while (!cmpxchg_double_slab(s, page,
 		prior, counters,
 		head, new.counters,
 		"__slab_free"));
 
 	if (likely(!n)) {
-
 		if (likely(was_frozen)) {
 			/*
 			 * The list lock was not taken therefore no list
@@ -3118,7 +3101,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 	 * Objects left in the slab. If it was not on the partial list before
 	 * then add it.
 	 */
-	if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) {
+	if (unlikely(!prior)) {
 		remove_full(s, n, page);
 		add_partial(n, page, DEACTIVATE_TO_TAIL);
 		stat(s, FREE_ADD_PARTIAL);
-- 
2.11.0

