* [PATCH] mm: slub: remove preemption disabling from put_cpu_partial
From: Muchun Song @ 2021-08-11 11:19 UTC
To: cl, penberg, rientjes, iamjoonsoo.kim, akpm, vbabka
Cc: linux-mm, linux-kernel, Muchun Song
Commit d6e0b7fa1186 ("slub: make dead caches discard free slabs
immediately") introduced this logic to speed up the destruction of
per-memcg kmem caches, because at that time kmem caches created for a
memory cgroup were only destroyed after the last page charged to the
cgroup was freed. But since commit 9855609bde03 ("mm: memcg/slab: use
a single set of kmem_caches for all accounted allocations"), we no
longer have per-memcg kmem caches. Is this code pointless now? No:
kmem_cache->cpu_partial can still be set to zero via 'echo 0 > /sys/
kernel/slab/*/cpu_partial'. In that case, a slab page is first put
onto the cpu partial list and then immediately moved to the node list
(because slub_cpu_partial() returns zero). However, we can skip the
cpu partial list entirely and move the page to the node list directly.
Change the kmem_cache_has_cpu_partial() check in __slab_free() to
slub_cpu_partial() and remove the now-unneeded code from
put_cpu_partial() to simplify it.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
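For reviewers' reference, the two helpers this patch swaps look
roughly like the sketch below under CONFIG_SLUB_CPU_PARTIAL (a
paraphrase of the mm/slub.c and include/linux/slub_def.h definitions,
not a verbatim copy):

/*
 * kmem_cache_has_cpu_partial() is true for every non-debug cache,
 * even when the cpu_partial knob has been set to zero ...
 */
static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
{
	return !kmem_cache_debug(s);
}

/*
 * ... whereas slub_cpu_partial() reflects the knob itself, so it
 * evaluates to 0 after 'echo 0 > /sys/kernel/slab/*/cpu_partial'.
 */
#define slub_cpu_partial(s)	((s)->cpu_partial)

So with the knob at zero, the old check in __slab_free() still froze
the page and bounced it through put_cpu_partial(), which then had to
unfreeze it again immediately; the new check sends such pages straight
to the node partial list.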
mm/slub.c | 23 +++--------------------
1 file changed, 3 insertions(+), 20 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index b6c5205252eb..69c8ada322a0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2438,7 +2438,6 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
int pages;
int pobjects;
- preempt_disable();
do {
pages = 0;
pobjects = 0;
@@ -2470,16 +2469,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
page->pobjects = pobjects;
page->next = oldpage;
- } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
- != oldpage);
- if (unlikely(!slub_cpu_partial(s))) {
- unsigned long flags;
-
- local_irq_save(flags);
- unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
- local_irq_restore(flags);
- }
- preempt_enable();
+ } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
#endif /* CONFIG_SLUB_CPU_PARTIAL */
}
@@ -3059,9 +3049,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
was_frozen = new.frozen;
new.inuse -= cnt;
if ((!new.inuse || !prior) && !was_frozen) {
-
- if (kmem_cache_has_cpu_partial(s) && !prior) {
-
+ if (slub_cpu_partial(s) && !prior) {
/*
* Slab was on no list before and will be
* partially empty
@@ -3069,9 +3057,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
* freeze it.
*/
new.frozen = 1;
-
} else { /* Needs to be taken off a list */
-
n = get_node(s, page_to_nid(page));
/*
* Speculatively acquire the list_lock.
@@ -3082,17 +3068,14 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
* other processors updating the list of slabs.
*/
spin_lock_irqsave(&n->list_lock, flags);
-
}
}
-
} while (!cmpxchg_double_slab(s, page,
prior, counters,
head, new.counters,
"__slab_free"));
if (likely(!n)) {
-
if (likely(was_frozen)) {
/*
* The list lock was not taken therefore no list
@@ -3118,7 +3101,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
* Objects left in the slab. If it was not on the partial list before
* then add it.
*/
- if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) {
+ if (unlikely(!prior)) {
remove_full(s, n, page);
add_partial(n, page, DEACTIVATE_TO_TAIL);
stat(s, FREE_ADD_PARTIAL);
--
2.11.0
* Re: [PATCH] mm: slub: remove preemption disabling from put_cpu_partial
From: Vlastimil Babka @ 2021-08-11 12:40 UTC
To: Muchun Song, cl, penberg, rientjes, iamjoonsoo.kim, akpm
Cc: linux-mm, linux-kernel
On 8/11/21 1:19 PM, Muchun Song wrote:
> Commit d6e0b7fa1186 ("slub: make dead caches discard free slabs
> immediately") introduced this logic to speed up the destruction of
> per-memcg kmem caches, because at that time kmem caches created for a
> memory cgroup were only destroyed after the last page charged to the
> cgroup was freed. But since commit 9855609bde03 ("mm: memcg/slab: use
> a single set of kmem_caches for all accounted allocations"), we no
> longer have per-memcg kmem caches. Is this code pointless now? No:
> kmem_cache->cpu_partial can still be set to zero via 'echo 0 > /sys/
> kernel/slab/*/cpu_partial'. In that case, a slab page is first put
> onto the cpu partial list and then immediately moved to the node list
> (because slub_cpu_partial() returns zero). However, we can skip the
> cpu partial list entirely and move the page to the node list directly.
> Change the kmem_cache_has_cpu_partial() check in __slab_free() to
> slub_cpu_partial() and remove the now-unneeded code from
> put_cpu_partial() to simplify it.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Please check again whether this still applies to current mmotm/next;
I think it shouldn't anymore. Thanks.
> ---
> mm/slub.c | 23 +++--------------------
> 1 file changed, 3 insertions(+), 20 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index b6c5205252eb..69c8ada322a0 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2438,7 +2438,6 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
> int pages;
> int pobjects;
>
> - preempt_disable();
> do {
> pages = 0;
> pobjects = 0;
> @@ -2470,16 +2469,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
> page->pobjects = pobjects;
> page->next = oldpage;
>
> - } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
> - != oldpage);
> - if (unlikely(!slub_cpu_partial(s))) {
> - unsigned long flags;
> -
> - local_irq_save(flags);
> - unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
> - local_irq_restore(flags);
> - }
> - preempt_enable();
> + } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);
> #endif /* CONFIG_SLUB_CPU_PARTIAL */
> }
>
> @@ -3059,9 +3049,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
> was_frozen = new.frozen;
> new.inuse -= cnt;
> if ((!new.inuse || !prior) && !was_frozen) {
> -
> - if (kmem_cache_has_cpu_partial(s) && !prior) {
> -
> + if (slub_cpu_partial(s) && !prior) {
> /*
> * Slab was on no list before and will be
> * partially empty
> @@ -3069,9 +3057,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
> * freeze it.
> */
> new.frozen = 1;
> -
> } else { /* Needs to be taken off a list */
> -
> n = get_node(s, page_to_nid(page));
> /*
> * Speculatively acquire the list_lock.
> @@ -3082,17 +3068,14 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
> * other processors updating the list of slabs.
> */
> spin_lock_irqsave(&n->list_lock, flags);
> -
> }
> }
> -
> } while (!cmpxchg_double_slab(s, page,
> prior, counters,
> head, new.counters,
> "__slab_free"));
>
> if (likely(!n)) {
> -
> if (likely(was_frozen)) {
> /*
> * The list lock was not taken therefore no list
> @@ -3118,7 +3101,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
> * Objects left in the slab. If it was not on the partial list before
> * then add it.
> */
> - if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) {
> + if (unlikely(!prior)) {
> remove_full(s, n, page);
> add_partial(n, page, DEACTIVATE_TO_TAIL);
> stat(s, FREE_ADD_PARTIAL);
>
* Re: [PATCH] mm: slub: remove preemption disabling from put_cpu_partial
From: Muchun Song @ 2021-08-11 14:49 UTC
To: Vlastimil Babka
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton, Linux Memory Management List, LKML
On Wed, Aug 11, 2021 at 8:40 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 8/11/21 1:19 PM, Muchun Song wrote:
> > Commit d6e0b7fa1186 ("slub: make dead caches discard free slabs
> > immediately") introduced this logic to speed up the destruction of
> > per-memcg kmem caches, because at that time kmem caches created for a
> > memory cgroup were only destroyed after the last page charged to the
> > cgroup was freed. But since commit 9855609bde03 ("mm: memcg/slab: use
> > a single set of kmem_caches for all accounted allocations"), we no
> > longer have per-memcg kmem caches. Is this code pointless now? No:
> > kmem_cache->cpu_partial can still be set to zero via 'echo 0 > /sys/
> > kernel/slab/*/cpu_partial'. In that case, a slab page is first put
> > onto the cpu partial list and then immediately moved to the node list
> > (because slub_cpu_partial() returns zero). However, we can skip the
> > cpu partial list entirely and move the page to the node list directly.
> > Change the kmem_cache_has_cpu_partial() check in __slab_free() to
> > slub_cpu_partial() and remove the now-unneeded code from
> > put_cpu_partial() to simplify it.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> Please check again whether this still applies to current mmotm/next;
> I think it shouldn't anymore. Thanks.
>
You are right. I hadn't seen it before; I guess it was merged
recently. Thanks for the reminder.