From: David Rientjes <rientjes@google.com>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Rongwei Wang <rongwei.wang@linux.alibaba.com>,
	akpm@linux-foundation.org, vbabka@suse.cz,
	roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com,
	penberg@kernel.org, cl@linux.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free
Date: Mon, 30 May 2022 14:14:46 -0700 (PDT)	[thread overview]
Message-ID: <ac9ba68f-9ee2-1611-9ff8-b486ed9c4df0@google.com> (raw)
In-Reply-To: <YpNa4tB/jfW3MDyi@n2.us-central1-a.c.spheric-algebra-350919.internal>

On Sun, 29 May 2022, Hyeonggon Yoo wrote:

> > diff --git a/mm/slub.c b/mm/slub.c
> > index ed5c2c03a47a..310e56d99116 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1374,15 +1374,12 @@ static noinline int free_debug_processing(
> >  	void *head, void *tail, int bulk_cnt,
> >  	unsigned long addr)
> >  {
> > -	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> >  	void *object = head;
> >  	int cnt = 0;
> > -	unsigned long flags, flags2;
> > +	unsigned long flags;
> >  	int ret = 0;
> >  
> > -	spin_lock_irqsave(&n->list_lock, flags);
> > -	slab_lock(slab, &flags2);
> > -
> > +	slab_lock(slab, &flags);
> >  	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> >  		if (!check_slab(s, slab))
> >  			goto out;
> > @@ -1414,8 +1411,7 @@ static noinline int free_debug_processing(
> >  		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
> >  			 bulk_cnt, cnt);
> >  
> > -	slab_unlock(slab, &flags2);
> > -	spin_unlock_irqrestore(&n->list_lock, flags);
> > +	slab_unlock(slab, &flags);
> >  	if (!ret)
> >  		slab_fix(s, "Object at 0x%p not freed", object);
> >  	return ret;
> > @@ -3304,7 +3300,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
> >  
> >  {
> >  	void *prior;
> > -	int was_frozen;
> > +	int was_frozen, to_take_off = 0;
> >  	struct slab new;
> >  	unsigned long counters;
> >  	struct kmem_cache_node *n = NULL;
> > @@ -3315,15 +3311,19 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
> >  	if (kfence_free(head))
> >  		return;
> >  
> > +	n = get_node(s, slab_nid(slab));
> > +	spin_lock_irqsave(&n->list_lock, flags);
> > +
> 
> Oh please don't do this.
> 
> SLUB's free slowpath can be hit a lot depending on the workload.
> 
> __slab_free() tries its best not to take n->list_lock; currently it
> takes n->list_lock only when the slab needs to be taken off a list.
> 
> Taking n->list_lock unconditionally will degrade performance.
> 

This is a good point; it would be useful to gather some benchmarks for 
workloads that are known to thrash some caches and would hit this path, 
such as netperf TCP_RR.


Thread overview: 30+ messages
2022-05-29  8:15 [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free Rongwei Wang
2022-05-29  8:15 ` [PATCH 2/3] mm/slub: improve consistency of nr_slabs count Rongwei Wang
2022-05-29 12:26   ` Hyeonggon Yoo
2022-05-29  8:15 ` [PATCH 3/3] mm/slub: add nr_full count for debugging slub Rongwei Wang
2022-05-29 11:37 ` [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free Hyeonggon Yoo
2022-05-30 21:14   ` David Rientjes [this message]
2022-06-02 15:14     ` Christoph Lameter
2022-06-03  3:35       ` Rongwei Wang
2022-06-07 12:14         ` Christoph Lameter
2022-06-08  3:04           ` Rongwei Wang
2022-06-08 12:23             ` Christoph Lameter
2022-06-11  4:04               ` Rongwei Wang
2022-06-13 13:50                 ` Christoph Lameter
2022-06-14  2:38                   ` Rongwei Wang
2022-06-17  7:55                   ` Rongwei Wang
2022-06-17 14:19                     ` Christoph Lameter
2022-06-18  2:33                       ` Rongwei Wang
2022-06-20 11:57                         ` Christoph Lameter
2022-06-26 16:48                           ` Rongwei Wang
2022-06-17  9:40               ` Vlastimil Babka
2022-07-15  8:05                 ` Rongwei Wang
2022-07-15 10:33                   ` Vlastimil Babka
2022-07-15 10:51                     ` Rongwei Wang
2022-05-31  3:47   ` Muchun Song
2022-06-04 11:05     ` Hyeonggon Yoo
2022-05-31  8:50   ` Rongwei Wang
2022-07-18 11:09 ` Vlastimil Babka
2022-07-19 14:15   ` Rongwei Wang
2022-07-19 14:21     ` Vlastimil Babka
2022-07-19 14:43       ` Rongwei Wang
