From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
To: Christoph Lameter <cl@gentwo.de>
Cc: David Rientjes <rientjes@google.com>,
	songmuchun@bytedance.com, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	akpm@linux-foundation.org, vbabka@suse.cz,
	roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com,
	penberg@kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free
Date: Wed, 8 Jun 2022 11:04:56 +0800	[thread overview]
Message-ID: <29723aaa-5e28-51d3-7f87-9edf0f7b9c33@linux.alibaba.com> (raw)
In-Reply-To: <alpine.DEB.2.22.394.2206071411460.375438@gentwo.de>



On 6/7/22 8:14 PM, Christoph Lameter wrote:
> On Fri, 3 Jun 2022, Rongwei Wang wrote:
> 
>> Recently, I have also been looking for other ways to solve this. The test
>> case provided by Muchun is useful (thanks, Muchun!). Indeed, it seems that
>> using n->list_lock here is unwise. Actually, I'm not sure whether you
>> acknowledge the existence of this race? If everyone agrees the race exists,
>> the next question may be: do we want to fix it? Or, as David said, would it
>> be better to deprecate the validate attribute entirely? I have no strong
>> opinion, and hope to rely on your experience.
>>
>> In fact, I mainly want to collect your views on whether and how to fix
>> this bug. Thanks!
> 
> 
> Well validate_slab() is rarely used and should not cause the hot paths to
> incur performance penalties. Fix it in the validation logic somehow? Or
> document the issue and warn that validation may not be correct if there
> are current operations on the slab being validated.
If that is acceptable, I think documenting the issue and warning about the
incorrect behavior is fine. But wouldn't it still print a large amount of
confusing messages and disturb users?
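
For clarity, my understanding of the race (a schematic only, simplified; the
debug checks and the actual freelist update in __slab_free() are two separate
steps rather than one critical section):

   CPU A: validate_slab_cache()           CPU B: __slab_free()
     spin_lock_irqsave(&n->list_lock)       free_debug_processing()
     walk n->partial / n->full and            object checked, locks dropped
     re-check every object on each slab     cmpxchg_double_slab()
     ...                                      updates slab->freelist and
     observes the slab while CPU B is         counters, n->list_lock not
     between "checked" and "freed",           taken on this path
     so it reports inconsistencies
     that are not real corruption
     spin_unlock_irqrestore(&n->list_lock)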
And I am trying to fix it in the following way. In short, these changes only
take effect under slub debug mode, and should not affect the normal mode
(though I'm not sure). It does not look elegant enough. If everyone approves
of this approach, I can submit the next version.
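
The core idea, condensed (an untested sketch, not the patch itself; the full
diff follows below): when slub debug is on, take n->list_lock before
free_debug_processing() and keep holding it until the freelist update has
happened, so that validation can never observe a half-completed free:

        n = get_node(s, slab_nid(slab));
        if (kmem_cache_debug(s)) {
                /* Serialize the whole debug free path against
                 * validate_slab_cache(), which walks slabs under
                 * the same n->list_lock.
                 */
                spin_lock_irqsave(&n->list_lock, flags);
                if (!free_debug_processing(s, slab, head, tail, cnt, addr)) {
                        spin_unlock_irqrestore(&n->list_lock, flags);
                        return;
                }
        }

        /* ... the cmpxchg_double_slab() loop runs with the lock
         * still held in debug mode ...
         */

        if (kmem_cache_debug(s))
                spin_unlock_irqrestore(&n->list_lock, flags);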

Anyway, thanks for your time :).
-wrw

@@ -3304,7 +3300,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,

 {
        void *prior;
-       int was_frozen;
+       int was_frozen, to_take_off = 0;
        struct slab new;
        unsigned long counters;
        struct kmem_cache_node *n = NULL;
@@ -3315,14 +3311,23 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
        if (kfence_free(head))
                return;

-       if (kmem_cache_debug(s) &&
-           !free_debug_processing(s, slab, head, tail, cnt, addr))
-               return;
+       n = get_node(s, slab_nid(slab));
+       if (kmem_cache_debug(s)) {
+               int ret;

-       do {
-               if (unlikely(n)) {
+               spin_lock_irqsave(&n->list_lock, flags);
+               ret = free_debug_processing(s, slab, head, tail, cnt, addr);
+               if (!ret) {
                        spin_unlock_irqrestore(&n->list_lock, flags);
-                       n = NULL;
+                       return;
+               }
+       }
+
+       do {
+               if (unlikely(to_take_off)) {
+                       if (!kmem_cache_debug(s))
+                               spin_unlock_irqrestore(&n->list_lock, flags);
+                       to_take_off = 0;
                }
                prior = slab->freelist;
                counters = slab->counters;
@@ -3343,8 +3348,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
                                new.frozen = 1;

                        } else { /* Needs to be taken off a list */
-
-                               n = get_node(s, slab_nid(slab));
                                /*
                                 * Speculatively acquire the list_lock.
                                 * If the cmpxchg does not succeed then we may
@@ -3353,8 +3356,10 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
                                 * Otherwise the list_lock will synchronize with
                                 * other processors updating the list of slabs.
                                 */
-                               spin_lock_irqsave(&n->list_lock, flags);
+                               if (!kmem_cache_debug(s))
+                                       spin_lock_irqsave(&n->list_lock, flags);

+                               to_take_off = 1;
                        }
                }
@@ -3363,8 +3368,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
                head, new.counters,
                "__slab_free"));

-       if (likely(!n)) {
-
+       if (likely(!to_take_off)) {
+               if (kmem_cache_debug(s))
+                       spin_unlock_irqrestore(&n->list_lock, flags);
                if (likely(was_frozen)) {
                        /*
                         * The list lock was not taken therefore no list
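
(For reference, validation is triggered from user space through the cache's
sysfs attribute, e.g. "echo 1 > /sys/kernel/slab/kmalloc-64/validate", which
ends up in validate_slab_cache(); since that path is rare, paying the extra
locking cost there and in the debug-only free path, rather than in the normal
fast path, seems acceptable.)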