* [PATCH v2] mm/slab_common: Deleting kobject in kmem_cache_destroy() without holding slab_mutex/cpu_hotplug_lock
From: Waiman Long @ 2022-08-10 16:49 UTC
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo
  Cc: linux-mm, linux-kernel, Waiman Long

Lockdep reports a circular locking problem arising from the following
dependency cycle:

  +--> cpu_hotplug_lock --> slab_mutex --> kn->active --+
  |                                                     |
  +-----------------------------------------------------+

The forward cpu_hotplug_lock ==> slab_mutex ==> kn->active dependency
happens in

  kmem_cache_destroy(): cpus_read_lock(); mutex_lock(&slab_mutex);
  ==> sysfs_slab_unlink()
      ==> kobject_del()
          ==> kernfs_remove()
              ==> __kernfs_remove()
                  ==> kernfs_drain(): rwsem_acquire(&kn->dep_map, ...);

The backward kn->active ==> cpu_hotplug_lock dependency happens in

  kernfs_fop_write_iter(): kernfs_get_active();
  ==> slab_attr_store()
      ==> cpu_partial_store()
          ==> flush_all(): cpus_read_lock()
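
Since cpu_hotplug_lock is a percpu rwsem whose pending writers block
new readers, the two chains only turn into an actual deadlock when a
CPU hotplug writer is also in flight; lockdep flags the cycle without
it ever having to occur. A hypothetical interleaving, for illustration
only:

  destroyer (A)               sysfs writer (B)        hotplug (C)
  cpus_read_lock()
                              kernfs_get_active(kn)
                                                      cpus_write_lock()
                                                        waits for A
                              flush_all():
                                cpus_read_lock()
                                  blocked behind C
  kernfs_drain():
    waits for B's kn->active

A waits for B, B waits for C, and C waits for A.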

One way to break this locking cycle is to avoid holding
cpu_hotplug_lock and slab_mutex while deleting the kobject in
sysfs_slab_unlink(), an operation that is roughly equivalent to a
write_lock/write_unlock pair on the kn->active virtual lock.

Since the kobject structures are not protected by slab_mutex or
cpu_hotplug_lock, we can safely release those locks before performing
the delete operation.

Move sysfs_slab_unlink() and sysfs_slab_release() into the newly
created kmem_cache_release() and call it outside the slab_mutex and
cpu_hotplug_lock critical sections.
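
Condensed from the diff below, the resulting flow in
kmem_cache_destroy() looks roughly like this (a sketch, not the
literal code):

  void kmem_cache_destroy(struct kmem_cache *s)
  {
  	cpus_read_lock();
  	mutex_lock(&slab_mutex);
  	refcnt = --s->refcount;
  	if (!refcnt)
  		shutdown_cache(s);
  	mutex_unlock(&slab_mutex);
  	cpus_read_unlock();
  	if (!refcnt)
  		kmem_cache_release(s);	/* kobject deletion, no locks held */
  }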

Signed-off-by: Waiman Long <longman@redhat.com>
---
 [v2] Break kmem_cache_release() helper into 2 separate ones.

 mm/slab_common.c | 54 +++++++++++++++++++++++++++++++++---------------
 1 file changed, 37 insertions(+), 17 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 17996649cfe3..7742d0446d8b 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -392,6 +392,36 @@ kmem_cache_create(const char *name, unsigned int size, unsigned int align,
 }
 EXPORT_SYMBOL(kmem_cache_create);
 
+#ifdef SLAB_SUPPORTS_SYSFS
+static void kmem_cache_workfn_release(struct kmem_cache *s)
+{
+	sysfs_slab_release(s);
+}
+#else
+static void kmem_cache_workfn_release(struct kmem_cache *s)
+{
+	slab_kmem_cache_release(s);
+}
+#endif
+
+/*
+ * For a given kmem_cache, kmem_cache_destroy() should only be called
+ * once or there will be a use-after-free problem. The actual deletion
+ * and release of the kobject does not need slab_mutex or cpu_hotplug_lock
+ * protection. So they are now done without holding those locks.
+ */
+static void kmem_cache_release(struct kmem_cache *s)
+{
+#ifdef SLAB_SUPPORTS_SYSFS
+	sysfs_slab_unlink(s);
+#endif
+
+	if (s->flags & SLAB_TYPESAFE_BY_RCU)
+		schedule_work(&slab_caches_to_rcu_destroy_work);
+	else
+		kmem_cache_workfn_release(s);
+}
+
 static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
 {
 	LIST_HEAD(to_destroy);
@@ -418,11 +448,7 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
 	list_for_each_entry_safe(s, s2, &to_destroy, list) {
 		debugfs_slab_release(s);
 		kfence_shutdown_cache(s);
-#ifdef SLAB_SUPPORTS_SYSFS
-		sysfs_slab_release(s);
-#else
-		slab_kmem_cache_release(s);
-#endif
+		kmem_cache_workfn_release(s);
 	}
 }
 
@@ -437,20 +463,10 @@ static int shutdown_cache(struct kmem_cache *s)
 	list_del(&s->list);
 
 	if (s->flags & SLAB_TYPESAFE_BY_RCU) {
-#ifdef SLAB_SUPPORTS_SYSFS
-		sysfs_slab_unlink(s);
-#endif
 		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
-		schedule_work(&slab_caches_to_rcu_destroy_work);
 	} else {
 		kfence_shutdown_cache(s);
 		debugfs_slab_release(s);
-#ifdef SLAB_SUPPORTS_SYSFS
-		sysfs_slab_unlink(s);
-		sysfs_slab_release(s);
-#else
-		slab_kmem_cache_release(s);
-#endif
 	}
 
 	return 0;
@@ -465,14 +481,16 @@ void slab_kmem_cache_release(struct kmem_cache *s)
 
 void kmem_cache_destroy(struct kmem_cache *s)
 {
+	int refcnt;
+
 	if (unlikely(!s) || !kasan_check_byte(s))
 		return;
 
 	cpus_read_lock();
 	mutex_lock(&slab_mutex);
 
-	s->refcount--;
-	if (s->refcount)
+	refcnt = --s->refcount;
+	if (refcnt)
 		goto out_unlock;
 
 	WARN(shutdown_cache(s),
@@ -481,6 +499,8 @@ void kmem_cache_destroy(struct kmem_cache *s)
 out_unlock:
 	mutex_unlock(&slab_mutex);
 	cpus_read_unlock();
+	if (!refcnt)
+		kmem_cache_release(s);
 }
 EXPORT_SYMBOL(kmem_cache_destroy);
 
-- 
2.31.1



* Re: [PATCH v2] mm/slab_common: Deleting kobject in kmem_cache_destroy() without holding slab_mutex/cpu_hotplug_lock
From: Roman Gushchin @ 2022-08-10 18:10 UTC
  To: Waiman Long
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Hyeonggon Yoo, linux-mm,
	linux-kernel

On Wed, Aug 10, 2022 at 12:49:46PM -0400, Waiman Long wrote:
> [...]
> @@ -437,20 +463,10 @@ static int shutdown_cache(struct kmem_cache *s)
>  	list_del(&s->list);
>  
>  	if (s->flags & SLAB_TYPESAFE_BY_RCU) {
> -#ifdef SLAB_SUPPORTS_SYSFS
> -		sysfs_slab_unlink(s);
> -#endif
>  		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
> -		schedule_work(&slab_caches_to_rcu_destroy_work);

Hi Waiman!

This version is much more readable, thank you!

But can we, please, leave this schedule_work(&slab_caches_to_rcu_destroy_work)
call here? I don't see a good reason to move it, do I miss something?
It's nice to have list_add_tail() and schedule_work() calls nearby, so
it's obvious we can't miss the latter.

Thanks!


* Re: [PATCH v2] mm/slab_common: Deleting kobject in kmem_cache_destroy() without holding slab_mutex/cpu_hotplug_lock
From: Waiman Long @ 2022-08-10 18:27 UTC
  To: Roman Gushchin
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Hyeonggon Yoo, linux-mm,
	linux-kernel

On 8/10/22 14:10, Roman Gushchin wrote:
> On Wed, Aug 10, 2022 at 12:49:46PM -0400, Waiman Long wrote:
>> [...]
>> @@ -437,20 +463,10 @@ static int shutdown_cache(struct kmem_cache *s)
>>   	list_del(&s->list);
>>   
>>   	if (s->flags & SLAB_TYPESAFE_BY_RCU) {
>> -#ifdef SLAB_SUPPORTS_SYSFS
>> -		sysfs_slab_unlink(s);
>> -#endif
>>   		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
>> -		schedule_work(&slab_caches_to_rcu_destroy_work);
> Hi Waiman!
>
> This version is much more readable, thank you!
>
> But can we, please, leave this schedule_work(&slab_caches_to_rcu_destroy_work)
> call here? I don't see a good reason to move it, do I miss something?
> It's nice to have list_add_tail() and schedule_work() calls nearby, so
> it's obvious we can't miss the latter.

The reason I need to move schedule_work() out as well is to make sure 
that sysfs_slab_unlink() is called before sysfs_slab_release(). I 
can't guarantee that ordering if I do schedule_work() first. On the 
other hand, moving sysfs_slab_unlink() into kmem_cache_workfn_release() 
introduces an unknown delay before the sysfs file is removed. I can 
add a comment to make this clearer.
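
Roughly, the hazard if schedule_work() stayed where it was, running
before the unlink (illustrative interleaving, not an observed trace):

  kmem_cache_destroy()              kworker
  --------------------              -------
  list_add_tail(&s->list, ...);
  schedule_work(...);
                                    slab_caches_to_rcu_destroy_workfn()
                                      kmem_cache_workfn_release(s)
                                        sysfs_slab_release(s)  /* kobject gone */
  sysfs_slab_unlink(s)              /* unlink after release */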

Please let me know if you have a better idea for dealing with this issue.

Thanks,
Longman



* Re: [PATCH v2] mm/slab_common: Deleting kobject in kmem_cache_destroy() without holding slab_mutex/cpu_hotplug_lock
From: Waiman Long @ 2022-08-10 18:45 UTC
  To: Roman Gushchin
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Hyeonggon Yoo, linux-mm,
	linux-kernel

On 8/10/22 14:27, Waiman Long wrote:
> On 8/10/22 14:10, Roman Gushchin wrote:
>> On Wed, Aug 10, 2022 at 12:49:46PM -0400, Waiman Long wrote:
>>> [...]
>>> @@ -437,20 +463,10 @@ static int shutdown_cache(struct kmem_cache *s)
>>>       list_del(&s->list);
>>>
>>>       if (s->flags & SLAB_TYPESAFE_BY_RCU) {
>>> -#ifdef SLAB_SUPPORTS_SYSFS
>>> -        sysfs_slab_unlink(s);
>>> -#endif
>>>           list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
>>> -        schedule_work(&slab_caches_to_rcu_destroy_work);
>> Hi Waiman!
>>
>> This version is much more readable, thank you!
>>
>> But can we, please, leave this 
>> schedule_work(&slab_caches_to_rcu_destroy_work)
>> call here? I don't see a good reason to move it, do I miss something?
>> It's nice to have list_add_tail() and schedule_work() calls nearby, so
>> it's obvious we can't miss the latter.
>
> The reason that I need to move out schedule_work() as well is to make 
> sure that sysfs_slab_unlink() is called before sysfs_slab_release(). I 
> can't guarantee that if I do schedule_work() first. On the other hand, 
> moving sysfs_slab_unlink() into kmem_cache_workfn_release() introduces 
> unknown delay of when the sysfs file will be removed. I can add some 
> comment to make it more clear.

OK, I just realized that the current patch doesn't provide the 
ordering guarantee either if another kmem_cache_destroy() is happening 
in parallel. I will have to push sysfs_slab_unlink() into 
kmem_cache_workfn_release() and tolerate some delay in the 
disappearance of the sysfs files. Then I can move schedule_work() back 
to after list_add_tail().
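
A rough sketch of that direction (hypothetical, describing the plan
above rather than the posted v2 code):

  static void kmem_cache_workfn_release(struct kmem_cache *s)
  {
  #ifdef SLAB_SUPPORTS_SYSFS
  	/* unlink may now lag kmem_cache_destroy() by a workqueue delay */
  	sysfs_slab_unlink(s);
  	sysfs_slab_release(s);
  #else
  	slab_kmem_cache_release(s);
  #endif
  }

with schedule_work() restored to right after list_add_tail() in
shutdown_cache().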

Cheers,
Longman


