* [PATCH RFC] slub: do not drop slab_mutex for sysfs_slab_{add,remove}
From: Vladimir Davydov @ 2014-02-06 15:58 UTC
  To: cl, penberg; +Cc: akpm, rientjes, mhocko, linux-kernel, linux-mm, devel

When creating/destroying a kmem cache, we do a lot of work holding the
slab_mutex, but we drop it for sysfs_slab_{add,remove} for some reason.
Since __kmem_cache_create and __kmem_cache_shutdown are extremely rare,
I propose to simplify locking by calling sysfs_slab_{add,remove} w/o
dropping the slab_mutex.

I'm interested in this, because when creating a memcg cache I need the
slab_mutex locked until the cache is fully initialized and registered to
the memcg subsys (memcg_cache_register() is called). If this is not
true, I get races when several threads try to create a cache for the
same memcg.  An alternative fix for my problem would be moving
sysfs_slab_{add,remove} after the slab_mutex is dropped, but I'd like to
try the shortest path first.
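
To make the ordering concrete, here is a minimal sketch of how I want the
memcg cache creation path to behave. memcg_cache_lookup() and do_create()
are placeholder names made up for this sketch; memcg_cache_register() is
the real registration hook mentioned above:

static struct kmem_cache *create_memcg_cache(struct mem_cgroup *memcg,
					     struct kmem_cache *root)
{
	struct kmem_cache *s;

	mutex_lock(&slab_mutex);
	s = memcg_cache_lookup(memcg, root);	/* already created? */
	if (!s) {
		/*
		 * __kmem_cache_create() + sysfs_slab_add(), all done
		 * under slab_mutex with this patch applied
		 */
		s = do_create(memcg, root);
		if (s)
			memcg_cache_register(memcg, s);	/* now visible */
	}
	mutex_unlock(&slab_mutex);
	return s;
}

If slab_mutex were dropped around sysfs_slab_add(), a second thread could
pass the lookup above before the first thread registers its cache, and we
would end up with two caches for the same memcg.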

Any objections to this?

Thanks.
---
 mm/slub.c |   15 +--------------
 1 file changed, 1 insertion(+), 14 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3d3a8a7a0f8c..6f4393892d2d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3229,19 +3229,8 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 {
 	int rc = kmem_cache_close(s);
 
-	if (!rc) {
-		/*
-		 * We do the same lock strategy around sysfs_slab_add, see
-		 * __kmem_cache_create. Because this is pretty much the last
-		 * operation we do and the lock will be released shortly after
-		 * that in slab_common.c, we could just move sysfs_slab_remove
-		 * to a later point in common code. We should do that when we
-		 * have a common sysfs framework for all allocators.
-		 */
-		mutex_unlock(&slab_mutex);
+	if (!rc)
 		sysfs_slab_remove(s);
-		mutex_lock(&slab_mutex);
-	}
 
 	return rc;
 }
@@ -3772,9 +3761,7 @@ int __kmem_cache_create(struct kmem_cache *s, unsigned long flags)
 		return 0;
 
 	memcg_propagate_slab_attrs(s);
-	mutex_unlock(&slab_mutex);
 	err = sysfs_slab_add(s);
-	mutex_lock(&slab_mutex);
 
 	if (err)
 		kmem_cache_close(s);
-- 
1.7.10.4


* Re: [PATCH RFC] slub: do not drop slab_mutex for sysfs_slab_{add,remove}
From: Christoph Lameter @ 2014-02-06 16:22 UTC
  To: Vladimir Davydov
  Cc: penberg, akpm, rientjes, mhocko, linux-kernel, linux-mm, devel

On Thu, 6 Feb 2014, Vladimir Davydov wrote:

> When creating/destroying a kmem cache, we do a lot of work holding the
> slab_mutex, but we drop it for sysfs_slab_{add,remove} for some reason.
> Since __kmem_cache_create and __kmem_cache_shutdown are extremely rare,
> I propose to simplify locking by calling sysfs_slab_{add,remove} w/o
> dropping the slab_mutex.

The problem is that sysfs does nasty things like spawning a process in
user space that may lead to something wanting to create slabs too. A
module loaded that way may then hang waiting on the lock ...
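
Roughly the shape of the hang, modelled in user space with pthreads purely
for illustration (none of this is kernel code; the helper thread stands in
for whatever the uevent ends up triggering in user space):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t slab_mutex = PTHREAD_MUTEX_INITIALIZER;

/* stands in for a module load triggered from user space whose init
 * wants to create a slab cache of its own */
static void *usermode_helper(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&slab_mutex);	/* needs the "slab_mutex" too */
	puts("helper: created its cache");
	pthread_mutex_unlock(&slab_mutex);
	return NULL;
}

int main(void)
{
	pthread_t helper;

	pthread_mutex_lock(&slab_mutex);	/* kmem_cache_create() path */
	pthread_create(&helper, NULL, usermode_helper, NULL);
	/* waiting for the helper while still holding the lock:
	 * this join never returns */
	pthread_join(&helper, NULL);
	pthread_mutex_unlock(&slab_mutex);
	return 0;
}

The join here stands in for any point where the cache creation path ends up
waiting on work that itself needs slab_mutex.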

I would be very thankful if you can get that actually working reliably
without deadlock issues.


* Re: [PATCH RFC] slub: do not drop slab_mutex for sysfs_slab_{add,remove}
From: Vladimir Davydov @ 2014-02-06 18:06 UTC
  To: Christoph Lameter
  Cc: penberg, akpm, rientjes, mhocko, linux-kernel, linux-mm, devel

On 02/06/2014 08:22 PM, Christoph Lameter wrote:
> On Thu, 6 Feb 2014, Vladimir Davydov wrote:
>
>> When creating/destroying a kmem cache, we do a lot of work holding the
>> slab_mutex, but we drop it for sysfs_slab_{add,remove} for some reason.
>> Since __kmem_cache_create and __kmem_cache_shutdown are extremely rare,
>> I propose to simplify locking by calling sysfs_slab_{add,remove} w/o
>> dropping the slab_mutex.
> The problem is that sysfs does nasty things like spawning a process in
> user space that may lead to something wanting to create slabs too. A
> module loaded that way may then hang waiting on the lock ...

Hmm... IIUC the only function of concern is kobject_uevent() -
everything else called from sysfs_slab_{add,remove} is a mix of kmalloc,
kfree, mutex_lock/unlock - in short, nothing dangerous. There we do
call_usermodehelper(), but we do it with UMH_WAIT_EXEC, which means
"wait for exec only, but not for the process to complete". An exec
shouldn't issue any slab-related stuff AFAIU. At least, I tried to run
the patched kernel with lockdep enabled and got no warnings at all when
getting uevents about adding/removing caches. That's why I started to
doubt whether we really need to drop the lock here...
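
For reference, the helper invocation boils down to roughly this (a
simplified sketch, not the actual kobject_uevent() code; argv/envp setup
omitted):

static int uevent_helper_sketch(char **argv, char **envp)
{
	/*
	 * UMH_WAIT_EXEC: return once the helper has exec'd, do not wait
	 * for the helper process to complete.
	 */
	return call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
}

So even if the helper eventually needs slab_mutex, we do not wait for it to
complete, which is why I would not expect a deadlock here.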

Please correct me if I'm wrong.

> I would be very thankful if you can get that actually working reliably
> without deadlock issues.

If there is no choice other than moving sysfs_slab_{add,remove} out of
the slab_mutex critical section, I'll have to do it that way. But first
I'd like to make sure it cannot be done with a smaller footprint.
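
For completeness, that alternative would look roughly like this in
kmem_cache_create() in slab_common.c (a hand-written sketch only;
create_cache() is a placeholder name, and error handling plus the details
of hooking the sysfs call into the common code are left out):

struct kmem_cache *
kmem_cache_create_sketch(const char *name, size_t size, size_t align,
			 unsigned long flags, void (*ctor)(void *))
{
	struct kmem_cache *s;

	mutex_lock(&slab_mutex);
	s = create_cache(name, size, align, flags, ctor);	/* no sysfs work in here */
	mutex_unlock(&slab_mutex);

	if (s)
		sysfs_slab_add(s);	/* done only after slab_mutex is dropped */
	return s;
}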

Thanks.


* Re: [PATCH RFC] slub: do not drop slab_mutex for sysfs_slab_{add,remove}
From: Christoph Lameter @ 2014-02-06 19:13 UTC
  To: Vladimir Davydov
  Cc: penberg, akpm, rientjes, mhocko, linux-kernel, linux-mm, devel

On Thu, 6 Feb 2014, Vladimir Davydov wrote:

> Hmm... IIUC the only function of concern is kobject_uevent() -
> everything else called from sysfs_slab_{add,remove} is a mix of kmalloc,
> kfree, mutex_lock/unlock - in short, nothing dangerous. There we do
> call_usermodehelper(), but we do it with UMH_WAIT_EXEC, which means
> "wait for exec only, but not for the process to complete". An exec
> shouldn't issue any slab-related stuff AFAIU. At least, I tried to run
> the patched kernel with lockdep enabled and got no warnings at all when
> getting uevents about adding/removing caches. That's why I started to
> doubt whether we really need to drop the lock here...
>
> Please correct me if I'm wrong.

I ran into this deadlock a couple of years ago. Sysfs seems to change over
time, so I am not sure whether that is still the case.

> > I would be very thankful if you can get that actually working reliably
> > without deadlock issues.
>
> If there is no choice other than moving sysfs_slab_{add,remove} out of
> the slab_mutex critical section, I'll have to do it that way. But first
> I'd like to make sure it cannot be done with a smaller footprint.

I am all for holding the lock as long as possible.


Thread overview: 4 messages
2014-02-06 15:58 [PATCH RFC] slub: do not drop slab_mutex for sysfs_slab_{add,remove} Vladimir Davydov
2014-02-06 16:22 ` Christoph Lameter
2014-02-06 18:06   ` Vladimir Davydov
2014-02-06 19:13     ` Christoph Lameter
