* [RFC,PATCH] mutex: mutex_is_owner() helper
From: Eric Dumazet @ 2009-11-04 15:25 UTC (permalink / raw)
  To: David S. Miller, Ingo Molnar; +Cc: Linux Netdev List, linux kernel

mutex_is_locked() is most of the time called to check whether a mutex is locked
by the current thread. But that is only a lazy check, because the mutex might in
fact be locked by another thread.

Add a new mutex_is_owned_by() helper that can check ownership when CONFIG_SMP or
CONFIG_DEBUG_MUTEXES is set.

Return values:
 0 if the mutex is unlocked,
 1 if it is locked,
-1 if it is locked, but not by the designated thread.

The last return value is possible only if CONFIG_SMP=y or CONFIG_DEBUG_MUTEXES=y.

Example of use :

int rtnl_is_locked(void)
{
	return mutex_is_locked(&rtnl_mutex);
}
->
int rtnl_is_locked(void)
{
	return mutex_is_owned_by(&rtnl_mutex, current_thread_info()) == 1;
}

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
 Documentation/mutex-design.txt |    1 +
 include/linux/mutex.h          |   20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/Documentation/mutex-design.txt b/Documentation/mutex-design.txt
index aa60d1f..521607c 100644
--- a/Documentation/mutex-design.txt
+++ b/Documentation/mutex-design.txt
@@ -133,6 +133,7 @@ the APIs of 'struct mutex' have been streamlined:
  int  mutex_trylock(struct mutex *lock);
  void mutex_unlock(struct mutex *lock);
  int  mutex_is_locked(struct mutex *lock);
+ int  mutex_is_owned_by(struct mutex *lock, struct thread_info *ti);
  void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
  int  mutex_lock_interruptible_nested(struct mutex *lock,
                                       unsigned int subclass);
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 878cab4..95a8c5b 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -118,6 +118,26 @@ static inline int mutex_is_locked(struct mutex *lock)
 	return atomic_read(&lock->count) != 1;
 }
 
+/**
+ * mutex_is_owned_by - check mutex ownership
+ * @lock: the mutex to be queried
+ * @ti: thread_info pointer
+ *
+ * Returns: 0 if mutex is unlocked.
+ *          1 if locked
+ *         -1 if not locked by designated thread.
+ */
+static inline int mutex_is_owned_by(struct mutex *lock, struct thread_info *ti)
+{
+	if (atomic_read(&lock->count) == 1)
+		return 0;
+#if defined(CONFIG_DEBUG_MUTEXES) || defined(CONFIG_SMP)
+	if (lock->owner != ti)
+		return -1;
+#endif
+	return 1;
+}
+
 /*
  * See kernel/mutex.c for detailed documentation of these APIs.
  * Also see Documentation/mutex-design.txt.


* Re: [RFC,PATCH] mutex: mutex_is_owner() helper
From: Ingo Molnar @ 2009-11-04 15:40 UTC (permalink / raw)
  To: Eric Dumazet, Peter Zijlstra, Linus Torvalds
  Cc: David S. Miller, Linux Netdev List, linux kernel


* Eric Dumazet <eric.dumazet@gmail.com> wrote:

> mutex_is_locked() is called most of the time to check if mutex is locked by current
> thread. But it's a lazy check, because mutex might be locked by another thread.
> 
> Adds a new mutex_is_owned_by() helper, that can check ownership if CONFIG_SMP or
> CONFIG_DEBUG_MUTEXES are set.
> 
> Returns are 
>  0 if mutex is unlocked.
>  1 if locked
> -1 if not locked by designated thread.
> 
> Last return value is possible only if CONFIG_SMP=y or CONFIG_DEBUG_MUTEXES=y
> 
> Example of use :
> 
> int rtnl_is_locked(void)
> {
> 	return mutex_is_locked(&rtnl_mutex);
> }
> ->
> int rtnl_is_locked(void)
> {
> 	return mutex_is_owned_by(&rtnl_mutex, current_thread_info()) == 1;
> }

To make sure this does not extend mutexes to be used as recursive
mutexes, mind naming it more clearly, like debug_mutex_is_owned(), and
adding a comment that says that this shouldn't be taken?

Also, it's somewhat imprecise: on !SMP && !DEBUG_MUTEXES we might return
a false '1', which happens to work for the rtnl use case - but might not
in other cases.
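
For reference, a small illustration of that imprecision (the caller is
hypothetical and assumes the patch above is applied):

/*
 * struct mutex only carries an ->owner field when CONFIG_DEBUG_MUTEXES
 * or CONFIG_SMP is set, so on !SMP && !DEBUG_MUTEXES the #if block in
 * mutex_is_owned_by() compiles away and the helper can only report
 * locked/unlocked, exactly like mutex_is_locked().
 */
#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/sched.h>

static void check_owner(struct mutex *m)
{
	/* On UP without mutex debugging this branch is taken whenever the
	 * mutex is locked at all, even if another task holds it. */
	if (mutex_is_owned_by(m, current_thread_info()) == 1)
		pr_debug("mutex assumed to be held by this thread\n");
}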

	Ingo


* Re: [RFC,PATCH] mutex: mutex_is_owner() helper
From: Eric Dumazet @ 2009-11-04 17:19 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Linus Torvalds, David S. Miller,
	Linux Netdev List, linux kernel

Ingo Molnar wrote:

> To make sure this does not extend mutexes to be used a recursive 
> mutexes, mind naming it more clearly, like debug_mutex_is_owned(), and 
> adding a comment that says that this shouldnt be taken?
> 
> Also, it's somewhat imprecise: on !SMP && !DEBUG_MUTEXES we might return 
> a false '1'. Which happens to work for the rtnl usecase - but might not 
> in other cases.
>

Sure, we can choose another name, but what do you mean by a false '1'?

1 means the mutex is locked and that we could not check ownership
(best effort, i.e. the same imprecise result as mutex_is_locked()).


BTW, I was thinking of a mutex_yield() implementation, but could not cook
one up without some hard thinking - maybe you already have a nice implementation?

We have some uses of "mutex_unlock(); mutex_lock();" sequences that do not
work nicely, because the current thread immediately takes the mutex again.


A true mutex_yield() would force the current thread to go to the end of the wait_list.

int mutex_yield(struct mutex *lock)
{
	struct mutex_waiter waiter;	/* our own wait_list entry, as in __mutex_lock_common() */
	unsigned long flags;

	/* OK to test the list without locking */
	if (list_empty(&lock->wait_list))
		return 0;

	spin_lock_mutex(&lock->wait_lock, flags);
	if (!list_empty(&lock->wait_list)) {
		atomic_xchg(&lock->count, 1);			/* free the mutex */
		list_add_tail(&waiter.list, &lock->wait_list);	/* queue ourself at the tail of the wait_list */
		/* ... wake the head of the wait_list ... */
		__mutex_lock_common_condadd(lock, TASK_UNINTERRUPTIBLE, DONT_ADD_TAIL, ...);	/* hypothetical: sleep without being re-queued */
	} else {
		spin_unlock_mutex(&lock->wait_lock, flags);
	}
	return 1;
}


Or maybe we should try something less complex (slowpath anyway)

int mutex_yield(struct mutex *lock)
{
	int ret = 0;

	/* mutex_needbreak() would be a new primitive, ala spin_needbreak() */
	if (mutex_needbreak(lock) || should_resched()) {
		mutex_unlock(lock);
		__cond_resched();
		mutex_lock(lock);
		ret = 1;
	}
	return ret;
}

Thanks


* Re: [RFC,PATCH] mutex: mutex_is_owner() helper
From: Peter Zijlstra @ 2009-11-09 18:56 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Ingo Molnar, Linus Torvalds, David S. Miller, Linux Netdev List,
	linux kernel, Thomas Gleixner

On Wed, 2009-11-04 at 18:19 +0100, Eric Dumazet wrote:
> 
> BTW, I was thinking of a mutex_yield() implementation, but could not
> cook it without hard thinking, maybe you already have some nice
> implementation ?

Why? Yield sets off alarm bells, since 99.9%, and possibly more, of its
uses are wrong.

> int mutex_yield(struct mutex *lock)
> {
>         int ret = 0;
> 
>         if (mutex_needbreak(lock) || should_resched()) {
>                 mutex_unlock(lock); 
>                 __cond_resched();
>                 mutex_lock(lock);
>                 ret = 1;
>         }
>         return ret;
> } 

That reads like it should be called cond_resched_mutex(), except that
the should_resched() thing seems daft (but maybe it makes sense for
silly preemption modes like voluntary).

IIRC we actually have something similar in -rt in order to implement the
lock-break for the rt-mutex based spinlocks: we set ->needbreak when a
higher-priority task contends -- a policy for regular mutexes might be
'interesting' though.

As to your 'debug' helper that started this thread, doesn't
lockdep_assert_held() work for you?
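
For reference, a minimal sketch of that approach (the mutex and function
below are made up; the check is only active with lockdep/CONFIG_PROVE_LOCKING
enabled):

#include <linux/lockdep.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_mutex);	/* illustrative lock */

static void example_locked_work(void)
{
	/* warns if the current task does not hold example_mutex */
	lockdep_assert_held(&example_mutex);
	/* ... work that must run under example_mutex ... */
}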



* Re: [RFC,PATCH] mutex: mutex_is_owner() helper
From: Eric Dumazet @ 2009-11-09 23:21 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Linus Torvalds, David S. Miller, Linux Netdev List,
	linux kernel, Thomas Gleixner

Peter Zijlstra wrote:
> On Wed, 2009-11-04 at 18:19 +0100, Eric Dumazet wrote:
>> BTW, I was thinking of a mutex_yield() implementation, but could not
>> cook it without hard thinking, maybe you already have some nice
>> implementation ?
> 
> Why? Yield sets off alarm bells, since 99.9%, and possibly more, of its
> uses are wrong.

If I remember well, I had problems doing "modprobe dummy numdummies=30000",
because it creates 30000 netdevices, and thanks to hotplug this starts 30000
udev instances that all wait for my modprobe to finish... Nice to see the
load average getting so big, by the way :)

I tried the following patch without success, because rtnl_unlock()/rtnl_lock()
is too fast (the awakened process(es) have no chance to get the lock, as we
take it again immediately after releasing it):

diff --git a/drivers/net/dummy.c b/drivers/net/dummy.c
index 37dcfdc..3de1466 100644
--- a/drivers/net/dummy.c
+++ b/drivers/net/dummy.c
@@ -138,8 +138,11 @@ static int __init dummy_init_module(void)
 	rtnl_lock();
 	err = __rtnl_link_register(&dummy_link_ops);
 
-	for (i = 0; i < numdummies && !err; i++)
+	for (i = 0; i < numdummies && !err; i++) {
 		err = dummy_init_one();
+		rtnl_unlock();
+		rtnl_lock();
+	}
 	if (err < 0)
 		__rtnl_link_unregister(&dummy_link_ops);
 	rtnl_unlock();



Then I added a msleep(1) between the unlock/lock and got better results.



diff --git a/drivers/net/dummy.c b/drivers/net/dummy.c
index 37dcfdc..108c4fa 100644
--- a/drivers/net/dummy.c
+++ b/drivers/net/dummy.c
@@ -138,8 +138,12 @@ static int __init dummy_init_module(void)
 	rtnl_lock();
 	err = __rtnl_link_register(&dummy_link_ops);
 
-	for (i = 0; i < numdummies && !err; i++)
+	for (i = 0; i < numdummies && !err; i++) {
 		err = dummy_init_one();
+		rtnl_unlock();
+		msleep(1);
+		rtnl_lock();
+	}
 	if (err < 0)
 		__rtnl_link_unregister(&dummy_link_ops);
 	rtnl_unlock();

But if hotplug is disabled, this forces a useless msleep(1) * 30000 -> this is a bit slow.

Yes, this code is stupid, but I use it to stress the network stack
with an insane number of devices, to spot scalability problems.


mutex_yield() could help in this situation.

The mutex is said to be FIFO, but it's not exactly true: a newcomer can take
the mutex even if 10000 threads are waiting on it...
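
(The stealing comes from the lock fastpath - paraphrased below from
include/asm-generic/mutex-dec.h - which only looks at ->count and never
at the wait_list:)

/* Paraphrase of the generic mutex lock fastpath: a newcomer that sees
 * count == 1 takes the mutex with one atomic decrement, so it can
 * overtake tasks that were woken from the slowpath but not yet run. */
static inline void
__mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
	if (unlikely(atomic_dec_return(count) < 0))
		fail_fn(count);		/* contended: fall into the slowpath */
}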

I won't mention other problems, like mutex_{try}lock() having no timedwait
variant, and funny code doing:

if (!rtnl_trylock())
	return restart_syscall();

Making 30000 processes run and fight to get the mutex :(



* Re: [RFC,PATCH] mutex: mutex_is_owner() helper
From: Peter Zijlstra @ 2009-11-10  9:41 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Ingo Molnar, Linus Torvalds, David S. Miller, Linux Netdev List,
	linux kernel, Thomas Gleixner

On Tue, 2009-11-10 at 00:21 +0100, Eric Dumazet wrote:
> Peter Zijlstra wrote:
> > On Wed, 2009-11-04 at 18:19 +0100, Eric Dumazet wrote:
> >> BTW, I was thinking of a mutex_yield() implementation, but could not
> >> cook it without hard thinking, maybe you already have some nice
> >> implementation ?
> > 
> > Why? Yield sets off alarm bells, since 99.9%, and possibly more, of its
> > uses are wrong.
> 
> If I remember well, I had problems doing "modprobe dummy numdummies=30000",
> because it creates 30000 netdevices, and thanks to hotplug starts 30000 udev
> that all wait that my modprobe is finished... Nice to see load average going
> so big by the way :)

lol :-) With a bit of luck udev will spawn a python interpreter for each
of those things too..

> I tried following patch without success, because rtnl_unlock()/rtnl_lock()
> is too fast (awaken process(es) ha(s/ve) no chance to get the lock, as we
> take it immediately after releasing it)

Right, due to lock-stealing.

> diff --git a/drivers/net/dummy.c b/drivers/net/dummy.c
> index 37dcfdc..108c4fa 100644
> --- a/drivers/net/dummy.c
> +++ b/drivers/net/dummy.c
> @@ -138,8 +138,12 @@ static int __init dummy_init_module(void)
>  	rtnl_lock();
>  	err = __rtnl_link_register(&dummy_link_ops);
>  
> -	for (i = 0; i < numdummies && !err; i++)
> +	for (i = 0; i < numdummies && !err; i++) {
>  		err = dummy_init_one();
> +		rtnl_unlock();
> +		msleep(1);
> +		rtnl_lock();
> +	}
>  	if (err < 0)
>  		__rtnl_link_unregister(&dummy_link_ops);
>  	rtnl_unlock();
> 
> But if hotplug is disabled, this force a useless msleep(1) * 30000 -> this is bit slow
> 
> Yes, this code is stupid, but I use it to stress network stack
> with insane number of devices, to spot scalability problems.

Right...

> mutex_yield() could help in this situation.

Agreed, except I don't like the name, but I could be tainted from
sched_yield().

> mutex is said to be FIFO, but its not exactly true : A new comer can take the mutex
> even if 10000 threads are waiting on mutex...

Yep, lock-stealing, you don't want to see the regression reports if you
'fix' that :-)

> I wont mention other problems, because mutex_{try}lock() has no timedwait variant

Nobody needed it, I guess... also, I never quite understood the need for
timedwait: either you need to get the work done or you don't, not maybe.

Use mutex_lock_interruptible() and set a timer or something.

> , and funny code doing :
> 
> if (!rtnl_trylock())
> 	return restart_syscall();
> 
> Making 30000 processes running/fighting to get the mutex :(

Funny definition of funny ;-) That's some seriously fugly code there.
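
A minimal sketch of the mutex_lock_interruptible() route mentioned above
(the helper and its -ERESTARTSYS convention are illustrative, not an
existing rtnl API):

#include <linux/errno.h>
#include <linux/mutex.h>

/* Sleep on the mutex and let a signal interrupt the wait, instead of
 * trylock + restart_syscall(), which keeps every contender runnable. */
static int example_lock_interruptible(struct mutex *lock)
{
	if (mutex_lock_interruptible(lock))
		return -ERESTARTSYS;	/* interrupted by a signal */
	return 0;			/* lock acquired, caller must unlock */
}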



