* [PATCH] sched: Simplify mutex_spin_on_owner()
@ 2011-06-10 13:08 Thomas Gleixner
  2011-06-11  1:04 ` Paul E. McKenney
  2011-07-01 15:16 ` [tip:sched/core] sched: " tip-bot for Thomas Gleixner
  0 siblings, 2 replies; 4+ messages in thread
From: Thomas Gleixner @ 2011-06-10 13:08 UTC (permalink / raw)
  To: LKML; +Cc: Peter Zijlstra, Ingo Molnar, Paul E. McKenney

It does not make sense to rcu_read_lock/unlock() in every loop
iteration while spinning on the mutex.

Move the rcu protection outside the loop. Also simplify the
return path to always check for lock->owner == NULL, which covers
both the owner-changed and the need_resched() loop exits.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/sched.c |   25 +++++++++----------------
 1 file changed, 9 insertions(+), 16 deletions(-)

Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -4306,11 +4306,8 @@ EXPORT_SYMBOL(schedule);
 
 static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
 {
-	bool ret = false;
-
-	rcu_read_lock();
 	if (lock->owner != owner)
-		goto fail;
+		return false;
 
 	/*
 	 * Ensure we emit the owner->on_cpu, dereference _after_ checking
@@ -4320,11 +4317,7 @@ static inline bool owner_running(struct 
 	 */
 	barrier();
 
-	ret = owner->on_cpu;
-fail:
-	rcu_read_unlock();
-
-	return ret;
+	return owner->on_cpu;
 }
 
 /*
@@ -4336,21 +4329,21 @@ int mutex_spin_on_owner(struct mutex *lo
 	if (!sched_feat(OWNER_SPIN))
 		return 0;
 
+	rcu_read_lock();
 	while (owner_running(lock, owner)) {
 		if (need_resched())
-			return 0;
+			break;
 
 		arch_mutex_cpu_relax();
 	}
+	rcu_read_unlock();
 
 	/*
-	 * If the owner changed to another task there is likely
-	 * heavy contention, stop spinning.
+	 * We break out the loop above on need_resched() and when the
+	 * owner changed, which is a sign for heavy contention. Return
+	 * success only when lock->owner is NULL.
 	 */
-	if (lock->owner)
-		return 0;
-
-	return 1;
+	return lock->owner == NULL;
 }
 #endif
 


* Re: [PATCH] sched: Simplify mutex_spin_on_owner()
  2011-06-10 13:08 [PATCH] sched: Simplify mutex_spin_on_owner() Thomas Gleixner
@ 2011-06-11  1:04 ` Paul E. McKenney
  2011-06-11 15:20   ` Paul E. McKenney
  2011-07-01 15:16 ` [tip:sched/core] sched: " tip-bot for Thomas Gleixner
  1 sibling, 1 reply; 4+ messages in thread
From: Paul E. McKenney @ 2011-06-11  1:04 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: LKML, Peter Zijlstra, Ingo Molnar

On Fri, Jun 10, 2011 at 03:08:55PM +0200, Thomas Gleixner wrote:
> It does not make sense to rcu_read_lock/unlock() in every loop
> iteration while spinning on the mutex.
> 
> Move the rcu protection outside the loop. Also simplify the
> return path to always check for lock->owner == NULL, which
> covers both the owner-changed and the need_resched() loop exits.

Interesting.  If the spin was preempted in the new form, then
RCU priority boosting would boost the priority of the task spinning
on the mutex.  My guess is that this would happen rarely enough
to not be a problem, but other thoughts?

							Thanx, Paul

> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  kernel/sched.c |   25 +++++++++----------------
>  1 file changed, 9 insertions(+), 16 deletions(-)
> 
> Index: linux-2.6/kernel/sched.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched.c
> +++ linux-2.6/kernel/sched.c
> @@ -4306,11 +4306,8 @@ EXPORT_SYMBOL(schedule);
> 
>  static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
>  {
> -	bool ret = false;
> -
> -	rcu_read_lock();
>  	if (lock->owner != owner)
> -		goto fail;
> +		return false;
> 
>  	/*
>  	 * Ensure we emit the owner->on_cpu, dereference _after_ checking
> @@ -4320,11 +4317,7 @@ static inline bool owner_running(struct 
>  	 */
>  	barrier();
> 
> -	ret = owner->on_cpu;
> -fail:
> -	rcu_read_unlock();
> -
> -	return ret;
> +	return owner->on_cpu;
>  }
> 
>  /*
> @@ -4336,21 +4329,21 @@ int mutex_spin_on_owner(struct mutex *lo
>  	if (!sched_feat(OWNER_SPIN))
>  		return 0;
> 
> +	rcu_read_lock();
>  	while (owner_running(lock, owner)) {
>  		if (need_resched())
> -			return 0;
> +			break;
> 
>  		arch_mutex_cpu_relax();
>  	}
> +	rcu_read_unlock();
> 
>  	/*
> -	 * If the owner changed to another task there is likely
> -	 * heavy contention, stop spinning.
> +	 * We break out the loop above on need_resched() and when the
> +	 * owner changed, which is a sign for heavy contention. Return
> +	 * success only when lock->owner is NULL.
>  	 */
> -	if (lock->owner)
> -		return 0;
> -
> -	return 1;
> +	return lock->owner == NULL;
>  }
>  #endif
> 


* Re: [PATCH] sched: Simplify mutex_spin_on_owner()
  2011-06-11  1:04 ` Paul E. McKenney
@ 2011-06-11 15:20   ` Paul E. McKenney
  0 siblings, 0 replies; 4+ messages in thread
From: Paul E. McKenney @ 2011-06-11 15:20 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: LKML, Peter Zijlstra, Ingo Molnar

On Fri, Jun 10, 2011 at 06:04:24PM -0700, Paul E. McKenney wrote:
> On Fri, Jun 10, 2011 at 03:08:55PM +0200, Thomas Gleixner wrote:
> > It does not make sense to rcu_read_lock/unlock() in every loop
> > iteration while spinning on the mutex.
> > 
> > Move the rcu protection outside the loop. Also simplify the
> > return path to always check for lock->owner == NULL, which
> > covers both the owner-changed and the need_resched() loop exits.
> 
> Interesting.  If the spin was preempted in the new form, then
> RCU priority boosting would boost the priority of the task spinning
> on the mutex.  My guess is that this would happen rarely enough
> to not be a problem, but other thoughts?

And if it does turn out to be a problem, one way to handle it would
be for me to provide an rcu_boosted_me() or some such that checks
the bit in the task structure, and then add something like the
following to your patch, which would momentarily exit the RCU
read-side critical section in order to deboost.

Thoughts?

							Thanx, Paul

> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > ---
> >  kernel/sched.c |   25 +++++++++----------------
> >  1 file changed, 9 insertions(+), 16 deletions(-)
> > 
> > Index: linux-2.6/kernel/sched.c
> > ===================================================================
> > --- linux-2.6.orig/kernel/sched.c
> > +++ linux-2.6/kernel/sched.c
> > @@ -4306,11 +4306,8 @@ EXPORT_SYMBOL(schedule);
> > 
> >  static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
> >  {
> > -	bool ret = false;
> > -
> > -	rcu_read_lock();
> >  	if (lock->owner != owner)
> > -		goto fail;
> > +		return false;
> > 
> >  	/*
> >  	 * Ensure we emit the owner->on_cpu, dereference _after_ checking
> > @@ -4320,11 +4317,7 @@ static inline bool owner_running(struct 
> >  	 */
> >  	barrier();
> > 
> > -	ret = owner->on_cpu;
> > -fail:
> > -	rcu_read_unlock();
> > -
> > -	return ret;
> > +	return owner->on_cpu;
> >  }
> > 
> >  /*
> > @@ -4336,21 +4329,21 @@ int mutex_spin_on_owner(struct mutex *lo
> >  	if (!sched_feat(OWNER_SPIN))
> >  		return 0;
> > 
> > +	rcu_read_lock();
> >  	while (owner_running(lock, owner)) {
> >  		if (need_resched())
> > -			return 0;
> > +			break;

		if (rcu_boosted_me()) {
			rcu_read_unlock();
			rcu_read_lock();
		}

> >  		arch_mutex_cpu_relax();
> >  	}
> > +	rcu_read_unlock();
> > 
> >  	/*
> > -	 * If the owner changed to another task there is likely
> > -	 * heavy contention, stop spinning.
> > +	 * We break out the loop above on need_resched() and when the
> > +	 * owner changed, which is a sign for heavy contention. Return
> > +	 * success only when lock->owner is NULL.
> >  	 */
> > -	if (lock->owner)
> > -		return 0;
> > -
> > -	return 1;
> > +	return lock->owner == NULL;
> >  }
> >  #endif
> > 


* [tip:sched/core] sched: Simplify mutex_spin_on_owner()
  2011-06-10 13:08 [PATCH] sched: Simplify mutex_spin_on_owner() Thomas Gleixner
  2011-06-11  1:04 ` Paul E. McKenney
@ 2011-07-01 15:16 ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 4+ messages in thread
From: tip-bot for Thomas Gleixner @ 2011-07-01 15:16 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, akpm, torvalds, a.p.zijlstra, tglx, mingo

Commit-ID:  307bf9803f25a8a3f53c1012110fb74e2f893eb0
Gitweb:     http://git.kernel.org/tip/307bf9803f25a8a3f53c1012110fb74e2f893eb0
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Fri, 10 Jun 2011 15:08:55 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 1 Jul 2011 10:39:07 +0200

sched: Simplify mutex_spin_on_owner()

It does not make sense to rcu_read_lock/unlock() in every loop
iteration while spinning on the mutex.

Move the rcu protection outside the loop. Also simplify the
return path to always check for lock->owner == NULL, which
covers both the owner-changed and the need_resched() loop exits.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1106101458350.11814@ionos
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/sched.c |   25 +++++++++----------------
 1 files changed, 9 insertions(+), 16 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 5925275..e355ee7 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4306,11 +4306,8 @@ EXPORT_SYMBOL(schedule);
 
 static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
 {
-	bool ret = false;
-
-	rcu_read_lock();
 	if (lock->owner != owner)
-		goto fail;
+		return false;
 
 	/*
 	 * Ensure we emit the owner->on_cpu, dereference _after_ checking
@@ -4320,11 +4317,7 @@ static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
 	 */
 	barrier();
 
-	ret = owner->on_cpu;
-fail:
-	rcu_read_unlock();
-
-	return ret;
+	return owner->on_cpu;
 }
 
 /*
@@ -4336,21 +4329,21 @@ int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 	if (!sched_feat(OWNER_SPIN))
 		return 0;
 
+	rcu_read_lock();
 	while (owner_running(lock, owner)) {
 		if (need_resched())
-			return 0;
+			break;
 
 		arch_mutex_cpu_relax();
 	}
+	rcu_read_unlock();
 
 	/*
-	 * If the owner changed to another task there is likely
-	 * heavy contention, stop spinning.
+	 * We break out the loop above on need_resched() and when the
+	 * owner changed, which is a sign for heavy contention. Return
+	 * success only when lock->owner is NULL.
 	 */
-	if (lock->owner)
-		return 0;
-
-	return 1;
+	return lock->owner == NULL;
 }
 #endif
 
