linux-kernel.vger.kernel.org archive mirror
* [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex
@ 2021-06-29 20:11 Waiman Long
  2021-06-30 10:21 ` Peter Zijlstra
  0 siblings, 1 reply; 8+ messages in thread
From: Waiman Long @ 2021-06-29 20:11 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng
  Cc: linux-kernel, Xu, Yanfei, Waiman Long

The current mutex code may set the HANDOFF bit right after wakeup
without checking whether the mutex is unlocked. The chance of setting
the HANDOFF bit on an unlocked mutex can be relatively high. In that
case the bit does not actually prevent other waiters from stealing the
lock, so the atomic operation is wasted.

To reduce the chance, do a trylock before setting the HANDOFF bit. In
addition, optimistic spinning on the mutex is only done when the HANDOFF
bit has been set on a still-locked mutex, which guarantees that no one
else can steal the lock.

Reported-by: Xu, Yanfei <yanfei.xu@windriver.com>
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/mutex.c | 42 +++++++++++++++++++++++++++++-------------
 1 file changed, 29 insertions(+), 13 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d2df5e68b503..472ab21b5b8e 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -118,9 +118,9 @@ static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
 		}
 
 		/*
-		 * We set the HANDOFF bit, we must make sure it doesn't live
-		 * past the point where we acquire it. This would be possible
-		 * if we (accidentally) set the bit on an unlocked mutex.
+		 * Always clear the HANDOFF bit before acquiring the lock.
+		 * Note that if the bit is accidentally set on an unlocked
+		 * mutex, anyone can acquire it.
 		 */
 		flags &= ~MUTEX_FLAG_HANDOFF;
 
@@ -180,6 +180,11 @@ static inline void __mutex_set_flag(struct mutex *lock, unsigned long flag)
 	atomic_long_or(flag, &lock->owner);
 }
 
+static inline long __mutex_fetch_set_flag(struct mutex *lock, unsigned long flag)
+{
+	return atomic_long_fetch_or_relaxed(flag, &lock->owner);
+}
+
 static inline void __mutex_clear_flag(struct mutex *lock, unsigned long flag)
 {
 	atomic_long_andnot(flag, &lock->owner);
@@ -1007,6 +1012,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 
 	set_current_state(state);
 	for (;;) {
+		long owner = 0L;
+
 		/*
 		 * Once we hold wait_lock, we're serialized against
 		 * mutex_unlock() handing the lock off to us, do a trylock
@@ -1035,24 +1042,33 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		spin_unlock(&lock->wait_lock);
 		schedule_preempt_disabled();
 
+		/*
+		 * Here we order against unlock; we must either see it change
+		 * state back to RUNNING and fall through the next schedule(),
+		 * or we must see its unlock and acquire.
+		 */
+		if (__mutex_trylock(lock))
+			break;
+
+		set_current_state(state);
+
 		/*
 		 * ww_mutex needs to always recheck its position since its waiter
 		 * list is not FIFO ordered.
 		 */
-		if (ww_ctx || !first) {
+		if (ww_ctx || !first)
 			first = __mutex_waiter_is_first(lock, &waiter);
-			if (first)
-				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
-		}
 
-		set_current_state(state);
+		if (first)
+			owner = __mutex_fetch_set_flag(lock, MUTEX_FLAG_HANDOFF);
+
 		/*
-		 * Here we order against unlock; we must either see it change
-		 * state back to RUNNING and fall through the next schedule(),
-		 * or we must see its unlock and acquire.
+		 * If a lock holder is present with HANDOFF bit set, it will
+		 * guarantee that no one else can steal the lock. We may spin
+		 * on the lock to acquire it earlier.
 		 */
-		if (__mutex_trylock(lock) ||
-		    (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
+		if ((owner & ~MUTEX_FLAGS) &&
+		     mutex_optimistic_spin(lock, ww_ctx, &waiter))
 			break;
 
 		spin_lock(&lock->wait_lock);
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex
  2021-06-29 20:11 [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex Waiman Long
@ 2021-06-30 10:21 ` Peter Zijlstra
  2021-06-30 13:50   ` Waiman Long
  2021-06-30 14:43   ` Xu, Yanfei
  0 siblings, 2 replies; 8+ messages in thread
From: Peter Zijlstra @ 2021-06-30 10:21 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Will Deacon, Boqun Feng, linux-kernel, Xu, Yanfei

On Tue, Jun 29, 2021 at 04:11:38PM -0400, Waiman Long wrote:

> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index d2df5e68b503..472ab21b5b8e 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -118,9 +118,9 @@ static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
>  		}
>  
>  		/*
> -		 * We set the HANDOFF bit, we must make sure it doesn't live
> -		 * past the point where we acquire it. This would be possible
> -		 * if we (accidentally) set the bit on an unlocked mutex.
> +		 * Always clear the HANDOFF bit before acquiring the lock.
> +		 * Note that if the bit is accidentally set on an unlocked
> +		 * mutex, anyone can acquire it.
>  		 */
>  		flags &= ~MUTEX_FLAG_HANDOFF;
>  
> @@ -180,6 +180,11 @@ static inline void __mutex_set_flag(struct mutex *lock, unsigned long flag)
>  	atomic_long_or(flag, &lock->owner);
>  }
>  
> +static inline long __mutex_fetch_set_flag(struct mutex *lock, unsigned long flag)
> +{
> +	return atomic_long_fetch_or_relaxed(flag, &lock->owner);
> +}
> +
>  static inline void __mutex_clear_flag(struct mutex *lock, unsigned long flag)
>  {

Hurmph, so we already have a cmpxchg loop in trylock, might as well make
that do exactly what we want without holes in it.
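
That is, use the atomic_long_try_cmpxchg*() form, which folds the
old-value comparison into the helper: on failure it refreshes the
expected value in place, so the retry loop needs neither a separate
'old' variable nor a re-read. Roughly (make_new_owner() is just a
stand-in for the task/flags computation):

	owner = atomic_long_read(&lock->owner);
	for (;;) {
		unsigned long new = make_new_owner(owner);

		/* success: 'new' installed; failure: 'owner' refreshed, retry */
		if (atomic_long_try_cmpxchg_acquire(&lock->owner, &owner, new))
			break;
	}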

How's something like the below? Boot tested, but please verify.

---
 kernel/locking/mutex.c | 89 ++++++++++++++++++++++++++------------------------
 1 file changed, 46 insertions(+), 43 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d2df5e68b503..53f7fcadce85 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -91,44 +91,54 @@ static inline unsigned long __owner_flags(unsigned long owner)
 	return owner & MUTEX_FLAGS;
 }
 
+#ifdef CONFIG_DEBUG_MUTEXES
+#define MUTEX_WARN_ON(cond) DEBUG_LOCKS_WARN_ON(cond)
+#else
+#define MUTEX_WARN_ON(cond)
+#endif
+
 /*
  * Trylock variant that returns the owning task on failure.
  */
-static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
+static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock, bool handoff)
 {
 	unsigned long owner, curr = (unsigned long)current;
 
 	owner = atomic_long_read(&lock->owner);
 	for (;;) { /* must loop, can race against a flag */
-		unsigned long old, flags = __owner_flags(owner);
+		unsigned long flags = __owner_flags(owner);
 		unsigned long task = owner & ~MUTEX_FLAGS;
 
 		if (task) {
-			if (likely(task != curr))
-				break;
+			if (flags & MUTEX_FLAG_PICKUP) {
 
-			if (likely(!(flags & MUTEX_FLAG_PICKUP)))
-				break;
+				if (task != curr)
+					break;
+
+				flags &= ~MUTEX_FLAG_PICKUP;
+
+			} else if (handoff) {
 
-			flags &= ~MUTEX_FLAG_PICKUP;
+				if (flags & MUTEX_FLAG_HANDOFF)
+					break;
+
+				flags |= MUTEX_FLAG_HANDOFF;
+
+			} else {
+
+				break;
+			}
 		} else {
-#ifdef CONFIG_DEBUG_MUTEXES
-			DEBUG_LOCKS_WARN_ON(flags & MUTEX_FLAG_PICKUP);
-#endif
+			MUTEX_WARN_ON(flags & (MUTEX_FLAG_HANDOFF | MUTEX_FLAG_PICKUP));
+			task = curr;
 		}
 
-		/*
-		 * We set the HANDOFF bit, we must make sure it doesn't live
-		 * past the point where we acquire it. This would be possible
-		 * if we (accidentally) set the bit on an unlocked mutex.
-		 */
-		flags &= ~MUTEX_FLAG_HANDOFF;
+		if (atomic_long_try_cmpxchg_acquire(&lock->owner, &owner, task | flags)) {
+			if (task == curr)
+				return NULL;
 
-		old = atomic_long_cmpxchg_acquire(&lock->owner, owner, curr | flags);
-		if (old == owner)
-			return NULL;
-
-		owner = old;
+			break;
+		}
 	}
 
 	return __owner_task(owner);
@@ -139,7 +149,7 @@ static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
  */
 static inline bool __mutex_trylock(struct mutex *lock)
 {
-	return !__mutex_trylock_or_owner(lock);
+	return !__mutex_trylock_or_owner(lock, false);
 }
 
 #ifndef CONFIG_DEBUG_LOCK_ALLOC
@@ -226,7 +236,7 @@ static void __mutex_handoff(struct mutex *lock, struct task_struct *task)
 	unsigned long owner = atomic_long_read(&lock->owner);
 
 	for (;;) {
-		unsigned long old, new;
+		unsigned long new;
 
 #ifdef CONFIG_DEBUG_MUTEXES
 		DEBUG_LOCKS_WARN_ON(__owner_task(owner) != current);
@@ -238,11 +248,8 @@ static void __mutex_handoff(struct mutex *lock, struct task_struct *task)
 		if (task)
 			new |= MUTEX_FLAG_PICKUP;
 
-		old = atomic_long_cmpxchg_release(&lock->owner, owner, new);
-		if (old == owner)
+		if (atomic_long_try_cmpxchg_release(&lock->owner, &owner, new))
 			break;
-
-		owner = old;
 	}
 }
 
@@ -662,7 +669,7 @@ mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
 		struct task_struct *owner;
 
 		/* Try to acquire the mutex... */
-		owner = __mutex_trylock_or_owner(lock);
+		owner = __mutex_trylock_or_owner(lock, false);
 		if (!owner)
 			break;
 
@@ -928,7 +935,6 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		    struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
 {
 	struct mutex_waiter waiter;
-	bool first = false;
 	struct ww_mutex *ww;
 	int ret;
 
@@ -1007,6 +1013,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 
 	set_current_state(state);
 	for (;;) {
+		bool first;
+
 		/*
 		 * Once we hold wait_lock, we're serialized against
 		 * mutex_unlock() handing the lock off to us, do a trylock
@@ -1035,23 +1043,18 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		spin_unlock(&lock->wait_lock);
 		schedule_preempt_disabled();
 
-		/*
-		 * ww_mutex needs to always recheck its position since its waiter
-		 * list is not FIFO ordered.
-		 */
-		if (ww_ctx || !first) {
-			first = __mutex_waiter_is_first(lock, &waiter);
-			if (first)
-				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
-		}
-
 		set_current_state(state);
+
+		first = __mutex_waiter_is_first(lock, &waiter);
+
 		/*
-		 * Here we order against unlock; we must either see it change
-		 * state back to RUNNING and fall through the next schedule(),
-		 * or we must see its unlock and acquire.
+		 * We got woken up, see if we can acquire the mutex now. If
+		 * not, and we're the first waiter, make sure to mark the mutex
+		 * for HANDOFF to avoid starvation.
+		 *
+		 * XXX spin-wait vs sigpending
 		 */
-		if (__mutex_trylock(lock) ||
+		if (!__mutex_trylock_or_owner(lock, first) ||
 		    (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
 			break;
 

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex
  2021-06-30 10:21 ` Peter Zijlstra
@ 2021-06-30 13:50   ` Waiman Long
  2021-06-30 13:56     ` Peter Zijlstra
  2021-06-30 14:43   ` Xu, Yanfei
  1 sibling, 1 reply; 8+ messages in thread
From: Waiman Long @ 2021-06-30 13:50 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Will Deacon, Boqun Feng, linux-kernel, Xu, Yanfei

On 6/30/21 6:21 AM, Peter Zijlstra wrote:
> On Tue, Jun 29, 2021 at 04:11:38PM -0400, Waiman Long wrote:
>
>> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
>> index d2df5e68b503..472ab21b5b8e 100644
>> --- a/kernel/locking/mutex.c
>> +++ b/kernel/locking/mutex.c
>> @@ -118,9 +118,9 @@ static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
>>   		}
>>   
>>   		/*
>> -		 * We set the HANDOFF bit, we must make sure it doesn't live
>> -		 * past the point where we acquire it. This would be possible
>> -		 * if we (accidentally) set the bit on an unlocked mutex.
>> +		 * Always clear the HANDOFF bit before acquiring the lock.
>> +		 * Note that if the bit is accidentally set on an unlocked
>> +		 * mutex, anyone can acquire it.
>>   		 */
>>   		flags &= ~MUTEX_FLAG_HANDOFF;
>>   
>> @@ -180,6 +180,11 @@ static inline void __mutex_set_flag(struct mutex *lock, unsigned long flag)
>>   	atomic_long_or(flag, &lock->owner);
>>   }
>>   
>> +static inline long __mutex_fetch_set_flag(struct mutex *lock, unsigned long flag)
>> +{
>> +	return atomic_long_fetch_or_relaxed(flag, &lock->owner);
>> +}
>> +
>>   static inline void __mutex_clear_flag(struct mutex *lock, unsigned long flag)
>>   {
> Hurmph, so we already have a cmpxchg loop in trylock, might as well make
> that do exactly what we want without holes in it.
>
> How's something like the below? Boot tested, but please verify.
>
> ---
>   kernel/locking/mutex.c | 89 ++++++++++++++++++++++++++------------------------
>   1 file changed, 46 insertions(+), 43 deletions(-)

The code looks good to me. It is an even better approach to make sure
that the HANDOFF bit will never be set on an unlocked mutex.

Reviewed-by: Waiman Long <longman@redhat.com>


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex
  2021-06-30 13:50   ` Waiman Long
@ 2021-06-30 13:56     ` Peter Zijlstra
  2021-06-30 14:13       ` Waiman Long
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2021-06-30 13:56 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Will Deacon, Boqun Feng, linux-kernel, Xu, Yanfei

On Wed, Jun 30, 2021 at 09:50:11AM -0400, Waiman Long wrote:

> The code looks good to me. It is an even better approach to make sure that
> the HANDOFF bit will never be set on an unlocked mutex.
> 
> Reviewed-by: Waiman Long <longman@redhat.com>

Thanks, what about that XXX? Should we not check sigpending before doing
the optimistic spinning thing?

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex
  2021-06-30 13:56     ` Peter Zijlstra
@ 2021-06-30 14:13       ` Waiman Long
  2021-06-30 14:46         ` Peter Zijlstra
  0 siblings, 1 reply; 8+ messages in thread
From: Waiman Long @ 2021-06-30 14:13 UTC (permalink / raw)
  To: Peter Zijlstra, Waiman Long
  Cc: Ingo Molnar, Will Deacon, Boqun Feng, linux-kernel, Xu, Yanfei

On 6/30/21 9:56 AM, Peter Zijlstra wrote:
> On Wed, Jun 30, 2021 at 09:50:11AM -0400, Waiman Long wrote:
>
>> The code looks good to me. It is an even better approach to make sure that
>> the HANDOFF will never be set on an unlocked mutex.
>>
>> Reviewed-by: Waiman Long <longman@redhat.com>
> Thanks, what about that XXX? Should we not check sigpending before doing
> the optimistic spinning thing?
>
Sorry, I missed the XXX comment. :-)

This is a generic problem as other waiters that go into the spinning 
loop also don't check for sigpending. On the other hand, I am fine with 
doing the pending signal check before doing the optimistic spinning.
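
E.g. something like this (just a sketch on top of your version, reusing
the existing signal_pending_state() test) would avoid starting the spin
with a signal already pending:

		if (!__mutex_trylock_or_owner(lock, first) ||
		    (first && !signal_pending_state(state, current) &&
		     mutex_optimistic_spin(lock, ww_ctx, &waiter)))
			break;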

Cheers,
Longman


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex
  2021-06-30 10:21 ` Peter Zijlstra
  2021-06-30 13:50   ` Waiman Long
@ 2021-06-30 14:43   ` Xu, Yanfei
  2021-06-30 14:50     ` Peter Zijlstra
  1 sibling, 1 reply; 8+ messages in thread
From: Xu, Yanfei @ 2021-06-30 14:43 UTC (permalink / raw)
  To: Peter Zijlstra, Waiman Long
  Cc: Ingo Molnar, Will Deacon, Boqun Feng, linux-kernel



On 6/30/21 6:21 PM, Peter Zijlstra wrote:
> On Tue, Jun 29, 2021 at 04:11:38PM -0400, Waiman Long wrote:
> 
>> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
>> index d2df5e68b503..472ab21b5b8e 100644
>> --- a/kernel/locking/mutex.c
>> +++ b/kernel/locking/mutex.c
>> @@ -118,9 +118,9 @@ static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
>>                }
>>
>>                /*
>> -              * We set the HANDOFF bit, we must make sure it doesn't live
>> -              * past the point where we acquire it. This would be possible
>> -              * if we (accidentally) set the bit on an unlocked mutex.
>> +              * Always clear the HANDOFF bit before acquiring the lock.
>> +              * Note that if the bit is accidentally set on an unlocked
>> +              * mutex, anyone can acquire it.
>>                 */
>>                flags &= ~MUTEX_FLAG_HANDOFF;
>>
>> @@ -180,6 +180,11 @@ static inline void __mutex_set_flag(struct mutex *lock, unsigned long flag)
>>        atomic_long_or(flag, &lock->owner);
>>   }
>>
>> +static inline long __mutex_fetch_set_flag(struct mutex *lock, unsigned long flag)
>> +{
>> +     return atomic_long_fetch_or_relaxed(flag, &lock->owner);
>> +}
>> +
>>   static inline void __mutex_clear_flag(struct mutex *lock, unsigned long flag)
>>   {
> 
> Hurmph, so we already have a cmpxchg loop in trylock, might as well make
> that do exactly what we want without holes on.
> 
> How's something like the below? Boot tested, but please verify.
> 
> ---
>   kernel/locking/mutex.c | 89 ++++++++++++++++++++++++++------------------------
>   1 file changed, 46 insertions(+), 43 deletions(-)
> 
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index d2df5e68b503..53f7fcadce85 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -91,44 +91,54 @@ static inline unsigned long __owner_flags(unsigned long owner)
>          return owner & MUTEX_FLAGS;
>   }
> 
> +#ifdef CONFIG_DEBUG_MUTEXES
> +#define MUTEX_WARN_ON(cond) DEBUG_LOCKS_WARN_ON(cond)
> +#else
> +#define MUTEX_WARN_ON(cond)
> +#endif
> +
>   /*
>    * Trylock variant that returns the owning task on failure.
>    */
> -static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
> +static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock, bool handoff)
>   {
>          unsigned long owner, curr = (unsigned long)current;
> 
>          owner = atomic_long_read(&lock->owner);
>          for (;;) { /* must loop, can race against a flag */
> -               unsigned long old, flags = __owner_flags(owner);
> +               unsigned long flags = __owner_flags(owner);
>                  unsigned long task = owner & ~MUTEX_FLAGS;
> 
>                  if (task) {
> -                       if (likely(task != curr))
> -                               break;
> +                       if (flags & MUTEX_FLAG_PICKUP) {
> 
> -                       if (likely(!(flags & MUTEX_FLAG_PICKUP)))
> -                               break;
> +                               if (task != curr)
> +                                       break;
> +
> +                               flags &= ~MUTEX_FLAG_PICKUP;
> +

Hmm.. Should we also clear the HANDOFF bit here? I can't find where it is
cleared.

Regards,
Yanfei

> +                       } else if (handoff) {
> 
> -                       flags &= ~MUTEX_FLAG_PICKUP;
> +                               if (flags & MUTEX_FLAG_HANDOFF)
> +                                       break;
> +
> +                               flags |= MUTEX_FLAG_HANDOFF;
> +
> +                       } else {
> +
> +                               break;
> +                       }
>                  } else {
> -#ifdef CONFIG_DEBUG_MUTEXES
> -                       DEBUG_LOCKS_WARN_ON(flags & MUTEX_FLAG_PICKUP);
> -#endif
> +                       MUTEX_WARN_ON(flags & (MUTEX_FLAG_HANDOFF | MUTEX_FLAG_PICKUP));
> +                       task = curr;
>                  }
> 
> -               /*
> -                * We set the HANDOFF bit, we must make sure it doesn't live
> -                * past the point where we acquire it. This would be possible
> -                * if we (accidentally) set the bit on an unlocked mutex.
> -                */
> -               flags &= ~MUTEX_FLAG_HANDOFF;
> +               if (atomic_long_try_cmpxchg_acquire(&lock->owner, &owner, task | flags)) {
> +                       if (task == curr)
> +                               return NULL;
> 
> -               old = atomic_long_cmpxchg_acquire(&lock->owner, owner, curr | flags);
> -               if (old == owner)
> -                       return NULL;
> -
> -               owner = old;
> +                       break;
> +               }
>          }
> 
>          return __owner_task(owner);
> @@ -139,7 +149,7 @@ static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
>    */
>   static inline bool __mutex_trylock(struct mutex *lock)
>   {
> -       return !__mutex_trylock_or_owner(lock);
> +       return !__mutex_trylock_or_owner(lock, false);
>   }
> 
>   #ifndef CONFIG_DEBUG_LOCK_ALLOC
> @@ -226,7 +236,7 @@ static void __mutex_handoff(struct mutex *lock, struct task_struct *task)
>          unsigned long owner = atomic_long_read(&lock->owner);
> 
>          for (;;) {
> -               unsigned long old, new;
> +               unsigned long new;
> 
>   #ifdef CONFIG_DEBUG_MUTEXES
>                  DEBUG_LOCKS_WARN_ON(__owner_task(owner) != current);
> @@ -238,11 +248,8 @@ static void __mutex_handoff(struct mutex *lock, struct task_struct *task)
>                  if (task)
>                          new |= MUTEX_FLAG_PICKUP;
> 
> -               old = atomic_long_cmpxchg_release(&lock->owner, owner, new);
> -               if (old == owner)
> +               if (atomic_long_try_cmpxchg_release(&lock->owner, &owner, new))
>                          break;
> -
> -               owner = old;
>          }
>   }
> 
> @@ -662,7 +669,7 @@ mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
>                  struct task_struct *owner;
> 
>                  /* Try to acquire the mutex... */
> -               owner = __mutex_trylock_or_owner(lock);
> +               owner = __mutex_trylock_or_owner(lock, false);
>                  if (!owner)
>                          break;
> 
> @@ -928,7 +935,6 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
>                      struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
>   {
>          struct mutex_waiter waiter;
> -       bool first = false;
>          struct ww_mutex *ww;
>          int ret;
> 
> @@ -1007,6 +1013,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
> 
>          set_current_state(state);
>          for (;;) {
> +               bool first;
> +
>                  /*
>                   * Once we hold wait_lock, we're serialized against
>                   * mutex_unlock() handing the lock off to us, do a trylock
> @@ -1035,23 +1043,18 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
>                  spin_unlock(&lock->wait_lock);
>                  schedule_preempt_disabled();
> 
> -               /*
> -                * ww_mutex needs to always recheck its position since its waiter
> -                * list is not FIFO ordered.
> -                */
> -               if (ww_ctx || !first) {
> -                       first = __mutex_waiter_is_first(lock, &waiter);
> -                       if (first)
> -                               __mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
> -               }
> -
>                  set_current_state(state);
> +
> +               first = __mutex_waiter_is_first(lock, &waiter);
> +
>                  /*
> -                * Here we order against unlock; we must either see it change
> -                * state back to RUNNING and fall through the next schedule(),
> -                * or we must see its unlock and acquire.
> +                * We got woken up, see if we can acquire the mutex now. If
> +                * not, and we're the first waiter, make sure to mark the mutex
> +                * for HANDOFF to avoid starvation.
> +                *
> +                * XXX spin-wait vs sigpending
>                   */
> -               if (__mutex_trylock(lock) ||
> +               if (!__mutex_trylock_or_owner(lock, first) ||
>                      (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
>                          break;
> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex
  2021-06-30 14:13       ` Waiman Long
@ 2021-06-30 14:46         ` Peter Zijlstra
  0 siblings, 0 replies; 8+ messages in thread
From: Peter Zijlstra @ 2021-06-30 14:46 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Will Deacon, Boqun Feng, linux-kernel, Xu, Yanfei

On Wed, Jun 30, 2021 at 10:13:06AM -0400, Waiman Long wrote:

> Sorry, I missed the XXX comment. :-)
> 
> This is a generic problem as other waiters that go into the spinning loop
> also don't check for sigpending. On the other hand, I am fine with doing the
> pending signal check before doing the optimistic spinning.

True.. OK, lemme write changelogs for the 4 patches I've ended up with
and post them. Then we can look at that whole sigpending crud.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex
  2021-06-30 14:43   ` Xu, Yanfei
@ 2021-06-30 14:50     ` Peter Zijlstra
  0 siblings, 0 replies; 8+ messages in thread
From: Peter Zijlstra @ 2021-06-30 14:50 UTC (permalink / raw)
  To: Xu, Yanfei
  Cc: Waiman Long, Ingo Molnar, Will Deacon, Boqun Feng, linux-kernel

On Wed, Jun 30, 2021 at 10:43:37PM +0800, Xu, Yanfei wrote:
> > +static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock, bool handoff)
> >   {
> >          unsigned long owner, curr = (unsigned long)current;
> > 
> >          owner = atomic_long_read(&lock->owner);
> >          for (;;) { /* must loop, can race against a flag */
> > +               unsigned long flags = __owner_flags(owner);
> >                  unsigned long task = owner & ~MUTEX_FLAGS;
> > 
> >                  if (task) {
> > +                       if (flags & MUTEX_FLAG_PICKUP) {
> > 
> > +                               if (task != curr)
> > +                                       break;
> > +
> > +                               flags &= ~MUTEX_FLAG_PICKUP;
> > +
> 
> Hmm.. Should we also clear the HANDOFF bit here? I can't find where it
> is cleared.

Should already be gone; see __mutex_handoff().
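
In __mutex_handoff() the new owner word is rebuilt from scratch, so only
the WAITERS bit survives; roughly:

		new = (owner & MUTEX_FLAG_WAITERS);	/* HANDOFF is dropped here */
		new |= (unsigned long)task;
		if (task)
			new |= MUTEX_FLAG_PICKUP;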

^ permalink raw reply	[flat|nested] 8+ messages in thread

