From: "Nicolai Hähnle" <nhaehnle@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org,
	"Nicolai Hähnle" <Nicolai.Haehnle@amd.com>,
	"Ingo Molnar" <mingo@redhat.com>,
	"Maarten Lankhorst" <dev@mblankhorst.nl>,
	"Daniel Vetter" <daniel@ffwll.ch>,
	"Chris Wilson" <chris@chris-wilson.co.uk>,
	dri-devel@lists.freedesktop.org
Subject: Re: [PATCH v2 05/11] locking/ww_mutex: Add waiters in stamp order
Date: Fri, 16 Dec 2016 15:19:43 +0100
Message-ID: <a99a86a8-f215-7549-c98a-a5ebdbb1bb00@gmail.com>
In-Reply-To: <20161206165544.GX3045@worktop.programming.kicks-ass.net>

Hi Peter and Chris,

(trying to combine the handoff discussion here)

On 06.12.2016 17:55, Peter Zijlstra wrote:
> On Thu, Dec 01, 2016 at 03:06:48PM +0100, Nicolai Hähnle wrote:
>> @@ -693,8 +748,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>>  		 * mutex_unlock() handing the lock off to us, do a trylock
>>  		 * before testing the error conditions to make sure we pick up
>>  		 * the handoff.
>> +		 *
>> +		 * For w/w locks, we always need to do this even if we're not
>> +		 * currently the first waiter, because we may have been the
>> +		 * first waiter during the unlock.
>>  		 */
>> -		if (__mutex_trylock(lock, first))
>> +		if (__mutex_trylock(lock, use_ww_ctx || first))
>>  			goto acquired;
>
> So I'm somewhat uncomfortable with this. The point is that with the
> .handoff logic it is very easy to accidentally allow:
>
> 	mutex_lock(&a);
> 	mutex_lock(&a);
>
> And I'm not sure this doesn't make that happen for ww_mutexes. We get to
> this __mutex_trylock() without first having blocked.

Okay, took me a while, but I see the problem. If we have:

	ww_mutex_lock(&a, NULL);
	ww_mutex_lock(&a, ctx);

then it's possible that another currently waiting task sets the HANDOFF 
flag between those calls and we'll allow the second ww_mutex_lock to go 
through.
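
To spell out the interleaving (a rough sketch of how I read it; the
task names are invented):

	task A: ww_mutex_lock(&a, NULL);   /* A now owns a */
	task B: blocks on a and sets MUTEX_FLAG_HANDOFF
	task A: ww_mutex_lock(&a, ctx);
		/*
		 * The first loop iteration calls
		 * __mutex_trylock(lock, use_ww_ctx || first) with
		 * handoff == true even though A never blocked, so
		 * the trylock picks up the handoff and the
		 * recursive lock succeeds instead of deadlocking.
		 */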

The concern about picking up a handoff that we didn't request is real, 
though it cannot happen in the first iteration. Perhaps this 
__mutex_trylock can be moved to the end of the loop? See below...


>
>
>>  		/*
>> @@ -716,7 +775,20 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>>  		spin_unlock_mutex(&lock->wait_lock, flags);
>>  		schedule_preempt_disabled();
>>
>> -		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
>> +		if (use_ww_ctx && ww_ctx) {
>> +			/*
>> +			 * Always re-check whether we're in first position. We
>> +			 * don't want to spin if another task with a lower
>> +			 * stamp has taken our position.
>> +			 *
>> +			 * We also may have to set the handoff flag again, if
>> +			 * our position at the head was temporarily taken away.
>> +			 */
>> +			first = __mutex_waiter_is_first(lock, &waiter);
>> +
>> +			if (first)
>> +				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
>> +		} else if (!first && __mutex_waiter_is_first(lock, &waiter)) {
>>  			first = true;
>>  			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
>>  		}
>
> So the point is that !ww_ctx entries are 'skipped' during the insertion
> and therefore, if one becomes first, it must stay first?

Yes. Actually, it should be possible to replace use_ww_ctx with ww_ctx 
in all the "use_ww_ctx || first" cases, i.e. make them "ww_ctx || 
first". Similarly, all cases of "use_ww_ctx && ww_ctx" could be 
replaced by just "ww_ctx", since ww_ctx can only be non-NULL when 
use_ww_ctx is set.
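
For instance (an untested sketch of the substitution):

-		if (__mutex_trylock(lock, use_ww_ctx || first))
+		if (__mutex_trylock(lock, ww_ctx || first))
 			goto acquired;

-		if (use_ww_ctx && ww_ctx) {
+		if (ww_ctx) {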

>
>> @@ -728,7 +800,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>>  		 * or we must see its unlock and acquire.
>>  		 */
>>  		if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
>> -		     __mutex_trylock(lock, first))
>> +		     __mutex_trylock(lock, use_ww_ctx || first))
>>  			break;
>>
>>  		spin_lock_mutex(&lock->wait_lock, flags);

Change this code to the following (with a new "bool acquired" local):

		/* Spin only while we are still in first position. */
		acquired = first &&
		    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
					  &waiter);

		spin_lock_mutex(&lock->wait_lock, flags);

		/* The trylock now always runs under the wait_lock. */
		if (acquired ||
		    __mutex_trylock(lock, use_ww_ctx || first))
			break;
	}

This changes the trylock to always run under the wait_lock, but the 
trylock at the beginning of the loop was under the wait_lock anyway, so 
nothing is lost there. It also removes the back-to-back calls to 
__mutex_trylock when going around the loop; and for the first 
iteration, there is already a __mutex_trylock under the wait_lock 
before we add ourselves to the wait list.
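
For clarity, the whole loop would then look roughly like this (a
sketch only, with the error checks and the handoff re-check elided):

	for (;;) {
		/*
		 * No trylock at the top of the loop anymore; only
		 * the error conditions (signals, w/w back-off) are
		 * checked before going back to sleep.
		 */
		... error checks, goto err on failure ...

		spin_unlock_mutex(&lock->wait_lock, flags);
		schedule_preempt_disabled();

		... re-check first position, set MUTEX_FLAG_HANDOFF ...

		acquired = first &&
		    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
					  &waiter);
		spin_lock_mutex(&lock->wait_lock, flags);

		if (acquired ||
		    __mutex_trylock(lock, use_ww_ctx || first))
			break;
	}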

What do you think?

Nicolai
