linux-kernel.vger.kernel.org archive mirror
* [PATCH] locking/osq: fix ordering of node initialisation in osq_lock
@ 2015-12-11 17:46 Will Deacon
  2015-12-12  0:18 ` David Daney
  2015-12-17  1:18 ` David Daney
  0 siblings, 2 replies; 7+ messages in thread
From: Will Deacon @ 2015-12-11 17:46 UTC (permalink / raw)
  To: peterz
  Cc: linux-arm-kernel, linux-kernel, andrew.pinski, ddaney.cavm,
	dbueso, Will Deacon

The Cavium guys reported a soft lockup on their arm64 machine, caused
by c55a6ffa6285 ("locking/osq: Relax atomic semantics"):

[   68.909948] [<fffffe00000da03c>] mutex_optimistic_spin+0x9c/0x1d0
[   68.909951] [<fffffe00006fe4b8>] __mutex_lock_slowpath+0x44/0x158
[   68.909953] [<fffffe00006fe620>] mutex_lock+0x54/0x58
[   68.909956] [<fffffe0000265efc>] kernfs_iop_permission+0x38/0x70
[   68.909959] [<fffffe00001fbf50>] __inode_permission+0x88/0xd8
[   68.909961] [<fffffe00001fbfd0>] inode_permission+0x30/0x6c
[   68.909964] [<fffffe00001fe26c>] link_path_walk+0x68/0x4d4
[   68.909966] [<fffffe00001ffa14>] path_openat+0xb4/0x2bc
[   68.909968] [<fffffe000020123c>] do_filp_open+0x74/0xd0
[   68.909971] [<fffffe00001f13e4>] do_sys_open+0x14c/0x228
[   68.909973] [<fffffe00001f1544>] SyS_openat+0x3c/0x48
[   68.909976] [<fffffe00000851f0>] el0_svc_naked+0x24/0x28

This is because in osq_lock we initialise the node for the current CPU:

	node->locked = 0;
	node->next = NULL;
	node->cpu = curr;

and then publish the current CPU in the lock tail:

	old = atomic_xchg_acquire(&lock->tail, curr);

Once the update to lock->tail is visible to another CPU, the node is
then live and can be both read and updated by concurrent lockers.

Unfortunately, the ACQUIRE semantics of the xchg operation mean that
there is no guarantee the contents of the node will be visible before
the lock tail is updated. This can lead to lock corruption when, for
example, a concurrent locker races to set the next field.

Fixes: c55a6ffa6285 ("locking/osq: Relax atomic semantics")
Reported-by: David Daney <ddaney@caviumnetworks.com>
Reported-by: Andrew Pinski <andrew.pinski@caviumnetworks.com>
Tested-by: Andrew Pinski <andrew.pinski@caviumnetworks.com>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/osq_lock.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index d092a0c9c2d4..05a37857ab55 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -93,10 +93,12 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	node->cpu = curr;
 
 	/*
-	 * ACQUIRE semantics, pairs with corresponding RELEASE
-	 * in unlock() uncontended, or fastpath.
+	 * We need both ACQUIRE (pairs with corresponding RELEASE in
+	 * unlock() uncontended, or fastpath) and RELEASE (to publish
+	 * the node fields we just initialised) semantics when updating
+	 * the lock tail.
 	 */
-	old = atomic_xchg_acquire(&lock->tail, curr);
+	old = atomic_xchg(&lock->tail, curr);
 	if (old == OSQ_UNLOCKED_VAL)
 		return true;
 
-- 
2.1.4



* Re: [PATCH] locking/osq: fix ordering of node initialisation in osq_lock
  2015-12-11 17:46 [PATCH] locking/osq: fix ordering of node initialisation in osq_lock Will Deacon
@ 2015-12-12  0:18 ` David Daney
  2015-12-17  1:18 ` David Daney
  1 sibling, 0 replies; 7+ messages in thread
From: David Daney @ 2015-12-12  0:18 UTC (permalink / raw)
  To: Will Deacon; +Cc: peterz, linux-arm-kernel, linux-kernel, andrew.pinski, dbueso

On 12/11/2015 09:46 AM, Will Deacon wrote:
> The Cavium guys reported a soft lockup on their arm64 machine, caused
> by c55a6ffa6285 ("locking/osq: Relax atomic semantics"):
>
> [   68.909948] [<fffffe00000da03c>] mutex_optimistic_spin+0x9c/0x1d0
> [   68.909951] [<fffffe00006fe4b8>] __mutex_lock_slowpath+0x44/0x158
> [   68.909953] [<fffffe00006fe620>] mutex_lock+0x54/0x58
> [   68.909956] [<fffffe0000265efc>] kernfs_iop_permission+0x38/0x70
> [   68.909959] [<fffffe00001fbf50>] __inode_permission+0x88/0xd8
> [   68.909961] [<fffffe00001fbfd0>] inode_permission+0x30/0x6c
> [   68.909964] [<fffffe00001fe26c>] link_path_walk+0x68/0x4d4
> [   68.909966] [<fffffe00001ffa14>] path_openat+0xb4/0x2bc
> [   68.909968] [<fffffe000020123c>] do_filp_open+0x74/0xd0
> [   68.909971] [<fffffe00001f13e4>] do_sys_open+0x14c/0x228
> [   68.909973] [<fffffe00001f1544>] SyS_openat+0x3c/0x48
> [   68.909976] [<fffffe00000851f0>] el0_svc_naked+0x24/0x28
>
> This is because in osq_lock we initialise the node for the current CPU:
>
> 	node->locked = 0;
> 	node->next = NULL;
> 	node->cpu = curr;
>
> and then publish the current CPU in the lock tail:
>
> 	old = atomic_xchg_acquire(&lock->tail, curr);
>
> Once the update to lock->tail is visible to another CPU, the node is
> then live and can be both read and updated by concurrent lockers.
>
> Unfortunately, the ACQUIRE semantics of the xchg operation mean that
> there is no guarantee the contents of the node will be visible before
> the lock tail is updated. This can lead to lock corruption when, for
> example, a concurrent locker races to set the next field.
>
> Fixes: c55a6ffa6285 ("locking/osq: Relax atomic semantics")
> Reported-by: David Daney <ddaney@caviumnetworks.com>
> Reported-by: Andrew Pinski <andrew.pinski@caviumnetworks.com>
> Tested-by: Andrew Pinski <andrew.pinski@caviumnetworks.com>
> Acked-by: Davidlohr Bueso <dave@stgolabs.net>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Tested-by: David Daney <david.daney@cavium.com>

Thanks Will, this fixes a kernel from two days ago (commit 5406812e5976) 
for me.

Interestingly, a more recent kernel from today (commit b9d85451ddd4) was 
not failing.  I think this shows that you have to get the timing Just 
Right to hit the problem, and minor unrelated changes can alter the 
outcome.  I also noticed that turning on lock debugging would fix some 
systems, but not others.

David Daney

> ---
>   kernel/locking/osq_lock.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
> index d092a0c9c2d4..05a37857ab55 100644
> --- a/kernel/locking/osq_lock.c
> +++ b/kernel/locking/osq_lock.c
> @@ -93,10 +93,12 @@ bool osq_lock(struct optimistic_spin_queue *lock)
>   	node->cpu = curr;
>
>   	/*
> -	 * ACQUIRE semantics, pairs with corresponding RELEASE
> -	 * in unlock() uncontended, or fastpath.
> +	 * We need both ACQUIRE (pairs with corresponding RELEASE in
> +	 * unlock() uncontended, or fastpath) and RELEASE (to publish
> +	 * the node fields we just initialised) semantics when updating
> +	 * the lock tail.
>   	 */
> -	old = atomic_xchg_acquire(&lock->tail, curr);
> +	old = atomic_xchg(&lock->tail, curr);
>   	if (old == OSQ_UNLOCKED_VAL)
>   		return true;
>
>



* Re: [PATCH] locking/osq: fix ordering of node initialisation in osq_lock
  2015-12-11 17:46 [PATCH] locking/osq: fix ordering of node initialisation in osq_lock Will Deacon
  2015-12-12  0:18 ` David Daney
@ 2015-12-17  1:18 ` David Daney
  2015-12-17  9:48   ` Will Deacon
  1 sibling, 1 reply; 7+ messages in thread
From: David Daney @ 2015-12-17  1:18 UTC (permalink / raw)
  To: Will Deacon; +Cc: peterz, linux-arm-kernel, linux-kernel, andrew.pinski, dbueso

What is the status of this patch?  Is there a good likelihood that it 
will make it into v4.4?

If not, we should request that c55a6ffa6285 ("locking/osq: Relax atomic 
semantics") be reverted for v4.4.

David Daney



On 12/11/2015 09:46 AM, Will Deacon wrote:
> The Cavium guys reported a soft lockup on their arm64 machine, caused
> by c55a6ffa6285 ("locking/osq: Relax atomic semantics"):
>
> [   68.909948] [<fffffe00000da03c>] mutex_optimistic_spin+0x9c/0x1d0
> [   68.909951] [<fffffe00006fe4b8>] __mutex_lock_slowpath+0x44/0x158
> [   68.909953] [<fffffe00006fe620>] mutex_lock+0x54/0x58
> [   68.909956] [<fffffe0000265efc>] kernfs_iop_permission+0x38/0x70
> [   68.909959] [<fffffe00001fbf50>] __inode_permission+0x88/0xd8
> [   68.909961] [<fffffe00001fbfd0>] inode_permission+0x30/0x6c
> [   68.909964] [<fffffe00001fe26c>] link_path_walk+0x68/0x4d4
> [   68.909966] [<fffffe00001ffa14>] path_openat+0xb4/0x2bc
> [   68.909968] [<fffffe000020123c>] do_filp_open+0x74/0xd0
> [   68.909971] [<fffffe00001f13e4>] do_sys_open+0x14c/0x228
> [   68.909973] [<fffffe00001f1544>] SyS_openat+0x3c/0x48
> [   68.909976] [<fffffe00000851f0>] el0_svc_naked+0x24/0x28
>
> This is because in osq_lock we initialise the node for the current CPU:
>
> 	node->locked = 0;
> 	node->next = NULL;
> 	node->cpu = curr;
>
> and then publish the current CPU in the lock tail:
>
> 	old = atomic_xchg_acquire(&lock->tail, curr);
>
> Once the update to lock->tail is visible to another CPU, the node is
> then live and can be both read and updated by concurrent lockers.
>
> Unfortunately, the ACQUIRE semantics of the xchg operation mean that
> there is no guarantee the contents of the node will be visible before
> the lock tail is updated. This can lead to lock corruption when, for
> example, a concurrent locker races to set the next field.
>
> Fixes: c55a6ffa6285 ("locking/osq: Relax atomic semantics")
> Reported-by: David Daney <ddaney@caviumnetworks.com>
> Reported-by: Andrew Pinski <andrew.pinski@caviumnetworks.com>
> Tested-by: Andrew Pinski <andrew.pinski@caviumnetworks.com>
> Acked-by: Davidlohr Bueso <dave@stgolabs.net>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>   kernel/locking/osq_lock.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
> index d092a0c9c2d4..05a37857ab55 100644
> --- a/kernel/locking/osq_lock.c
> +++ b/kernel/locking/osq_lock.c
> @@ -93,10 +93,12 @@ bool osq_lock(struct optimistic_spin_queue *lock)
>   	node->cpu = curr;
>
>   	/*
> -	 * ACQUIRE semantics, pairs with corresponding RELEASE
> -	 * in unlock() uncontended, or fastpath.
> +	 * We need both ACQUIRE (pairs with corresponding RELEASE in
> +	 * unlock() uncontended, or fastpath) and RELEASE (to publish
> +	 * the node fields we just initialised) semantics when updating
> +	 * the lock tail.
>   	 */
> -	old = atomic_xchg_acquire(&lock->tail, curr);
> +	old = atomic_xchg(&lock->tail, curr);
>   	if (old == OSQ_UNLOCKED_VAL)
>   		return true;
>
>



* Re: [PATCH] locking/osq: fix ordering of node initialisation in osq_lock
  2015-12-17  1:18 ` David Daney
@ 2015-12-17  9:48   ` Will Deacon
  0 siblings, 0 replies; 7+ messages in thread
From: Will Deacon @ 2015-12-17  9:48 UTC (permalink / raw)
  To: David Daney; +Cc: peterz, linux-arm-kernel, linux-kernel, andrew.pinski, dbueso

On Wed, Dec 16, 2015 at 05:18:48PM -0800, David Daney wrote:
> What is the status of this patch?  Is there a good likelihood that it will
> make it into v4.4?
> 
> If not, we should request that c55a6ffa6285 ("locking/osq: Relax atomic
> semantics") be reverted for v4.4.

I think Peter was going to send it to mingo as an urgent fix, but I've
not seen anything from the tip-bot yet.

Will


* Re: [PATCH] locking/osq: Fix ordering of node initialisation in osq_lock
  2015-12-17 19:04 ` Linus Torvalds
@ 2015-12-17 19:33   ` Peter Zijlstra
  0 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2015-12-17 19:33 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Ingo Molnar, David Daney, Andrew Pinski, Davidlohr Bueso,
	Will Deacon, Linux Kernel Mailing List

On Thu, Dec 17, 2015 at 11:04:00AM -0800, Linus Torvalds wrote:
> On Thu, Dec 17, 2015 at 8:05 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > Please consider this patch for 4.4.
> 
> No problem, but just wanted to check that there's nothing else pending
> in any locking tree?

Just this one patch for now.

Thanks!


* Re: [PATCH] locking/osq: Fix ordering of node initialisation in osq_lock
  2015-12-17 16:05 [PATCH] locking/osq: Fix " Peter Zijlstra
@ 2015-12-17 19:04 ` Linus Torvalds
  2015-12-17 19:33   ` Peter Zijlstra
  0 siblings, 1 reply; 7+ messages in thread
From: Linus Torvalds @ 2015-12-17 19:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, David Daney, Andrew Pinski, Davidlohr Bueso,
	Will Deacon, Linux Kernel Mailing List

On Thu, Dec 17, 2015 at 8:05 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>
> Please consider this patch for 4.4.

No problem, but just wanted to check that there's nothing else pending
in any locking tree?

                  Linus


* [PATCH] locking/osq: Fix ordering of node initialisation in osq_lock
@ 2015-12-17 16:05 Peter Zijlstra
  2015-12-17 19:04 ` Linus Torvalds
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2015-12-17 16:05 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Ingo Molnar, ddaney, andrew.pinski, dave, will.deacon, linux-kernel


Hi Linus,

Please consider this patch for 4.4.

---
Subject: locking/osq: Fix ordering of node initialisation in osq_lock
From: Will Deacon <will.deacon@arm.com>
Date: Fri, 11 Dec 2015 17:46:41 +0000

The Cavium guys reported a soft lockup on their arm64 machine, caused
by c55a6ffa6285 ("locking/osq: Relax atomic semantics"):

[   68.909948] [<fffffe00000da03c>] mutex_optimistic_spin+0x9c/0x1d0
[   68.909951] [<fffffe00006fe4b8>] __mutex_lock_slowpath+0x44/0x158
[   68.909953] [<fffffe00006fe620>] mutex_lock+0x54/0x58
[   68.909956] [<fffffe0000265efc>] kernfs_iop_permission+0x38/0x70
[   68.909959] [<fffffe00001fbf50>] __inode_permission+0x88/0xd8
[   68.909961] [<fffffe00001fbfd0>] inode_permission+0x30/0x6c
[   68.909964] [<fffffe00001fe26c>] link_path_walk+0x68/0x4d4
[   68.909966] [<fffffe00001ffa14>] path_openat+0xb4/0x2bc
[   68.909968] [<fffffe000020123c>] do_filp_open+0x74/0xd0
[   68.909971] [<fffffe00001f13e4>] do_sys_open+0x14c/0x228
[   68.909973] [<fffffe00001f1544>] SyS_openat+0x3c/0x48
[   68.909976] [<fffffe00000851f0>] el0_svc_naked+0x24/0x28

This is because in osq_lock we initialise the node for the current CPU:

	node->locked = 0;
	node->next = NULL;
	node->cpu = curr;

and then publish the current CPU in the lock tail:

	old = atomic_xchg_acquire(&lock->tail, curr);

Once the update to lock->tail is visible to another CPU, the node is
then live and can be both read and updated by concurrent lockers.

Unfortunately, the ACQUIRE semantics of the xchg operation mean that
there is no guarantee the contents of the node will be visible before
the lock tail is updated. This can lead to lock corruption when, for
example, a concurrent locker races to set the next field.

Fixes: c55a6ffa6285 ("locking/osq: Relax atomic semantics")
Reported-by: David Daney <ddaney@caviumnetworks.com>
Reported-by: Andrew Pinski <andrew.pinski@caviumnetworks.com>
Tested-by: Andrew Pinski <andrew.pinski@caviumnetworks.com>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1449856001-21177-1-git-send-email-will.deacon@arm.com
---
 kernel/locking/osq_lock.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -93,10 +93,12 @@ bool osq_lock(struct optimistic_spin_que
 	node->cpu = curr;
 
 	/*
-	 * ACQUIRE semantics, pairs with corresponding RELEASE
-	 * in unlock() uncontended, or fastpath.
+	 * We need both ACQUIRE (pairs with corresponding RELEASE in
+	 * unlock() uncontended, or fastpath) and RELEASE (to publish
+	 * the node fields we just initialised) semantics when updating
+	 * the lock tail.
 	 */
-	old = atomic_xchg_acquire(&lock->tail, curr);
+	old = atomic_xchg(&lock->tail, curr);
 	if (old == OSQ_UNLOCKED_VAL)
 		return true;
 

