* [PATCH -next] locking/osq_lock: annotate a data race in osq_lock
From: Qian Cai @ 2020-02-11 4:06 UTC
To: peterz, mingo; +Cc: will, elver, linux-kernel, Qian Cai
prev->next could be accessed concurrently as noticed by KCSAN,
write (marked) to 0xffff9d3370dbbe40 of 8 bytes by task 3294 on cpu 107:
osq_lock+0x25f/0x350
osq_wait_next at kernel/locking/osq_lock.c:79
(inlined by) osq_lock at kernel/locking/osq_lock.c:185
rwsem_optimistic_spin
<snip>
read to 0xffff9d3370dbbe40 of 8 bytes by task 3398 on cpu 100:
osq_lock+0x196/0x350
osq_lock at kernel/locking/osq_lock.c:157
rwsem_optimistic_spin
<snip>
The write only stores NULL to prev->next, and the read merely tests
whether prev->next equals this_cpu_ptr(&osq_node). Even if the value is
torn, the code still works correctly. Thus, mark it as an intentional
data race using the data_race() macro.
Signed-off-by: Qian Cai <cai@lca.pw>
---
kernel/locking/osq_lock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 1f7734949ac8..3c44ddbc11ce 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -154,7 +154,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	 */
 
 	for (;;) {
-		if (prev->next == node &&
+		if (data_race(prev->next == node) &&
 		    cmpxchg(&prev->next, node, NULL) == node)
			break;
 
--
2.21.0 (Apple Git-122.2)
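For reference, a simplified sketch of the loop this hunk modifies (not the
verbatim kernel/locking/osq_lock.c; details may differ):

	for (;;) {
		/*
		 * Cheap check (the line annotated above) before the cmpxchg(),
		 * so prev's cacheline is only dirtied when the unlink can
		 * actually succeed.
		 */
		if (data_race(prev->next == node) &&
		    cmpxchg(&prev->next, node, NULL) == node)
			break;

		/*
		 * A failed cmpxchg() means we raced with an unlock; in that
		 * case node->locked becomes true and we own the lock.
		 */
		if (smp_load_acquire(&node->locked))
			return true;

		cpu_relax();	/* implies a compiler barrier */

		/* A concurrent unqueue may have installed a new predecessor. */
		prev = READ_ONCE(node->prev);
	}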
* Re: [PATCH -next] locking/osq_lock: annotate a data race in osq_lock
From: Marco Elver @ 2020-02-11 10:16 UTC
To: Qian Cai; +Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, LKML
On Tue, 11 Feb 2020 at 05:07, Qian Cai <cai@lca.pw> wrote:
>
> prev->next could be accessed concurrently as noticed by KCSAN,
>
> write (marked) to 0xffff9d3370dbbe40 of 8 bytes by task 3294 on cpu 107:
> osq_lock+0x25f/0x350
> osq_wait_next at kernel/locking/osq_lock.c:79
> (inlined by) osq_lock at kernel/locking/osq_lock.c:185
> rwsem_optimistic_spin
> <snip>
>
> read to 0xffff9d3370dbbe40 of 8 bytes by task 3398 on cpu 100:
> osq_lock+0x196/0x350
> osq_lock at kernel/locking/osq_lock.c:157
> rwsem_optimistic_spin
> <snip>
>
> The write only stores NULL to prev->next, and the read merely tests
> whether prev->next equals this_cpu_ptr(&osq_node). Even if the value is
> torn, the code still works correctly. Thus, mark it as an intentional
> data race using the data_race() macro.
I have said this before: we're not just guarding against load/store
tearing, although on their own, they make it deceptively easy to
reason about data races.
The case here seems to be another instance of a C-CAS, to avoid
unnecessarily dirtying a cacheline.
Here, the loop would make me suspicious, because a compiler could
optimize out re-loading the value. Due to the smp_load_acquire, however,
we have at least one implied compiler barrier in this loop, which means
that will likely not happen.
Before jumping to 'data_race()', I would ask again: how bad is the
READ_ONCE? Is the generated code the same? If so, just use the
READ_ONCE. Do you want to reason about all compiler optimizations? For
this code here, I certainly don't want to.
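To make the hoisting concern concrete, here is a minimal, purely
hypothetical example (not the osq_lock code): without any barrier in the
loop body, the compiler may load the variable once and spin on a
register, whereas READ_ONCE() forces a fresh load each iteration.

	#include <linux/compiler.h>	/* READ_ONCE() */

	extern int flag;	/* hypothetical shared variable */

	void wait_plain(void)
	{
		/*
		 * Plain load: with no compiler barrier in the loop body,
		 * the compiler may hoist the load and spin on a stale value.
		 */
		while (flag == 0)
			;
	}

	void wait_once(void)
	{
		/* READ_ONCE() forces the load to be redone every iteration. */
		while (READ_ONCE(flag) == 0)
			;
	}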
But in the end it's up to what maintainers prefer, and maybe there is
a very compelling argument that I missed that makes the fact this is a
data race always safe.
Thanks,
-- Marco
> Signed-off-by: Qian Cai <cai@lca.pw>
> ---
> kernel/locking/osq_lock.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
> index 1f7734949ac8..3c44ddbc11ce 100644
> --- a/kernel/locking/osq_lock.c
> +++ b/kernel/locking/osq_lock.c
> @@ -154,7 +154,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
>  	 */
> 
>  	for (;;) {
> -		if (prev->next == node &&
> +		if (data_race(prev->next == node) &&
>  		    cmpxchg(&prev->next, node, NULL) == node)
> 			break;
> 
> --
> 2.21.0 (Apple Git-122.2)
>
* Re: [PATCH -next] locking/osq_lock: annotate a data race in osq_lock
From: Qian Cai @ 2020-02-11 11:57 UTC
To: Marco Elver; +Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, LKML
> On Feb 11, 2020, at 5:16 AM, Marco Elver <elver@google.com> wrote:
>
> I have said this before: we're not just guarding against load/store
> tearing, although on their own, they make it deceptively easy to
> reason about data races.
>
> The case here seems to be another instance of a C-CAS, to avoid
> unnecessarily dirtying a cacheline.
>
> Here, the loop would make me suspicious, because a compiler could
> optimize out re-loading the value. Due to the smp_load_acquire, however,
> we have at least one implied compiler barrier in this loop, which means
> that will likely not happen.
>
> Before jumping to 'data_race()', I would ask again: how bad is the
> READ_ONCE? Is the generated code the same? If so, just use the
> READ_ONCE. Do you want to reason about all compiler optimizations? For
> this code here, I certainly don't want to.
>
> But in the end it's up to what maintainers prefer, and maybe there is
> a very compelling argument that I missed that makes the fact this is a
> data race always safe.
Yes, I feel the locking maintainers prefer data_race() over blindly
adding READ_ONCE() unless there is strong evidence that the latter is
needed.
Since I can't prove it is strictly needed, nor which specific
optimization it would prevent, I chose the data_race() approach.
* Re: [PATCH -next] locking/osq_lock: annotate a data race in osq_lock
From: Peter Zijlstra @ 2020-02-11 12:47 UTC
To: Marco Elver; +Cc: Qian Cai, Ingo Molnar, Will Deacon, LKML
On Tue, Feb 11, 2020 at 11:16:05AM +0100, Marco Elver wrote:
> On Tue, 11 Feb 2020 at 05:07, Qian Cai <cai@lca.pw> wrote:
> >
> > prev->next could be accessed concurrently as noticed by KCSAN,
> >
> > write (marked) to 0xffff9d3370dbbe40 of 8 bytes by task 3294 on cpu 107:
> > osq_lock+0x25f/0x350
> > osq_wait_next at kernel/locking/osq_lock.c:79
> > (inlined by) osq_lock at kernel/locking/osq_lock.c:185
> > rwsem_optimistic_spin
> > <snip>
> >
> > read to 0xffff9d3370dbbe40 of 8 bytes by task 3398 on cpu 100:
> > osq_lock+0x196/0x350
> > osq_lock at kernel/locking/osq_lock.c:157
> > rwsem_optimistic_spin
> > <snip>
> >
> > The write only stores NULL to prev->next, and the read merely tests
> > whether prev->next equals this_cpu_ptr(&osq_node). Even if the value is
> > torn, the code still works correctly. Thus, mark it as an intentional
> > data race using the data_race() macro.
>
> I have said this before: we're not just guarding against load/store
> tearing, although on their own, they make it deceptively easy to
> reason about data races.
>
> The case here seems to be another instance of a C-CAS, to avoid
> unnecessarily dirtying a cacheline.
>
> Here, the loop would make me suspicious, because a compiler could
> optimize out re-loading the value. Due to the smp_load_acquire, however,
> we have at least one implied compiler barrier in this loop, which means
> that will likely not happen.
The loop has cpu_relax() (as any spin loop should have), which implies a
compiler barrier() and should keep the compiler from being funny.
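Sketched generically (with hypothetical names, not the per-architecture
kernel definitions): a "memory" clobber tells the compiler that memory
may have changed, so a cached value of prev->next cannot be reused on
the next iteration.

	#define compiler_barrier_sketch()	asm volatile("" ::: "memory")

	static inline void cpu_relax_sketch(void)
	{
		/*
		 * The "memory" clobber forces the compiler to discard cached
		 * memory values, so prev->next is re-loaded after this call.
		 * Real cpu_relax() implementations additionally emit a
		 * pause/yield style hint to the CPU.
		 */
		compiler_barrier_sketch();
	}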
That said, I feel it would be very good to mandate a comment with every
use of data_race(), just like we mandate a comment with memory barriers.
This comment can then explain why the data_race() annotation is correct.
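For instance, the check in this patch could carry something along these
lines (a sketch of the kind of comment meant, not a committed version):

	for (;;) {
		/*
		 * No READ_ONCE() needed: cpu_relax() below implies a
		 * compiler barrier, so prev->next is re-loaded on every
		 * iteration, and a torn or stale value only costs an
		 * extra loop or a failed cmpxchg(), never wrong behaviour.
		 */
		if (data_race(prev->next == node) &&
		    cmpxchg(&prev->next, node, NULL) == node)
			break;
		/* ... rest of the loop unchanged ... */
	}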