From: Peter Zijlstra <peterz@infradead.org>
To: Qian Cai <cai@lca.pw>
Cc: mingo@redhat.com, will@kernel.org, elver@google.com,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] locking/osq_lock: fix a data race in osq_wait_next
Date: Wed, 22 Jan 2020 18:09:44 +0100
Message-ID: <20200122170944.GZ14879@hirez.programming.kicks-ass.net>
In-Reply-To: <20200122163857.4605-1-cai@lca.pw>
On Wed, Jan 22, 2020 at 11:38:57AM -0500, Qian Cai wrote:
> KCSAN complains,
>
> write (marked) to 0xffff941ca3b3be00 of 8 bytes by task 670 on cpu 6:
> osq_lock+0x24c/0x340
> __mutex_lock+0x277/0xd20
> mutex_lock_nested+0x31/0x40
> memcg_create_kmem_cache+0x2e/0x190
> memcg_kmem_cache_create_func+0x40/0x80
> process_one_work+0x54c/0xbe0
> worker_thread+0x80/0x650
> kthread+0x1e0/0x200
> ret_from_fork+0x27/0x50
>
> read to 0xffff941ca3b3be00 of 8 bytes by task 703 on cpu 44:
> osq_lock+0x18e/0x340
> __mutex_lock+0x277/0xd20
> mutex_lock_nested+0x31/0x40
> memcg_create_kmem_cache+0x2e/0x190
> memcg_kmem_cache_create_func+0x40/0x80
> process_one_work+0x54c/0xbe0
> worker_thread+0x80/0x650
> kthread+0x1e0/0x200
> ret_from_fork+0x27/0x50
That's useless gibberish; at the very least run it through a decoder
(e.g. scripts/decode_stacktrace.sh) so we get line numbers.
> which points to those lines in osq_wait_next(),
>
> next = xchg(&node->next, NULL);
> if (next)
> break;
>
> Since only the read is outside of the critical sections, fix it by
> adding a READ_ONCE().
What?!?! Did you actually read what you wrote?
Also, you have to stop calling things fixes unless you can prove (and
explain) there is an actual problem.
> Signed-off-by: Qian Cai <cai@lca.pw>
> ---
> kernel/locking/osq_lock.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
> index 6ef600aa0f47..8f565165019a 100644
> --- a/kernel/locking/osq_lock.c
> +++ b/kernel/locking/osq_lock.c
> @@ -77,7 +77,7 @@ osq_wait_next(struct optimistic_spin_queue *lock,
> */
> if (node->next) {
> next = xchg(&node->next, NULL);
> - if (next)
> + if (READ_ONCE(next))
> break;
> }
>
This suggests you ought to brush up on your C skills.