From: "Paul E. McKenney" <paulmck@kernel.org>
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
	Steven Rostedt <rostedt@goodmis.org>, linke li <lilinke99@qq.com>,
	joel@joelfernandes.org, boqun.feng@gmail.com, dave@stgolabs.net,
	frederic@kernel.org, jiangshanlai@gmail.com,
	josh@joshtriplett.org, linux-kernel@vger.kernel.org,
	qiang.zhang1211@gmail.com, quic_neeraju@quicinc.com,
	rcu@vger.kernel.org
Subject: Re: [PATCH] rcutorture: Fix rcu_torture_pipe_update_one()/rcu_torture_writer() data race and concurrency bug
Date: Wed, 6 Mar 2024 19:37:13 -0800
Message-ID: <72b14322-78c1-4479-9c4e-b0e11c1f0d53@paulmck-laptop>
In-Reply-To: <851dc594-d2ea-4050-b7c6-e33a1cddf3a1@efficios.com>

On Wed, Mar 06, 2024 at 10:06:21PM -0500, Mathieu Desnoyers wrote:
> On 2024-03-06 21:43, Linus Torvalds wrote:
> [...]
> > 
> > Honestly, this all makes me think that we'd be *much* better off
> > showing the real "handoff" with smp_store_release() and
> > smp_load_acquire().
> 
> We've done something similar in liburcu (userspace code) to allow
> Thread Sanitizer to understand the happens-before relationships
> within the RCU implementations and lock-free data structures.
> 
> Moving to load-acquire/store-release (C11 model in our case)
> allowed us to provide enough happens-before relationships for
> Thread Sanitizer to understand what is happening under the
> hood in liburcu and perform relevant race detection of user
> code.

Good point!

In the kernel, though, KCSAN understands the Linux-kernel memory model,
and so we don't get that sort of false positive.
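
For concreteness, here is a minimal sketch of the smp_store_release()/
smp_load_acquire() handoff that Linus is suggesting.  This is purely
illustrative (the struct and function names are made up), not code from
any tree:

struct handoff {
	int payload;
	int ready;
};

// Producer: publish the payload, then release-store the flag.
static void handoff_produce(struct handoff *h, int val)
{
	h->payload = val;
	smp_store_release(&h->ready, 1); // Pair with smp_load_acquire().
}

// Consumer: acquire-load the flag before reading the payload.
static bool handoff_consume(struct handoff *h, int *val)
{
	if (!smp_load_acquire(&h->ready)) // Pair with smp_store_release().
		return false;
	*val = h->payload; // Guaranteed to see the producer's store.
	return true;
}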

> As far as the WRITE_ONCE(x, READ_ONCE(x) + 1) pattern
> is concerned, the only valid use-case I can think of is
> split counters or RCU implementations where there is a
> single updater doing the increment, and one or more
> concurrent reader threads that need to snapshot a
> consistent value with READ_ONCE().

That pattern is wrong here in rcutorture.  OK, not wrong from a
functional viewpoint, but it is at best very confusing.  I am applying
patches to fix this.  I will push out a new "dev" branch on -rcu that
will make this function appear as shown below.

So what would you use that pattern for?

One possibility is a per-CPU statistical counter in userspace on a
fastpath, in cases where losing the occasional count is OK.  Then the
sequence of learning your CPU (and possibly being migrated to some
other CPU immediately afterwards), doing a READ_ONCE() of that CPU's
count, incrementing it, and writing it back with WRITE_ONCE() might
(or might not) make sense.
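
Something like the following userspace sketch, in which C11 relaxed
atomics stand in for READ_ONCE() and WRITE_ONCE().  This is only an
illustration under those assumptions (the names and the array size are
made up):

#define _GNU_SOURCE
#include <sched.h>
#include <stdatomic.h>

#define MAX_CPUS 4096

static _Atomic unsigned long stat_count[MAX_CPUS];

// Increment this CPU's counter.  Migration between sched_getcpu() and
// the store can cause two threads to hit the same slot, so an
// occasional count can be lost, which is the stated trade-off.
static inline void stat_inc(void)
{
	int cpu = sched_getcpu();
	unsigned long v;

	v = atomic_load_explicit(&stat_count[cpu], memory_order_relaxed);
	atomic_store_explicit(&stat_count[cpu], v + 1, memory_order_relaxed);
}

// Reader sums the per-CPU slots for an approximate total.
static unsigned long stat_read(void)
{
	unsigned long sum = 0;

	for (int i = 0; i < MAX_CPUS; i++)
		sum += atomic_load_explicit(&stat_count[i],
					    memory_order_relaxed);
	return sum;
}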

I suppose the same pattern could apply in the kernel, given a fastpath
so extreme that you could not afford to disable preemption.
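
A hypothetical kernel-side sketch of that extreme fastpath (the counter
name is made up):

static DEFINE_PER_CPU(unsigned long, example_stat_count);

// No preempt_disable(), so the task can migrate between the read and
// the write and occasionally lose a count, which a statistical counter
// can tolerate.
static inline void example_stat_inc(void)
{
	unsigned long *p = raw_cpu_ptr(&example_stat_count);

	WRITE_ONCE(*p, READ_ONCE(*p) + 1);
}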

At best, very niche.

Or am I suffering a failure of imagination yet again?  ;-)

							Thanx, Paul

------------------------------------------------------------------------

static bool
rcu_torture_pipe_update_one(struct rcu_torture *rp)
{
	int i;
	struct rcu_torture_reader_check *rtrcp = READ_ONCE(rp->rtort_chkp);

	if (rtrcp) {
		WRITE_ONCE(rp->rtort_chkp, NULL);
		smp_store_release(&rtrcp->rtc_ready, 1); // Pair with smp_load_acquire().
	}
	i = rp->rtort_pipe_count;
	if (i > RCU_TORTURE_PIPE_LEN)
		i = RCU_TORTURE_PIPE_LEN;
	atomic_inc(&rcu_torture_wcount[i]);
	WRITE_ONCE(rp->rtort_pipe_count, i + 1);
	ASSERT_EXCLUSIVE_WRITER(rp->rtort_pipe_count);
	if (i + 1 >= RCU_TORTURE_PIPE_LEN) {
		rp->rtort_mbtest = 0;
		return true;
	}
	return false;
}
