From: "Paul E. McKenney" <paulmck@kernel.org>
To: Alan Stern <stern@rowland.harvard.edu>
Cc: Andrea Parri <parri.andrea@gmail.com>,
	Jonas Oberhauser <jonas.oberhauser@huaweicloud.com>,
	will@kernel.org, peterz@infradead.org, boqun.feng@gmail.com,
	npiggin@gmail.com, dhowells@redhat.com, j.alglave@ucl.ac.uk,
	luc.maranget@inria.fr, akiyks@gmail.com, dlustig@nvidia.com,
	joel@joelfernandes.org, urezki@gmail.com,
	quic_neeraju@quicinc.com, frederic@kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/2] tools/memory-model: Make ppo a subrelation of po
Date: Sun, 29 Jan 2023 08:21:56 -0800
Message-ID: <20230129162156.GG2948950@paulmck-ThinkPad-P17-Gen-1>
In-Reply-To: <Y9aY4hG3p+82vVIw@rowland.harvard.edu>

On Sun, Jan 29, 2023 at 11:03:46AM -0500, Alan Stern wrote:
> On Sat, Jan 28, 2023 at 09:17:34PM -0800, Paul E. McKenney wrote:
> > On Sat, Jan 28, 2023 at 05:59:52PM -0500, Alan Stern wrote:
> > > On Sat, Jan 28, 2023 at 11:14:17PM +0100, Andrea Parri wrote:
> > > > > Evidently the plain-coherence check rules out x=1 at the 
> > > > > end, because when I relax that check, x=1 becomes a possible result.  
> > > > > Furthermore, the graphical output confirms that this execution has a 
> > > > > ww-incoh edge from Wx=2 to Wx=1.  But there is no ww-vis edge from Wx=1 
> > > > > to Wx=2!  How can this be possible?  It seems like a bug in herd7.
> > > > 
> > > > By default, herd7 removes some edges when generating the
> > > > graphical output.  The -showraw option can be used to increase
> > > > the "verbosity"; for example:
> > > > 
> > > >   [with "exists (x=2)", output in /tmp/T.dot]
> > > >   $ herd7 -conf linux-kernel.cfg T.litmus -show prop -o /tmp -skipchecks plain-coherence -doshow ww-vis -showraw ww-vis
> > > 
> > > Okay, thanks, that helps a lot.
> > > 
> > > So here's what we've got.  The litmus test:
> > > 
> > > 
> > > C hb-and-int
> > > {}
> > > 
> > > P0(int *x, int *y)
> > > {
> > >     *x = 1;
> > >     smp_store_release(y, 1);
> > > }
> > > 
> > > P1(int *x, int *y, int *dx, int *dy, spinlock_t *l)
> > > {
> > >     spin_lock(l);
> > >     int r1 = READ_ONCE(*dy);
> > >     if (r1 == 1)
> > >         spin_unlock(l);
> > > 
> > >     int r0 = smp_load_acquire(y);
> > >     if (r0 == 1) {
> > >         WRITE_ONCE(*dx, 1);
> > >     }
> > 
> > The lack of a spin_unlock() when r1!=1 is intentional?
> 
> I assume so.
> 
> > It is admittedly a cute way to prevent P3 from doing anything
> > when r1!=1.  And P1 won't do anything if P3 runs first.
> 
> Right.
> 
> > > }
> > > 
> > > P2(int *dx, int *dy)
> > > {
> > >     WRITE_ONCE(*dy, READ_ONCE(*dx));
> > > }
> > > 
> > > 
> > > P3(int *x, spinlock_t *l)
> > > {
> > >     spin_lock(l);
> > >     smp_mb__after_unlock_lock();
> > >     *x = 2;
> > > }
> > > 
> > > exists (x=2)
> > > 
> > > 
> > > The reason why Wx=1 ->ww-vis Wx=2:
> > > 
> > > 	0:Wx=1 ->po-rel 0:Wy=1 and po-rel <= fence <= w-post-bounded.
> > > 
> > > 	0:Wy=1 ->rfe 1:Ry=1 ->(hb* & int) 1:Rdy=1 and
> > > 		(rfe ; hb* & int) <= (rfe ; xbstar & int) <= vis.
> > > 
> > > 	1:Rdy=1 ->po 1:unlock ->rfe 3:lock ->po 3:Wx=2
> > > 		so 1:Rdy=1 ->po-unlock-lock-po 3:Wx=2
> > > 		and po-unlock-lock-po <= mb <= fence <= w-pre-bounded.
> > > 
> > > Finally, w-post-bounded ; vis ; w-pre-bounded <= ww-vis.
> > > 
> > > This explains why the memory model says there isn't a data race.  This 
> > > doesn't use the smp_mb__after_unlock_lock at all.
> > 
> > You lost me on this one.
> > 
> > Suppose that P3 starts first, then P0.  P1 is then stuck at the
> > spin_lock() because P3 does not release that lock.  P2 goes out for a
> > pizza.
> 
> That wouldn't be a valid execution.  One of the rules in lock.cat says 
> that a spin_lock() call must read from a spin_unlock() or from an 
> initial write, which rules out executions in which P3 acquires the lock 
> first.

OK, I will bite...

Why can't P3's spin_lock() read from that initial write?
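
For anyone following along at home, my understanding is that your
ww-vis chain above composes these definitions from
tools/memory-model/linux-kernel.cat (quoted from memory, so caveat
lector):

	(* excerpted from memory -- please check the real linux-kernel.cat *)
	let xbstar = (hb | pb | rb)*
	let vis = cumul-fence* ; rfe? ; [Marked] ;
		((strong-fence ; [Marked] ; xbstar) | (xbstar & int))

	let w-pre-bounded = [Marked] ; (addr | fence)?
	let w-post-bounded = fence? ; [Marked]

	let ww-vis = fence | (strong-fence ; xbstar ; w-pre-bounded) |
		(w-post-bounded ; vis ; w-pre-bounded)

If I am reading it right, your three steps land in w-post-bounded,
vis, and w-pre-bounded, respectively, and the last definition glues
them together into ww-vis.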

> > Why can't the two stores to x by P0 and P3 conflict, resulting in a
> > data race?
> 
> That can't happen in executions where P1 acquires the lock first for the 
> reason outlined above (P0's store to x propagates to P3 before P3 writes 
> to x).  And there are no other executions -- basically, herd7 ignores 
> deadlock scenarios.

True enough, if P1 gets there first, then P3 never stores to x.

What I don't understand is why P1 must always get there first.
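
Sketching by hand the execution I have in mind (hypothetical, and
presumably the very thing that lock.cat rule is intended to forbid):

	P3: spin_lock(l);		/* reads the lock's initial write? */
	P3: smp_mb__after_unlock_lock();
	P3: *x = 2;			/* plain write to x */
	P0: *x = 1;			/* conflicting plain write to x */
	P0: smp_store_release(y, 1);
	P1: spin_lock(l);		/* never succeeds: P3 never unlocks l */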

							Thanx, Paul
