From: Nicholas Piggin <npiggin@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
	Will Deacon <will.deacon@arm.com>,
	Oleg Nesterov <oleg@redhat.com>,
	Paul McKenney <paulmck@linux.vnet.ibm.com>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@kernel.org>,
	Alan Stern <stern@rowland.harvard.edu>
Subject: Re: Question on smp_mb__before_spinlock
Date: Mon, 12 Sep 2016 12:27:08 +1000
Message-ID: <20160912122708.71a91ea3@roar.ozlabs.ibm.com>
In-Reply-To: <20160907132354.GR10138@twins.programming.kicks-ass.net>

On Wed, 7 Sep 2016 15:23:54 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> On Wed, Sep 07, 2016 at 10:17:26PM +1000, Nicholas Piggin wrote:
> > >  /*
> > > + * This barrier must provide two things:
> > > + *
> > > + *   - it must guarantee a STORE before the spin_lock() is ordered against a
> > > + *     LOAD after it, see the comments at its two usage sites.
> > > + *
> > > + *   - it must ensure the critical section is RCsc.
> > > + *
> > > + * The latter is important for cases where we observe values written by other
> > > + * CPUs in spin-loops, without barriers, while being subject to scheduling.
> > > + *
> > > + * CPU0			CPU1			CPU2
> > > + * 
> > > + * 			for (;;) {
> > > + * 			  if (READ_ONCE(X))
> > > + * 			  	break;
> > > + * 			}
> > > + * X=1
> > > + * 			<sched-out>
> > > + * 						<sched-in>
> > > + * 						r = X;
> > > + *
> > > + * without transitivity it could be that CPU1 observes X!=0 and breaks the loop,
> > > + * we get migrated, and CPU2 still sees X==0.
> > > + *
> > > + * Since most load-store architectures implement ACQUIRE with an smp_mb() after
> > > + * the LL/SC loop, they need no further barriers. Similarly all our TSO
> > > + * architectures imply an smp_mb() for each atomic instruction and equally don't
> > > + * need more.
> > > + *
> > > + * Architectures that can implement ACQUIRE better need to take care.
> > >   */
> > > +#ifndef smp_mb__after_spinlock
> > > +#define smp_mb__after_spinlock()	do { } while (0)
> > >  #endif  
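
(To spell out how I read the default above: an architecture whose spin_lock()
acquire is weaker than a full barrier would override it, presumably with
something like the sketch below -- the smp_mb() choice is my assumption, not
part of the patch -- and the usage sites would then invoke it right after
taking the lock.)

	/* e.g. in such an architecture's asm/spinlock.h -- sketch only */
	#define smp_mb__after_spinlock()	smp_mb()
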
> > 
> > It seems okay, but why not make it a special sched-only function name
> > to prevent it being used in generic code?
> > 
> > I would not mind seeing responsibility for the switch barrier moved to
> > generic context switch code like this (the alternative for powerpc, to reduce
> > the number of hwsync instructions, was to add documentation and warnings about
> > the barriers in both arch-dependent and arch-independent code). And pairing it
> > with a spinlock is reasonable.
> > 
> > It may not strictly be an "smp_" style of barrier if MMIO accesses are to
> > be ordered here too, given that the critical section may only be providing
> > acquire/release for cacheable memory, so maybe it's slightly more
> > complicated than just cacheable RCsc?
> 
> Interesting idea..
> 
> So I'm not a fan of that raw_spin_lock wrapper, since that would end up
> with a lot more boiler-plate code than just the one extra barrier.

/* generic fallback: plain raw_spin_lock(), no extra barrier */
#ifndef sched_ctxsw_raw_spin_lock
#define sched_ctxsw_raw_spin_lock(lock)	raw_spin_lock(lock)
#endif

/* and an architecture that wants a full barrier folded in would define: */
#define sched_ctxsw_raw_spin_lock(lock)	do { smp_mb(); raw_spin_lock(lock); } while (0)

?
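
i.e. the generic context switch code just takes the rq lock through the
wrapper and each arch decides whether anything extra gets folded in. For
comparison with your approach, where generic code fixes the barrier
placement (sketch only):

	/* wrapper variant -- arch chooses the barrier, if any */
	sched_ctxsw_raw_spin_lock(&rq->lock);

	/* smp_mb__after_spinlock() variant -- generic code places the barrier */
	raw_spin_lock(&rq->lock);
	smp_mb__after_spinlock();	/* no-op unless the arch overrides it */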


> But moving MMIO/DMA/TLB etc.. barriers into this spinlock might not be a
> good idea, since those are typically fairly heavy barriers, and its
> quite common to call schedule() without ending up in switch_to().

That's true I guess, but if we already have the arch-specific smp_mb__ hook
specifically for this context switch code, and you are asking architectures to
implement a *cacheable* memory barrier vs migration, then I see no reason not
to allow them to cover uncacheable accesses as well.
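
For example (purely a sketch of what I imagine for powerpc, not a proposal):
a single hwsync in front of the lock orders cacheable and cache-inhibited
accesses alike against the migration, so one override covers both and
switch_to() would not need its own sync:

	/* hypothetical arch/powerpc override -- sketch only */
	#define sched_ctxsw_raw_spin_lock(lock)			\
	do {							\
		mb();	/* hwsync orders cacheable and MMIO */	\
		raw_spin_lock(lock);				\
	} while (0)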

You make a good point about schedule() without switch_to(), but
architectures will still have no less flexibility than they do now.

Thanks,
Nick
