From: Peter Zijlstra <peterz@infradead.org>
To: Manfred Spraul <manfred@colorfullife.com>
Cc: Oleg Nesterov <oleg@redhat.com>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Kirill Tkhai <ktkhai@parallels.com>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>
Subject: Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles
Date: Fri, 20 Feb 2015 19:45:51 +0100
Message-ID: <20150220184551.GQ2896@worktop.programming.kicks-ass.net>
In-Reply-To: <54E77CC0.5030401@colorfullife.com>

On Fri, Feb 20, 2015 at 07:28:16PM +0100, Manfred Spraul wrote:

> >We need the full barrier to serialize STOREs as well, but probably we can
> >rely on the control dependency and thus we only need rmb().
> Do we need a full barrier or not?
> 
> I don't manage to create a proper line of reasoning.

I think I agree with Oleg in that we only need the smp_rmb(); of course
that wants a somewhat elaborate comment to go along with it. How about
something like so:

	spin_unlock_wait(&local);
	/*
	 * The above spin_unlock_wait() forms a control dependency with
	 * any following stores, because we must first observe the lock
	 * unlocked and we cannot speculate stores.
	 *
	 * Subsequent loads, however, can easily pass the loads
	 * represented by spin_unlock_wait(), and therefore we need the
	 * read barrier.
	 *
	 * Together this is stronger than an ACQUIRE on @local, and
	 * therefore we will observe the complete prior critical section
	 * of @local.
	 */
	smp_rmb();
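
Purely for illustration (the function names and the shared variable below are
made up, not from the patch), this is how the sequence is meant to pair with a
writer's critical section on @local:

	static DEFINE_SPINLOCK(local);
	static int shared_data;

	void writer(void)
	{
		spin_lock(&local);
		shared_data = 1;	/* store inside the critical section */
		spin_unlock(&local);	/* RELEASE orders the store before the unlock */
	}

	int reader(void)
	{
		spin_unlock_wait(&local);
		/*
		 * The control dependency keeps our later stores from being
		 * issued before we observed the lock unlocked ...
		 */
		smp_rmb();
		/*
		 * ... and the rmb() orders our later loads; if the wait
		 * observed the writer's unlock, this returns 1.
		 */
		return shared_data;
	}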

The obvious alternative is implementing spin_unlock_wait() with an
smp_load_acquire(), but that might be more expensive on some archs due
to the repeated issuing of memory barriers.
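
For comparison, a minimal sketch of that alternative, assuming a simplified
lock word where 0 means unlocked rather than a real arch spinlock; the point
is only that the ACQUIRE ends up inside the wait loop, i.e. potentially one
barrier per spin on such architectures:

	/*
	 * Hypothetical acquire-based unlock_wait over a plain int lock
	 * word; not an actual arch implementation.
	 */
	static inline void unlock_wait_acquire(int *lock)
	{
		while (smp_load_acquire(lock))	/* ACQUIRE every iteration */
			cpu_relax();
	}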



Thread overview: 46+ messages
     [not found] <20150217104516.12144.85911.stgit@tkhai>
2015-02-17 10:47 ` [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles Kirill Tkhai
2015-02-17 12:12   ` Peter Zijlstra
2015-02-17 12:36     ` Kirill Tkhai
2015-02-17 12:45       ` Peter Zijlstra
2015-02-17 13:05     ` Peter Zijlstra
2015-02-17 16:05       ` Paul E. McKenney
2015-02-17 18:01         ` Paul E. McKenney
2015-02-17 18:23           ` Peter Zijlstra
2015-02-17 21:45             ` Paul E. McKenney
2015-02-18 13:41               ` Peter Zijlstra
2015-02-17 18:36         ` Peter Zijlstra
2015-02-17 21:52           ` Paul E. McKenney
2015-02-18 13:47             ` Peter Zijlstra
2015-02-18 18:43               ` Paul E. McKenney
2015-02-18 15:53             ` Oleg Nesterov
2015-02-18 16:11               ` Peter Zijlstra
2015-02-18 16:32                 ` Oleg Nesterov
2015-02-18 19:23                   ` Paul E. McKenney
2015-02-18 15:59             ` Oleg Nesterov
2015-02-18 19:14               ` Manfred Spraul
2015-02-18 22:43                 ` Peter Zijlstra
2015-02-19 14:19                   ` Oleg Nesterov
2015-02-20 18:28                     ` Manfred Spraul
2015-02-20 18:45                       ` Peter Zijlstra [this message]
2015-02-20 20:23                         ` Oleg Nesterov
2015-02-21 12:54                           ` Peter Zijlstra
2015-04-25 19:56                         ` Paul E. McKenney
2015-04-26 10:52                           ` Paul E. McKenney
2015-04-28 14:33                             ` Peter Zijlstra
2015-04-28 15:53                               ` Chris Metcalf
2015-04-28 16:24                                 ` Peter Zijlstra
2015-04-28 16:44                                   ` [PATCH] spinlock: clarify doc for raw_spin_unlock_wait() Chris Metcalf
2015-04-29 17:34                                     ` Manfred Spraul
2015-04-28 17:33                                   ` [PATCH 1/2] tile: modify arch_spin_unlock_wait() semantics Chris Metcalf
2015-04-28 17:33                                     ` [PATCH 2/2] tile: use READ_ONCE() in arch_spin_is_locked() Chris Metcalf
2015-04-28 16:40                                 ` [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles Peter Zijlstra
2015-04-28 16:58                                   ` Chris Metcalf
2015-04-28 17:43                                     ` Peter Zijlstra
2015-04-28 18:00                                       ` Chris Metcalf
2015-04-28 18:24                                         ` Peter Zijlstra
2015-04-28 18:38                                           ` Chris Metcalf
2015-04-28 14:32                           ` Peter Zijlstra
2015-04-28 20:33                             ` Paul E. McKenney
2015-02-21  3:26                       ` Paul E. McKenney
2015-02-23 18:29                         ` Paul E. McKenney
2015-02-18 17:05     ` [tip:sched/core] sched: Clarify ordering between task_rq_lock() and move_queued_task() tip-bot for Peter Zijlstra
