From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1759527AbbKTKCl (ORCPT );
	Fri, 20 Nov 2015 05:02:41 -0500
Received: from casper.infradead.org ([85.118.1.10]:57962 "EHLO casper.infradead.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752026AbbKTKCh (ORCPT );
	Fri, 20 Nov 2015 05:02:37 -0500
Date: Fri, 20 Nov 2015 11:02:30 +0100
From: Peter Zijlstra
To: Paul Turner
Cc: Ingo Molnar, Oleg Nesterov, LKML, Paul McKenney, boqun.feng@gmail.com,
	Jonathan Corbet, mhocko@kernel.org, dhowells@redhat.com,
	Linus Torvalds, will.deacon@arm.com
Subject: Re: [PATCH 2/4] sched: Document Program-Order guarantees
Message-ID: <20151120100230.GA17308@twins.programming.kicks-ass.net>
References: <20151102132901.157178466@infradead.org>
	<20151102134940.883198067@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.5.21 (2012-12-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Nov 02, 2015 at 12:27:05PM -0800, Paul Turner wrote:
> I suspect this part might be more explicitly expressed by specifying
> the requirements that migration satisfies; then providing an example.
> This makes it easier for others to reason about the locks and saves
> worrying about whether the examples hit our 3 million sub-cases.

Something like so then?

Note that this patch (obviously) comes after introducing
smp_cond_acquire(), simplifying the RELEASE/ACQUIRE on p->on_cpu.
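As an aside, the MIGRATION argument in the patch can be sketched in plain
userspace C; the below is purely an illustration of the lock-chain
ordering (toy types and made-up names, not the kernel's actual migration
path):

#include <pthread.h>

/* Toy runqueue; 'state' stands in for all per-rq scheduler state. */
struct toy_rq {
	pthread_mutex_t lock;
	int state;
};

static struct toy_rq toy_rq[2] = {
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
};

/*
 * Step B): synchronize *both* locks, in (c0, c1) order.  The LOCK of
 * toy_rq[c0].lock orders against step A)'s UNLOCK on the old cpu, and
 * step C)'s LOCK on the new cpu orders against the UNLOCK of
 * toy_rq[c1].lock here; lock transitivity chains A -> B -> C.
 */
static void toy_migrate(int c0, int c1)
{
	pthread_mutex_lock(&toy_rq[c0].lock);
	/* dequeue t from rq(c0) */
	pthread_mutex_unlock(&toy_rq[c0].lock);

	pthread_mutex_lock(&toy_rq[c1].lock);
	/* enqueue t on rq(c1) */
	pthread_mutex_unlock(&toy_rq[c1].lock);
}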
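The BLOCKING case instead rests on the RELEASE/ACQUIRE pair on ->on_cpu;
smp_cond_acquire() is roughly a busy-wait on the condition followed by a
barrier upgrading the control dependency to ACQUIRE. In C11 atomics the
same pairing looks like this (again a toy sketch with made-up names, not
the kernel code):

#include <stdatomic.h>

/* Toy task; 'wakeup_data' stands in for the state that led to the wakeup. */
struct toy_task {
	atomic_int on_cpu;
	int wakeup_data;
};

/* CPU0, schedule(): the final store once the task is done executing. */
static void toy_sched_out(struct toy_task *x)
{
	/* 1) smp_store_release(X->on_cpu, 0); releases all prior activity */
	atomic_store_explicit(&x->on_cpu, 0, memory_order_release);
}

/* CPU1, try_to_wake_up(): wait for the old cpu to let go of the task. */
static void toy_ttwu(struct toy_task *x)
{
	/* 2) smp_cond_acquire(!X->on_cpu); spin until clear, then ACQUIRE */
	while (atomic_load_explicit(&x->on_cpu, memory_order_acquire))
		;
	/*
	 * CPU0's prior activity on X is now visible here; the rq lock
	 * handoff (as per MIGRATION) extends that to whichever cpu ends
	 * up running X, so it is safe to set_task_cpu() and enqueue.
	 */
}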
---
Subject: sched: Document Program-Order guarantees
From: Peter Zijlstra
Date: Tue Nov 17 19:01:11 CET 2015

These are some notes on the scheduler locking and how it provides
program order guarantees on SMP systems.

Cc: "Paul E. McKenney"
Cc: Jonathan Corbet
Cc: Michal Hocko
Cc: David Howells
Cc: Linus Torvalds
Cc: Will Deacon
Cc: Oleg Nesterov
Cc: Boqun Feng
Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c |   91 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1916,6 +1916,97 @@ static void ttwu_queue(struct task_struc
 	raw_spin_unlock(&rq->lock);
 }
 
+/*
+ * Notes on Program-Order guarantees on SMP systems.
+ *
+ *  MIGRATION
+ *
+ * The basic program-order guarantee on SMP systems is that when a task [t]
+ * migrates, all its activity on its old cpu [c0] happens-before any subsequent
+ * execution on its new cpu [c1].
+ *
+ * For migration (of runnable tasks) this is provided by the following means:
+ *
+ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
+ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
+ *     rq(c1)->lock (if not at the same time, then in that order).
+ *  C) LOCK of the rq(c1)->lock scheduling in task
+ *
+ * Transitivity guarantees that B happens after A and C after B.
+ * Note: we only require RCpc transitivity.
+ * Note: the cpu doing B need not be c0 or c1
+ *
+ * Example:
+ *
+ *   CPU0            CPU1            CPU2
+ *
+ *   LOCK rq(0)->lock
+ *   sched-out X
+ *   sched-in Y
+ *   UNLOCK rq(0)->lock
+ *
+ *                                   LOCK rq(0)->lock // orders against CPU0
+ *                                   dequeue X
+ *                                   UNLOCK rq(0)->lock
+ *
+ *                                   LOCK rq(1)->lock
+ *                                   enqueue X
+ *                                   UNLOCK rq(1)->lock
+ *
+ *                   LOCK rq(1)->lock // orders against CPU2
+ *                   sched-out Z
+ *                   sched-in X
+ *                   UNLOCK rq(1)->lock
+ *
+ *
+ *  BLOCKING -- aka. SLEEP + WAKEUP
+ *
+ * For blocking we (obviously) need to provide the same guarantee as for
+ * migration. However, the means are completely different as there is no
+ * lock chain to provide order. Instead we do:
+ *
+ *   1) smp_store_release(X->on_cpu, 0)
+ *   2) smp_cond_acquire(!X->on_cpu)
+ *
+ * Example:
+ *
+ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
+ *
+ *   LOCK rq(0)->lock LOCK X->pi_lock
+ *   dequeue X
+ *   sched-out X
+ *   smp_store_release(X->on_cpu, 0);
+ *
+ *                    smp_cond_acquire(!X->on_cpu);
+ *                    X->state = WAKING
+ *                    set_task_cpu(X,2)
+ *
+ *                    LOCK rq(2)->lock
+ *                    enqueue X
+ *                    X->state = RUNNING
+ *                    UNLOCK rq(2)->lock
+ *
+ *                                          LOCK rq(2)->lock // orders against CPU1
+ *                                          sched-out Z
+ *                                          sched-in X
+ *                                          UNLOCK rq(2)->lock
+ *
+ *                    UNLOCK X->pi_lock
+ *   UNLOCK rq(0)->lock
+ *
+ *
+ * However, for wakeups there is a second guarantee we must provide, namely we
+ * must observe the state that led to our wakeup. That is, not only must our
+ * task observe its own prior state, it must also observe the stores prior to
+ * its wakeup.
+ *
+ * This means that any means of doing remote wakeups must order the CPU doing
+ * the wakeup against the CPU the task is going to end up running on. This,
+ * however, is already required for the regular Program-Order guarantee above,
+ * since the waking CPU is the one issuing the ACQUIRE (2).
+ */
+
 /**
  * try_to_wake_up - wake up a thread
  * @p: the thread to be awakened