Message-ID: <1424170021.5749.22.camel@tkhai>
Subject: [PATCH 2/2] sched: Add smp_rmb() in task rq locking cycles
From: Kirill Tkhai
To:
CC: Peter Zijlstra, Ingo Molnar, Josh Poimboeuf
Date: Tue, 17 Feb 2015 13:47:01 +0300
In-Reply-To: <20150217104516.12144.85911.stgit@tkhai>
References: <20150217104516.12144.85911.stgit@tkhai>
Organization: Parallels
X-Mailing-List: linux-kernel@vger.kernel.org

We migrate a task using the TASK_ON_RQ_MIGRATING state of on_rq:

	raw_spin_lock(&old_rq->lock);
	deactivate_task(old_rq, p, 0);
	p->on_rq = TASK_ON_RQ_MIGRATING;
	set_task_cpu(p, new_cpu);
	raw_spin_unlock(&old_rq->lock);

I.e.:

	write TASK_ON_RQ_MIGRATING
	smp_wmb() (in __set_task_cpu)
	write new_cpu

But {,__}task_rq_lock() do not use smp_rmb(), so they may observe the
new cpu and TASK_ON_RQ_MIGRATING in the opposite order. In that case
{,__}task_rq_lock() locks new_rq before the task is actually queued
on it. Fix that by ordering the reads.
Fixes: cca26e8009d1 ("sched: Teach scheduler to understand TASK_ON_RQ_MIGRATING state")
Signed-off-by: Kirill Tkhai
---
 kernel/sched/core.c  | 8 ++++++--
 kernel/sched/sched.h | 8 ++++++--
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fc12a1d..a42fb88 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -319,8 +319,12 @@ static struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags)
 		raw_spin_lock_irqsave(&p->pi_lock, *flags);
 		rq = task_rq(p);
 		raw_spin_lock(&rq->lock);
-		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-			return rq;
+		if (likely(rq == task_rq(p))) {
+			/* Pairs with smp_wmb() in __set_task_cpu() */
+			smp_rmb();
+			if (likely(!task_on_rq_migrating(p)))
+				return rq;
+		}
 		raw_spin_unlock(&rq->lock);
 		raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f65f57c..4d7b03c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1031,8 +1031,12 @@ static inline struct rq *__task_rq_lock(struct task_struct *p)
 	for (;;) {
 		rq = task_rq(p);
 		raw_spin_lock(&rq->lock);
-		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-			return rq;
+		if (likely(rq == task_rq(p))) {
+			/* Pairs with smp_wmb() in __set_task_cpu() */
+			smp_rmb();
+			if (likely(!task_on_rq_migrating(p)))
+				return rq;
+		}
 		raw_spin_unlock(&rq->lock);

 		while (unlikely(task_on_rq_migrating(p)))