From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756070AbbDOKBc (ORCPT );
	Wed, 15 Apr 2015 06:01:32 -0400
Received: from mail-wg0-f51.google.com ([74.125.82.51]:33175 "EHLO
	mail-wg0-f51.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1754452AbbDOKAj (ORCPT );
	Wed, 15 Apr 2015 06:00:39 -0400
From: Daniel Lezcano 
To: peterz@infradead.org, rjw@rjwysocki.net
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	nicolas.pitre@linaro.org, Ingo Molnar 
Subject: [PATCH 3/3] sched: fair: Fix wrong idle timestamp usage
Date: Wed, 15 Apr 2015 12:00:24 +0200
Message-Id: <1429092024-20498-3-git-send-email-daniel.lezcano@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1429092024-20498-1-git-send-email-daniel.lezcano@linaro.org>
References: <1429092024-20498-1-git-send-email-daniel.lezcano@linaro.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

find_idlest_cpu() assumes that rq->idle_stamp reflects when the cpu
entered the idle state. This is wrong, as the cpu may exit and enter
the idle state several times without rq->idle_stamp being updated.
We have two pieces of information here:

 * rq->idle_stamp gives when the idle task was scheduled
 * idle->idle_stamp gives when the cpu entered the idle state

The patch fixes this by using the latter information and falling back
to the rq's timestamp when the idle state is not accessible.

Signed-off-by: Daniel Lezcano 
---
 kernel/sched/fair.c | 42 ++++++++++++++++++++++++++++--------------
 1 file changed, 28 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 46855d0..b44f1ad 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4704,21 +4704,35 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 		if (idle_cpu(i)) {
 			struct rq *rq = cpu_rq(i);
 			struct cpuidle_state *idle = idle_get_state(rq);
-			if (idle && idle->exit_latency < min_exit_latency) {
-				/*
-				 * We give priority to a CPU whose idle state
-				 * has the smallest exit latency irrespective
-				 * of any idle timestamp.
-				 */
-				min_exit_latency = idle->exit_latency;
-				latest_idle_timestamp = rq->idle_stamp;
-				shallowest_idle_cpu = i;
-			} else if ((!idle || idle->exit_latency == min_exit_latency) &&
-				   rq->idle_stamp > latest_idle_timestamp) {
+
+			if (idle) {
+				if (idle->exit_latency < min_exit_latency) {
+					/*
+					 * We give priority to a CPU
+					 * whose idle state has the
+					 * smallest exit latency
+					 * irrespective of any idle
+					 * timestamp.
+					 */
+					min_exit_latency = idle->exit_latency;
+					latest_idle_timestamp = idle->idle_stamp;
+					shallowest_idle_cpu = i;
+				} else if (idle->exit_latency == min_exit_latency &&
+					   idle->idle_stamp > latest_idle_timestamp) {
+					/*
+					 * If the CPU is in the same
+					 * idle state, choose the more
+					 * recent one as it might have
+					 * a warmer cache
+					 */
+					latest_idle_timestamp = idle->idle_stamp;
+					shallowest_idle_cpu = i;
+				}
+			} else if (rq->idle_stamp > latest_idle_timestamp) {
 				/*
-				 * If equal or no active idle state, then
-				 * the most recently idled CPU might have
-				 * a warmer cache.
+				 * If no active idle state, then the
+				 * most recent idled CPU might have a
+				 * warmer cache
 				 */
 				latest_idle_timestamp = rq->idle_stamp;
 				shallowest_idle_cpu = i;
-- 
1.9.1
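
[Editor's aside, not part of the patch: the selection logic after this change can be sketched as a standalone user-space program. The struct layouts and the helper name find_shallowest_idle_cpu() below are simplified stand-ins, not the kernel definitions; the point is only to show the two-level preference the patch implements: shallowest exit latency first, then most recent idle->idle_stamp, with rq->idle_stamp as the fallback when no idle state is accessible.]

```c
#include <stddef.h>

/* Simplified stand-ins for the kernel structures. */
struct cpuidle_state {
	unsigned int exit_latency;	/* worst-case exit latency */
	unsigned long long idle_stamp;	/* when the cpu entered this state */
};

struct rq {
	unsigned long long idle_stamp;	/* when the idle task was scheduled */
	struct cpuidle_state *idle;	/* NULL when not accessible */
};

/*
 * Mirror of the patched find_idlest_cpu() loop: prefer the cpu in the
 * shallowest idle state; among equally shallow cpus, prefer the one
 * that entered idle most recently (warmer cache).  When no idle state
 * information exists, fall back to rq->idle_stamp.
 */
static int find_shallowest_idle_cpu(struct rq *rqs, int nr_cpus)
{
	unsigned int min_exit_latency = (unsigned int)-1;
	unsigned long long latest_idle_timestamp = 0;
	int shallowest_idle_cpu = -1;
	int i;

	for (i = 0; i < nr_cpus; i++) {
		struct cpuidle_state *idle = rqs[i].idle;

		if (idle) {
			if (idle->exit_latency < min_exit_latency) {
				min_exit_latency = idle->exit_latency;
				latest_idle_timestamp = idle->idle_stamp;
				shallowest_idle_cpu = i;
			} else if (idle->exit_latency == min_exit_latency &&
				   idle->idle_stamp > latest_idle_timestamp) {
				latest_idle_timestamp = idle->idle_stamp;
				shallowest_idle_cpu = i;
			}
		} else if (rqs[i].idle_stamp > latest_idle_timestamp) {
			/* No idle state info: use the rq's timestamp. */
			latest_idle_timestamp = rqs[i].idle_stamp;
			shallowest_idle_cpu = i;
		}
	}
	return shallowest_idle_cpu;
}
```

Note how, with this structure, idle->idle_stamp is only ever compared against timestamps of cpus whose idle state was accessible, which is the correctness point of the patch: rq->idle_stamp is no longer mistaken for the idle-state entry time.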