From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752455AbaFXHx6 (ORCPT );
	Tue, 24 Jun 2014 03:53:58 -0400
Received: from relay.parallels.com ([195.214.232.42]:42499 "EHLO relay.parallels.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752164AbaFXHx4 (ORCPT );
	Tue, 24 Jun 2014 03:53:56 -0400
Message-ID: <1403596432.3462.26.camel@tkhai>
Subject: [PATCH v2 1/3] sched/fair: Disable runtime_enabled on dying rq
From: Kirill Tkhai
To:
CC: Peter Zijlstra, Ingo Molnar, Srikar Dronamraju, Mike Galbraith,
	Ben Segall, Paul Turner
Date: Tue, 24 Jun 2014 11:53:52 +0400
In-Reply-To: <20140624074148.8738.57690.stgit@tkhai>
References: <20140624074148.8738.57690.stgit@tkhai>
Organization: Parallels
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.8.5-2+b3
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.30.26.172]
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

We kill rq->rd on the CPU_DOWN_PREPARE stage:

	cpuset_cpu_inactive -> cpuset_update_active_cpus ->
	-> partition_sched_domains -> cpu_attach_domain ->
	-> rq_attach_root -> set_rq_offline

This unthrottles all throttled cfs_rqs.

But the cpu is still able to call schedule() until
take_cpu_down->__cpu_disable() is called from stop_machine.

In this case the tasks from the just-unthrottled cfs_rqs are pickable
in the standard scheduler way, and they are picked by the dying cpu.
The cfs_rqs become throttled again, and migrate_tasks() in
migration_call() skips their tasks (one more unthrottle in
migrate_tasks()->CPU_DYING does not happen, because rq->rd is
already NULL).

This patch sets runtime_enabled to zero. This guarantees that runtime
is not accounted, so the cfs_rqs can't spend the given
cfs_rq->runtime_remaining = 1 and become throttled again, and their
tasks stay pickable in migrate_tasks(). runtime_enabled is
recalculated when the rq becomes online again (a toy user-space model
of this invariant follows the patch).

Ben Segall also noticed that we always enable runtime in
tg_set_cfs_bandwidth(). Actually, we should do that for online cpus
only. To fix that, we check whether a cpu is online while its rq is
locked. This guarantees that we do not race with set_rq_offline(),
which also requires rq->lock (this locking argument is also sketched
as a toy model after the patch).

v2: Fix the race with tg_set_cfs_bandwidth(). Move
    cfs_rq->runtime_enabled = 0 above unthrottle_cfs_rq().

Signed-off-by: Kirill Tkhai
CC: Konstantin Khorenko
CC: Ben Segall
CC: Paul Turner
CC: Srikar Dronamraju
CC: Mike Galbraith
CC: Peter Zijlstra
CC: Ingo Molnar
---
 kernel/sched/core.c | 15 +++++++++++----
 kernel/sched/fair.c | 22 ++++++++++++++++++++++
 2 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f3063c..707a3c5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7842,11 +7842,18 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
 		struct rq *rq = cfs_rq->rq;
 
 		raw_spin_lock_irq(&rq->lock);
-		cfs_rq->runtime_enabled = runtime_enabled;
-		cfs_rq->runtime_remaining = 0;
+		/*
+		 * Do not enable runtime on offline runqueues. We specially
+		 * make it disabled in unthrottle_offline_cfs_rqs().
+		 */
+		if (cpu_online(i)) {
+			cfs_rq->runtime_enabled = runtime_enabled;
+			cfs_rq->runtime_remaining = 0;
+
+			if (cfs_rq->throttled)
+				unthrottle_cfs_rq(cfs_rq);
+		}
 
-		if (cfs_rq->throttled)
-			unthrottle_cfs_rq(cfs_rq);
 		raw_spin_unlock_irq(&rq->lock);
 	}
 	if (runtime_was_enabled && !runtime_enabled)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1f9c457..5616d23 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3776,6 +3776,19 @@ static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 	hrtimer_cancel(&cfs_b->slack_timer);
 }
 
+static void __maybe_unused update_runtime_enabled(struct rq *rq)
+{
+	struct cfs_rq *cfs_rq;
+
+	for_each_leaf_cfs_rq(rq, cfs_rq) {
+		struct cfs_bandwidth *cfs_b = &cfs_rq->tg->cfs_bandwidth;
+
+		raw_spin_lock(&cfs_b->lock);
+		cfs_rq->runtime_enabled = cfs_b->quota != RUNTIME_INF;
+		raw_spin_unlock(&cfs_b->lock);
+	}
+}
+
 static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 {
 	struct cfs_rq *cfs_rq;
@@ -3789,6 +3802,12 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 		 * there's some valid quota amount
 		 */
 		cfs_rq->runtime_remaining = 1;
+		/*
+		 * Offline rq is schedulable till cpu is completely disabled
+		 * in take_cpu_down(), so we prevent new cfs throttling here.
+		 */
+		cfs_rq->runtime_enabled = 0;
+
 		if (cfs_rq_throttled(cfs_rq))
 			unthrottle_cfs_rq(cfs_rq);
 	}
@@ -3832,6 +3851,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
 	return NULL;
 }
 static inline void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b) {}
+static inline void update_runtime_enabled(struct rq *rq) {}
 static inline void unthrottle_offline_cfs_rqs(struct rq *rq) {}
 
 #endif /* CONFIG_CFS_BANDWIDTH */
@@ -7325,6 +7345,8 @@ void trigger_load_balance(struct rq *rq)
 static void rq_online_fair(struct rq *rq)
 {
 	update_sysctl();
+
+	update_runtime_enabled(rq);
 }
 
 static void rq_offline_fair(struct rq *rq)
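
For readers following along outside the kernel tree, here is a toy
user-space model (not kernel code) of why runtime_enabled = 0 together
with runtime_remaining = 1 keeps a dying rq's tasks pickable: the
accounting path bails out before touching runtime_remaining, so it can
never drop to zero and re-throttle the cfs_rq. The names toy_cfs_rq and
account_runtime() below are simplified stand-ins for the kernel's
cfs_rq and __account_cfs_rq_runtime(), not the real definitions.

/*
 * Toy user-space model (NOT kernel code) of the accounting path.
 * toy_cfs_rq and account_runtime() are simplified stand-ins for
 * the kernel's cfs_rq and __account_cfs_rq_runtime().
 */
#include <stdio.h>
#include <stdbool.h>

struct toy_cfs_rq {
	bool runtime_enabled;		/* models cfs_rq->runtime_enabled */
	long runtime_remaining;		/* models cfs_rq->runtime_remaining */
	bool throttled;			/* models cfs_rq_throttled() */
};

/*
 * With runtime_enabled == 0 we return before touching
 * runtime_remaining, so the cfs_rq can never be throttled again.
 */
static void account_runtime(struct toy_cfs_rq *cfs_rq, long delta)
{
	if (!cfs_rq->runtime_enabled)
		return;

	cfs_rq->runtime_remaining -= delta;
	if (cfs_rq->runtime_remaining <= 0)
		cfs_rq->throttled = true;	/* would be throttle_cfs_rq() */
}

int main(void)
{
	/* State left by unthrottle_offline_cfs_rqs() with this patch. */
	struct toy_cfs_rq dying = {
		.runtime_enabled	= false,
		.runtime_remaining	= 1,
		.throttled		= false,
	};
	int tick;

	/*
	 * The dying cpu may keep calling schedule() until
	 * take_cpu_down(); no amount of accounted runtime
	 * re-throttles the cfs_rq.
	 */
	for (tick = 0; tick < 1000; tick++)
		account_runtime(&dying, 5);

	printf("throttled: %s, runtime_remaining: %ld\n",
	       dying.throttled ? "yes" : "no", dying.runtime_remaining);
	return 0;
}

Compiled with any C compiler, this prints "throttled: no,
runtime_remaining: 1" regardless of how many ticks are accounted.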
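The locking argument behind the cpu_online() check can be sketched the
same way. Again, this is a toy pthread model rather than the kernel's
raw_spin_lock_irq(); rq_lock, offline_path() and set_bandwidth_path()
are made-up stand-ins for rq->lock, set_rq_offline() and the patched
tg_set_cfs_bandwidth() loop body. Because both sides take the same
lock, the online test and the write to runtime_enabled cannot
interleave with the offlining, so an offline "cpu" is never left with
runtime enabled.

/*
 * Toy pthread model (NOT kernel code) of the rq->lock ordering
 * between set_rq_offline() and tg_set_cfs_bandwidth().
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
static bool cpu_is_online = true;	/* models cpu_online(i) */
static bool runtime_enabled;		/* models cfs_rq->runtime_enabled */

/* Models set_rq_offline() -> unthrottle_offline_cfs_rqs(). */
static void *offline_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&rq_lock);
	cpu_is_online = false;
	runtime_enabled = false;
	pthread_mutex_unlock(&rq_lock);
	return NULL;
}

/*
 * Models the patched tg_set_cfs_bandwidth() loop body: the online
 * check and the write happen under the same lock, so they cannot
 * interleave with offline_path().
 */
static void *set_bandwidth_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&rq_lock);
	if (cpu_is_online)
		runtime_enabled = true;
	pthread_mutex_unlock(&rq_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, offline_path, NULL);
	pthread_create(&b, NULL, set_bandwidth_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/*
	 * Whatever the interleaving, an offline "cpu" never ends up
	 * with runtime enabled.
	 */
	printf("online=%d runtime_enabled=%d\n",
	       cpu_is_online, runtime_enabled);
	return 0;
}

Build with "cc -pthread"; either thread may win the lock, but the
final state always satisfies the invariant above.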