From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756805AbcILHt2 (ORCPT );
	Mon, 12 Sep 2016 03:49:28 -0400
Received: from mail-wm0-f50.google.com ([74.125.82.50]:38426 "EHLO
	mail-wm0-f50.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S932353AbcILHsm (ORCPT );
	Mon, 12 Sep 2016 03:48:42 -0400
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	yuyang.du@intel.com, Morten.Rasmussen@arm.com
Cc: linaro-kernel@lists.linaro.org, dietmar.eggemann@arm.com, pjt@google.com,
	bsegall@google.com, Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 5/7 v3] sched: propagate asynchronous detach
Date: Mon, 12 Sep 2016 09:47:50 +0200
Message-Id: <1473666472-13749-6-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1473666472-13749-1-git-send-email-vincent.guittot@linaro.org>
References: <1473666472-13749-1-git-send-email-vincent.guittot@linaro.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

A task can be asynchronously detached from a cfs_rq when migrating
between CPUs. The load of the migrated task is then removed from the
source cfs_rq during its next update. We use this event to set the
propagation flag.

During load balance, we take advantage of the update of blocked load
to propagate any pending changes.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e4015f6..25533ba 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3188,6 +3188,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 		sub_positive(&sa->load_avg, r);
 		sub_positive(&sa->load_sum, r * LOAD_AVG_MAX);
 		removed_load = 1;
+		set_tg_cfs_propagate(cfs_rq);
 	}
 
 	if (atomic_long_read(&cfs_rq->removed_util_avg)) {
@@ -3195,6 +3196,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 		sub_positive(&sa->util_avg, r);
 		sub_positive(&sa->util_sum, r * LOAD_AVG_MAX);
 		removed_util = 1;
+		set_tg_cfs_propagate(cfs_rq);
 	}
 
 	decayed = __update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
@@ -6567,6 +6569,10 @@ static void update_blocked_averages(int cpu)
 
 		if (update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, true))
 			update_tg_load_avg(cfs_rq, 0);
+
+		/* Propagate pending load changes to the parent */
+		if (cfs_rq->tg->se[cpu])
+			update_load_avg(cfs_rq->tg->se[cpu], 0, 0);
 	}
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
-- 
1.9.1
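
[Editor's note] For readers following the series, here is a minimal
user-space sketch of the scheme the patch describes: an asynchronous
detach records the removed load, the next update of the cfs_rq folds
it in and flags a pending change (the role set_tg_cfs_propagate()
plays in the patch), and the blocked-load update pushes the change up
to the parent. The struct layout, helper names, and flat "delta"
bookkeeping below are illustrative stand-ins for PELT, not kernel
code.

    #include <stdio.h>

    /* Toy model of one cfs_rq in a two-level hierarchy. */
    struct cfs_rq_model {
            const char *name;
            long load_avg;          /* current aggregated load */
            long removed_load;      /* detached asynchronously, not yet applied */
            long prop_delta;        /* pending change to push to the parent */
            struct cfs_rq_model *parent;
    };

    /* Migration path: the task's load is only queued for removal here. */
    static void detach_load(struct cfs_rq_model *cfs_rq, long load)
    {
            cfs_rq->removed_load += load;
    }

    /* Next update of the cfs_rq: fold in the removal, flag it for the
     * parent (this is where the patch calls set_tg_cfs_propagate()). */
    static void update_cfs_rq_load(struct cfs_rq_model *cfs_rq)
    {
            if (cfs_rq->removed_load) {
                    cfs_rq->load_avg   -= cfs_rq->removed_load;
                    cfs_rq->prop_delta -= cfs_rq->removed_load;
                    cfs_rq->removed_load = 0;
            }
    }

    /* Blocked-load update: propagate any pending delta one level up. */
    static void propagate_to_parent(struct cfs_rq_model *cfs_rq)
    {
            if (cfs_rq->prop_delta && cfs_rq->parent) {
                    cfs_rq->parent->load_avg   += cfs_rq->prop_delta;
                    cfs_rq->parent->prop_delta += cfs_rq->prop_delta;
                    cfs_rq->prop_delta = 0;
            }
    }

    int main(void)
    {
            struct cfs_rq_model root  = { "root",  1024, 0, 0, NULL };
            struct cfs_rq_model child = { "child",  512, 0, 0, &root };

            detach_load(&child, 512);       /* task migrates away: async detach */

            /* Walk bottom-up, as update_blocked_averages() does over
             * the leaf cfs_rqs of a CPU. */
            update_cfs_rq_load(&child);
            propagate_to_parent(&child);

            printf("%s load_avg=%ld\n", child.name, child.load_avg); /* 0 */
            printf("%s load_avg=%ld\n", root.name,  root.load_avg);  /* 512 */
            return 0;
    }

Without the flag set in update_cfs_rq_load_avg(), the removal would be
visible only on the child until some other event forced an update of
the parent's entity; piggybacking on the blocked-load walk is what
bounds that staleness.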