From: Alex Shi <alex.shi@intel.com>
To: mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de,
	akpm@linux-foundation.org, bp@alien8.de, pjt@google.com,
	namhyung@kernel.org, efault@gmx.de, morten.rasmussen@arm.com
Cc: vincent.guittot@linaro.org, preeti@linux.vnet.ibm.com,
	viresh.kumar@linaro.org, linux-kernel@vger.kernel.org,
	alex.shi@intel.com, mgorman@suse.de, riel@redhat.com,
	wangyun@linux.vnet.ibm.com, Jason Low, Changlong Xie,
	sgruszka@redhat.com, fweisbec@gmail.com
Subject: [patch v8 8/9] sched: consider runnable load average in move_tasks
Date: Fri,  7 Jun 2013 15:20:51 +0800
Message-Id: <1370589652-24549-9-git-send-email-alex.shi@intel.com>
X-Mailer: git-send-email 1.7.12
In-Reply-To: <1370589652-24549-1-git-send-email-alex.shi@intel.com>
References: <1370589652-24549-1-git-send-email-alex.shi@intel.com>

Beyond the places that already use the runnable load average, move_tasks()
is also a key function in load balance. We need to consider the runnable
load average there too, so that loads are compared apples to apples.

Morten caught a div u64 bug on ARM, thanks!

Signed-off-by: Alex Shi <alex.shi@intel.com>
---
 kernel/sched/fair.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eadd2e7..3aa1dc0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4178,11 +4178,14 @@ static int tg_load_down(struct task_group *tg, void *data)
 	long cpu = (long)data;
 
 	if (!tg->parent) {
-		load = cpu_rq(cpu)->load.weight;
+		load = cpu_rq(cpu)->avg.load_avg_contrib;
 	} else {
+		unsigned long tmp_rla;
+		tmp_rla = tg->parent->cfs_rq[cpu]->runnable_load_avg + 1;
+
 		load = tg->parent->cfs_rq[cpu]->h_load;
-		load *= tg->se[cpu]->load.weight;
-		load /= tg->parent->cfs_rq[cpu]->load.weight + 1;
+		load *= tg->se[cpu]->avg.load_avg_contrib;
+		load /= tmp_rla;
 	}
 
 	tg->cfs_rq[cpu]->h_load = load;
@@ -4208,12 +4211,9 @@ static void update_h_load(long cpu)
 static unsigned long task_h_load(struct task_struct *p)
 {
 	struct cfs_rq *cfs_rq = task_cfs_rq(p);
-	unsigned long load;
-
-	load = p->se.load.weight;
-	load = div_u64(load * cfs_rq->h_load, cfs_rq->load.weight + 1);
 
-	return load;
+	return div64_ul(p->se.avg.load_avg_contrib * cfs_rq->h_load,
+			cfs_rq->runnable_load_avg + 1);
 }
 #else
 static inline void update_blocked_averages(int cpu)
-- 
1.7.12
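
For reference, below is a standalone userspace sketch of the arithmetic this
patch puts into task_h_load(). It is only an illustration, not kernel code:
the struct stubs, the field subset, and the div64_ul() emulation are
simplifications made so the example builds on its own.

/*
 * Sketch of the patched task_h_load() math (illustration only).
 *
 * The point behind the div u64 bug Morten reported: the product
 * load_avg_contrib * h_load is 64-bit, and a 32-bit kernel (e.g. ARM)
 * cannot use the plain '/' operator on a u64 -- gcc would emit a call
 * to a libgcc helper such as __aeabi_uldivmod, which the kernel does
 * not link against.  The div64_ul()/div_u64() helpers from
 * <linux/math64.h> must be used instead.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t u64;

/* Userspace stand-in for the kernel's div64_ul(); in a 32-bit kernel
 * this would dispatch to a software 64-bit division routine. */
static inline u64 div64_ul(u64 dividend, unsigned long divisor)
{
	return dividend / divisor;
}

/* Minimal stand-ins for only the cfs_rq/sched_avg fields the patch uses. */
struct cfs_rq_stub {
	unsigned long h_load;		/* hierarchical load of this cfs_rq */
	u64 runnable_load_avg;		/* sum of runnable load_avg_contribs */
};

struct se_stub {
	unsigned long load_avg_contrib;	/* task's tracked load contribution */
};

/* Mirrors the patched task_h_load(): scale the group's h_load by this
 * task's share of the group's runnable load average.  The "+ 1" keeps
 * the divisor nonzero when the queue is momentarily empty. */
static unsigned long task_h_load_sketch(struct se_stub *se,
					struct cfs_rq_stub *cfs_rq)
{
	return (unsigned long)div64_ul((u64)se->load_avg_contrib * cfs_rq->h_load,
				       cfs_rq->runnable_load_avg + 1);
}

int main(void)
{
	struct cfs_rq_stub cfs_rq = { .h_load = 2048, .runnable_load_avg = 3071 };
	struct se_stub se = { .load_avg_contrib = 1024 };

	/* (1024 * 2048) / (3071 + 1) = 682.  The product fits in 32 bits
	 * with these made-up values, but in general it need not, hence
	 * the u64 math. */
	printf("task_h_load = %lu\n", task_h_load_sketch(&se, &cfs_rq));
	return 0;
}

The same ratio drives the tg_load_down() hunk: each level's h_load is the
parent's h_load scaled by the group entity's load_avg_contrib over the
parent's runnable_load_avg + 1, so both call sites now compare tracked load
averages rather than the instantaneous load.weight.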