From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752559AbdIANf2 (ORCPT );
	Fri, 1 Sep 2017 09:35:28 -0400
Received: from merlin.infradead.org ([205.233.59.134]:54948 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752381AbdIANfX (ORCPT );
	Fri, 1 Sep 2017 09:35:23 -0400
Message-Id: <20170901132748.786114980@infradead.org>
User-Agent: quilt/0.63-1
Date: Fri, 01 Sep 2017 15:21:15 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org, linux-kernel@vger.kernel.org, tj@kernel.org,
	josef@toxicpanda.com
Cc: torvalds@linux-foundation.org, vincent.guittot@linaro.org, efault@gmx.de,
	pjt@google.com, clm@fb.com, dietmar.eggemann@arm.com,
	morten.rasmussen@arm.com, bsegall@google.com, yuyang.du@intel.com,
	peterz@infradead.org
Subject: [PATCH -v2 16/18] sched/fair: More accurate async detach
References: <20170901132059.342024223@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline; filename=peterz-sched-fair-more-accurate-remove.patch
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

The problem with the overestimate is that it will subtract too big a
value from the load_sum, thereby pushing it down further than it ought
to go.

Since runnable_load_avg is not subject to a similar 'force', this
results in the occasional 'runnable_load > load' situation.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3469,6 +3469,7 @@ update_cfs_rq_load_avg(u64 now, struct c
 
 	if (cfs_rq->removed.nr) {
 		unsigned long r;
+		u32 divider = LOAD_AVG_MAX - 1024 + sa->period_contrib;
 
 		raw_spin_lock(&cfs_rq->removed.lock);
 		swap(cfs_rq->removed.util_avg, removed_util);
@@ -3477,17 +3478,13 @@ update_cfs_rq_load_avg(u64 now, struct c
 		cfs_rq->removed.nr = 0;
 		raw_spin_unlock(&cfs_rq->removed.lock);
 
-		/*
-		 * The LOAD_AVG_MAX for _sum is a slight over-estimate,
-		 * which is safe due to sub_positive() clipping at 0.
-		 */
 		r = removed_load;
 		sub_positive(&sa->load_avg, r);
-		sub_positive(&sa->load_sum, r * LOAD_AVG_MAX);
+		sub_positive(&sa->load_sum, r * divider);
 
 		r = removed_util;
 		sub_positive(&sa->util_avg, r);
-		sub_positive(&sa->util_sum, r * LOAD_AVG_MAX);
+		sub_positive(&sa->util_sum, r * divider);
 
 		add_tg_cfs_propagate(cfs_rq, -(long)removed_runnable_sum);
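
(Illustrative aside, not part of the patch: a minimal user-space sketch of
the arithmetic, assuming the PELT relation avg = sum / (LOAD_AVG_MAX - 1024 +
period_contrib) that the divider in this patch mirrors, and using made-up
example values. It shows how far the old r * LOAD_AVG_MAX term can
over-subtract from load_sum compared with the exact r * divider term.)

	#include <stdio.h>

	#define LOAD_AVG_MAX	47742	/* saturation value of the PELT geometric series */

	int main(void)
	{
		/* example values; period_contrib lies in [0, 1023] */
		unsigned int period_contrib = 200;
		unsigned int divider = LOAD_AVG_MAX - 1024 + period_contrib;

		unsigned long removed_avg = 10;			/* a detached entity's load_avg */
		unsigned long load_sum = removed_avg * divider;	/* its exact contribution to the sum */

		/* old scheme: r * LOAD_AVG_MAX over-subtracts by (1024 - period_contrib) * r */
		long over  = (long)(removed_avg * LOAD_AVG_MAX) - (long)load_sum;
		/* new scheme: r * divider is the exact inverse of the avg computation */
		long exact = (long)load_sum - (long)(removed_avg * divider);

		printf("over-subtraction: %ld, exact remainder: %ld\n", over, exact);
		return 0;
	}

With these numbers the old term over-subtracts by 8240; sub_positive() clips
the result at 0, which is why load_sum gets pushed down while
runnable_load_avg does not.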