Message-ID: <1462893965.3702.56.camel@gmail.com>
Subject: Re: sched: tweak select_idle_sibling to look for idle threads
From: Mike Galbraith
To: Yuyang Du
Cc: Peter Zijlstra, Chris Mason, Ingo Molnar, Matt Fleming,
	linux-kernel@vger.kernel.org
Date: Tue, 10 May 2016 17:26:05 +0200
In-Reply-To: <1462866562.3702.33.camel@suse.de>
References: <20160501085303.GF2975@worktop.cust.blueprintrf.com>
	 <1462094425.9717.45.camel@suse.de> <20160507012417.GK16093@intel.com>
	 <1462694935.4155.83.camel@suse.de> <20160508185747.GL16093@intel.com>
	 <1462765540.3803.44.camel@suse.de> <20160508202201.GM16093@intel.com>
	 <1462779853.3803.128.camel@suse.de> <20160509011311.GQ16093@intel.com>
	 <1462786745.3803.181.camel@suse.de> <20160509232623.GR16093@intel.com>
	 <1462866562.3702.33.camel@suse.de>

On Tue, 2016-05-10 at 09:49 +0200, Mike Galbraith wrote:

> Only whacking cfs_rq_runnable_load_avg() with a rock makes schbench
> -m -t -a work well.  'Course a rock in its gearbox also rendered
> load balancing fairly busted for the general case :)

Smaller rock doesn't injure heavy tbench, but more importantly, still
demonstrates the issue when you want full spread.

schbench -m4 -t38 -a

cputime 30000 threads 38 p99 177
cputime 30000 threads 39 p99 10160

LB_TIP_AVG_HIGH
cputime 30000 threads 38 p99 193
cputime 30000 threads 39 p99 184
cputime 30000 threads 40 p99 203
cputime 30000 threads 41 p99 202
cputime 30000 threads 42 p99 205
cputime 30000 threads 43 p99 218
cputime 30000 threads 44 p99 237
cputime 30000 threads 45 p99 245
cputime 30000 threads 46 p99 262
cputime 30000 threads 47 p99 296
cputime 30000 threads 48 p99 3308

47*4+4=nr_cpus yay

---
 kernel/sched/fair.c     | 3 +++
 kernel/sched/features.h | 1 +
 2 files changed, 4 insertions(+)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3027,6 +3027,9 @@ void remove_entity_load_avg(struct sched
 
 static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq)
 {
+	if (sched_feat(LB_TIP_AVG_HIGH) && cfs_rq->load.weight > cfs_rq->runnable_load_avg*2)
+		return cfs_rq->runnable_load_avg + min_t(unsigned long, NICE_0_LOAD,
+							 cfs_rq->load.weight/2);
 	return cfs_rq->runnable_load_avg;
 }
 
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -67,6 +67,7 @@ SCHED_FEAT(RT_PUSH_IPI, true)
 SCHED_FEAT(FORCE_SD_OVERLAP, false)
 SCHED_FEAT(RT_RUNTIME_SHARE, true)
 SCHED_FEAT(LB_MIN, false)
+SCHED_FEAT(LB_TIP_AVG_HIGH, false)
 SCHED_FEAT(ATTACH_AGE_LOAD, true)
 SCHED_FEAT(OLD_IDLE, false)
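
The idea, roughly: when a cfs_rq's instantaneous load.weight is more than
twice its decayed runnable_load_avg (say, a burst of wakeups the average
hasn't caught up with yet), report the load a bit higher so the balancer
spreads sooner, capping the tip at one nice-0 task's worth of load.  A
stand-alone sketch of that arithmetic follows; the NICE_0_LOAD value and
the sample numbers are illustrative assumptions, not taken from the patch.

	/*
	 * Illustration-only model of the LB_TIP_AVG_HIGH tweak above.
	 * NICE_0_LOAD here is assumed to be 1024 (one nice-0 task);
	 * the real kernel constant may differ by load resolution.
	 */
	#include <stdio.h>

	#define NICE_0_LOAD	1024UL

	static unsigned long tipped_load(unsigned long runnable_load_avg,
					 unsigned long weight)
	{
		if (weight > runnable_load_avg * 2) {
			unsigned long tip = weight / 2;

			/* cap the tip at one nice-0 task's worth */
			if (tip > NICE_0_LOAD)
				tip = NICE_0_LOAD;
			return runnable_load_avg + tip;
		}
		return runnable_load_avg;
	}

	int main(void)
	{
		/* wake-up burst: avg still low, weight already 2 tasks */
		printf("avg 100, weight 2048 -> %lu\n", tipped_load(100, 2048));
		/* settled queue: avg tracks weight, no tip applied */
		printf("avg 2000, weight 2048 -> %lu\n", tipped_load(2000, 2048));
		return 0;
	}

The feature defaults to off; on a SCHED_DEBUG kernel it can be flipped at
runtime via /sys/kernel/debug/sched_features (echo LB_TIP_AVG_HIGH to
enable, NO_LB_TIP_AVG_HIGH to turn it back off).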