Date: Wed, 31 Aug 2016 12:01:17 +0200
From: Peter Zijlstra
To: Mike Galbraith
Cc: LKML, Rik van Riel, Vincent Guittot
Subject: Re: [patch v3.18+ regression fix] sched: Further improve spurious CPU_IDLE active migrations
Message-ID: <20160831100117.GV10121@twins.programming.kicks-ass.net>
References: <1472535775.3960.3.camel@suse.de>
In-Reply-To: <1472535775.3960.3.camel@suse.de>

On Tue, Aug 30, 2016 at 07:42:55AM +0200, Mike Galbraith wrote:
> 
> 43f4d666 partially cured spurious migrations, but when there are
> completely idle groups on a lightly loaded processor, and there is
> a buddy pair occupying the busiest group, we will not attempt to
> migrate due to select_idle_sibling() buddy placement, leaving the
> busiest queue with one task.  We skip balancing, but increment
> nr_balance_failed until we kick active balancing, and bounce a
> buddy pair endlessly, demolishing throughput.

Have you run this patch through other benchmarks? It looks like
something that might make something else go funny.

> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7249,11 +7249,12 @@ static struct sched_group *find_busiest_
>  	 * This cpu is idle. If the busiest group is not overloaded
>  	 * and there is no imbalance between this and busiest group
>  	 * wrt idle cpus, it is balanced. The imbalance becomes
> -	 * significant if the diff is greater than 1 otherwise we
> -	 * might end up to just move the imbalance on another group
> +	 * significant if the diff is greater than 2 otherwise we
> +	 * may end up merely moving the imbalance to another group,
> +	 * or bouncing a buddy pair needlessly.
>  	 */
>  	if ((busiest->group_type != group_overloaded) &&
> -	    (local->idle_cpus <= (busiest->idle_cpus + 1)))
> +	    (local->idle_cpus <= (busiest->idle_cpus + 2)))
>  		goto out_balanced;

So 43f4d66637bc ("sched: Improve sysbench performance by fixing spurious
active migration")'s +1 made sense in that it's a tie-breaker. If you have
3 tasks on 2 groups, one group will have to have 2 tasks, and bouncing the
one task around just isn't going to help _anything_.

Incrementing that to +2 has the effect that if you have two tasks on two
groups, 0,2 is a valid distribution. Which I understand is exactly what
you want for this workload. But if the two tasks are unrelated, 1,1 really
is a better spread.
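
To make the tie-breaker arithmetic above concrete, here is a minimal
stand-alone sketch (not the kernel code; the struct, helper name and
CPU counts are hypothetical) that mirrors the quoted out_balanced test
and prints the decision for a packed buddy pair and for the 3-tasks-on-
2-groups case, with the +1 and +2 slack values:

/*
 * Sketch of the idle-cpus tie-breaker in the quoted diff.  Two groups
 * of 2 CPUs each are assumed; only the comparison logic is real, the
 * types and numbers are illustrative.
 */
#include <stdio.h>
#include <stdbool.h>

struct group_stats {
	int idle_cpus;
	bool overloaded;	/* stands in for group_type == group_overloaded */
};

/*
 * Returns true when the balancer would declare the pair "balanced" and
 * goto out_balanced, mirroring:
 *   local->idle_cpus <= busiest->idle_cpus + slack
 */
static bool skips_balance(const struct group_stats *local,
			  const struct group_stats *busiest, int slack)
{
	return !busiest->overloaded &&
	       local->idle_cpus <= busiest->idle_cpus + slack;
}

int main(void)
{
	/*
	 * Buddy pair packed on the busiest group by select_idle_sibling():
	 * busiest has 0 idle CPUs, the local group is completely idle.
	 */
	struct group_stats local   = { .idle_cpus = 2, .overloaded = false };
	struct group_stats busiest = { .idle_cpus = 0, .overloaded = false };

	printf("buddy pair packed, slack +1: %s\n",
	       skips_balance(&local, &busiest, 1) ? "balanced (skip)" : "try to pull");
	printf("buddy pair packed, slack +2: %s\n",
	       skips_balance(&local, &busiest, 2) ? "balanced (skip)" : "try to pull");

	/*
	 * 3 tasks on 2 groups: busiest has 2 tasks (0 idle), local has 1
	 * task (1 idle CPU).  Already covered by the original +1 slack.
	 */
	local.idle_cpus = 1;
	printf("3 tasks on 2 groups, slack +1: %s\n",
	       skips_balance(&local, &busiest, 1) ? "balanced (skip)" : "try to pull");

	return 0;
}

With +1 the packed buddy pair is still seen as imbalanced ("try to
pull"), which is the path that ends in nr_balance_failed and active
balancing; with +2 the same layout is declared balanced, i.e. 0,2
becomes a valid distribution, as discussed above.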