From: Vincent Guittot
Date: Wed, 31 Aug 2016 17:52:25 +0200
Subject: Re: [patch v3.18+ regression fix] sched: Further improve spurious CPU_IDLE active migrations
To: Mike Galbraith
Cc: Peter Zijlstra, LKML, Rik van Riel
In-Reply-To: <1472639782.3942.27.camel@gmail.com>
References: <1472535775.3960.3.camel@suse.de> <20160831100117.GV10121@twins.programming.kicks-ass.net> <1472638699.3942.14.camel@suse.de> <1472639782.3942.27.camel@gmail.com>

On 31 August 2016 at 12:36, Mike Galbraith wrote:
> On Wed, 2016-08-31 at 12:18 +0200, Mike Galbraith wrote:
>> On Wed, 2016-08-31 at 12:01 +0200, Peter Zijlstra wrote:
>
>> > So 43f4d66637bc ("sched: Improve sysbench performance by fixing spurious
>> > active migration")'s +1 made sense in that it's a tie breaker. If you
>> > have 3 tasks on 2 groups, one group will have to have 2 tasks, and
>> > bouncing the one task around just isn't going to help _anything_.
>>
>> Yeah, but frequently tasks don't come in ones, so you end up with an
>> endless tug of war between LB ripping communicating buddies apart and
>> select_idle_sibling() pulling them back together... bouncing cow
>> syndrome.

Replacing the +1 with +2 fixes this use case, which involves 2 threads,
but similar behavior can still happen with, for example, 3 tasks on a
system with 4 cores per MC (see the sketch at the end of this mail).

IIUC, you have, on one side, the periodic load balance spreading the 2
tasks across the system and, on the other side, the wake-up path moving
the task back into the same MC. Isn't your regression more linked to the
spurious migration itself than to where the task ends up being scheduled?
I don't see any direct relation between the client and the server in this
netperf test, is there?

We could either remove the condition that tries to keep an even number of
tasks in each group until the busiest group becomes overloaded, though
that means unrelated tasks may have to share the same resources, or we
could try to prevent the migration at wake-up. I was looking at
wake_affine(), which seems to choose the local cpu when both the prev and
the local cpu are idle. I wonder whether the local cpu is really the
better choice when both are idle (also sketched below).

Vincent

> The whole business of trying to balance groups down to the single task
> seems a bit illogical given we care enough to wake to shared cache in
> the first place, creating the 'imbalance' we then try to correct.
> 'course that weakens your unrelated tasks (which may meet on a sleeping
> lock or whatever) argument not one bit, it's also valid.
>
> hrm.
>
> -Mike
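
P.S. To make the 2-thread vs 3-task point concrete, here is a standalone
paraphrase of the CPU_IDLE tie-breaker, compilable on its own. It is a
sketch for discussion, not the upstream code: cpu_idle_balanced() and its
margin parameter are names I made up, and all the surrounding
find_busiest_group() logic is simplified away.

#include <stdbool.h>
#include <stdio.h>

/*
 * Paraphrase of the CPU_IDLE check discussed above: an idle cpu does
 * not pull from the busiest group when that group is not overloaded
 * and the difference in idle cpus is within 'margin' (the +1 from
 * 43f4d66637bc; the thread discusses using 2 instead).
 */
static bool cpu_idle_balanced(int local_idle_cpus, int busiest_idle_cpus,
			      bool busiest_overloaded, int margin)
{
	return !busiest_overloaded &&
	       local_idle_cpus <= busiest_idle_cpus + margin;
}

int main(void)
{
	/*
	 * 2 communicating tasks packed in one 4-core MC: the idle MC has
	 * 4 idle cpus, the busy one has 2.  margin 1 pulls (the tug of
	 * war Mike describes); margin 2 leaves the pair alone.
	 */
	printf("2 tasks, margin 1: pull=%d\n",
	       !cpu_idle_balanced(4, 2, false, 1));
	printf("2 tasks, margin 2: pull=%d\n",
	       !cpu_idle_balanced(4, 2, false, 2));

	/*
	 * 3 tasks packed in one 4-core MC: 4 idle cpus vs 1.  Even with
	 * margin 2 the diff stays significant, so the pull still happens.
	 */
	printf("3 tasks, margin 2: pull=%d\n",
	       !cpu_idle_balanced(4, 1, false, 2));
	return 0;
}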
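
And the wake_affine() question, as a toy model rather than kernel code:
pick_wake_cpu_today() mimics what I think happens now (an idle cpu has
load 0, so with both cpus idle the comparison ties and the local cpu
wins), and pick_wake_cpu_prev_if_idle() is the hypothetical variant that
would keep the wakee on its prev cpu instead of contradicting the
periodic balance. Both function names are mine, for illustration only.

#include <stdbool.h>
#include <stdio.h>

struct cpu_state {
	int id;
	unsigned long load;
	bool idle;
};

/* Toy model of today's affine choice: ties go to the local (waker's)
 * cpu, which is what both cpus being idle degenerates into. */
static int pick_wake_cpu_today(const struct cpu_state *local,
			       const struct cpu_state *prev)
{
	return local->load <= prev->load ? local->id : prev->id;
}

/* Hypothetical variant: when both cpus are idle, stay on prev so the
 * wake-up path stops undoing what the periodic load balance just did. */
static int pick_wake_cpu_prev_if_idle(const struct cpu_state *local,
				      const struct cpu_state *prev)
{
	if (local->idle && prev->idle)
		return prev->id;
	return pick_wake_cpu_today(local, prev);
}

int main(void)
{
	struct cpu_state local = { .id = 0, .load = 0, .idle = true };
	struct cpu_state prev  = { .id = 4, .load = 0, .idle = true };

	printf("today:        cpu%d\n",
	       pick_wake_cpu_today(&local, &prev));
	printf("prev-if-idle: cpu%d\n",
	       pick_wake_cpu_prev_if_idle(&local, &prev));
	return 0;
}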