From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752441AbbGNORx (ORCPT );
	Tue, 14 Jul 2015 10:17:53 -0400
Received: from mail-wg0-f47.google.com ([74.125.82.47]:34686 "EHLO
	mail-wg0-f47.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751558AbbGNORw (ORCPT );
	Tue, 14 Jul 2015 10:17:52 -0400
Message-ID: <1436883466.7983.17.camel@gmail.com>
Subject: Re: [patch] sched: beef up wake_wide()
From: Mike Galbraith
To: Peter Zijlstra
Cc: Josef Bacik, riel@redhat.com, mingo@redhat.com,
	linux-kernel@vger.kernel.org, morten.rasmussen@arm.com, kernel-team
Date: Tue, 14 Jul 2015 16:17:46 +0200
In-Reply-To: <20150714140710.GL19282@twins.programming.kicks-ass.net>
References: <1436241678.1836.29.camel@gmail.com>
	 <1436262224.1836.74.camel@gmail.com>
	 <559C0700.6090009@fb.com>
	 <1436336026.3767.53.camel@gmail.com>
	 <20150709132654.GE3644@twins.programming.kicks-ass.net>
	 <1436505566.5715.50.camel@gmail.com>
	 <55A03232.2090500@fb.com>
	 <1436584311.3429.7.camel@gmail.com>
	 <20150714111905.GJ3644@twins.programming.kicks-ass.net>
	 <1436881757.7983.12.camel@gmail.com>
	 <20150714140710.GL19282@twins.programming.kicks-ass.net>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.12.11
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 2015-07-14 at 16:07 +0200, Peter Zijlstra wrote:
> On Tue, Jul 14, 2015 at 03:49:17PM +0200, Mike Galbraith wrote:
> > On Tue, 2015-07-14 at 13:19 +0200, Peter Zijlstra wrote:
> > >
> > > OK, how about something like the below; it tightens things up by
> > > applying two rules:
> > >
> > >  - We really should not continue looking for a balancing domain once
> > >    SD_LOAD_BALANCE is not set.
> > >
> > >  - SD (balance) flags should really be set in a single contiguous range,
> > >    always starting at the bottom.
> > >
> > > The latter means that if !want_affine and the (first) sd doesn't have
> > > BALANCE_WAKE set, we're done. Getting rid of (most of) that iteration
> > > junk you didn't like..
> > >
> > > Hmm?
> >
> > Yeah, that's better.  It's not a big hairy deal either way, it just bugged
> > me to knowingly toss those cycles out the window ;-)
> >
> > select_idle_sibling() looks kinda funny down there, but otoh when the
> > day comes (hah) that we can just balance, it's closer to the exit.
>
> Right, not too pretty, does this look better?

There's a buglet; I was just about to mention the inverse in the other.

> @@ -5041,17 +5037,17 @@ select_task_rq_fair(struct task_struct *
>
>  		if (tmp->flags & sd_flag)
>  			sd = tmp;
> +		else if (!want_affine)
> +			break;
>  	}
>
> -	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync))
> -		prev_cpu = cpu;
> +	if (affine_sd) { /* Prefer affinity over any other flags */
> +		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
> +			new_cpu = cpu;
>
> -	if (sd_flag & SD_BALANCE_WAKE) {
> -		new_cpu = select_idle_sibling(p, prev_cpu);
> -		goto unlock;
> -	}
> +	new_cpu = select_idle_sibling(p, new_cpu);

We'll not look for an idle cpu when wake_wide() naks want_affine.

	-Mike
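For readers following the thread, below is a minimal, compilable C sketch of the control flow the quoted hunk implies. Everything in it (struct sd, pick_wake_cpu(), wake_affine_ok(), pick_idle_sibling(), the spans_both field) is a simplified stand-in invented for illustration, not the kernel's real interfaces; only the ordering of the checks is taken from the diff: walk the domains bottom-up, stop at the first domain lacking SD_LOAD_BALANCE, remember the lowest wake-affine domain, bail out early once !want_affine and the wanted balance flag is missing, and reach the idle-sibling scan only via the affine path.

#include <stdbool.h>
#include <stdio.h>

#define SD_LOAD_BALANCE  0x01
#define SD_BALANCE_WAKE  0x02
#define SD_WAKE_AFFINE   0x04

/* Toy stand-in for a sched domain: flags plus a parent link. */
struct sd {
	unsigned int flags;
	struct sd *parent;
	bool spans_both;	/* domain spans both waking cpu and prev_cpu */
};

/* Simplified stand-ins for wake_affine() and select_idle_sibling(). */
static bool wake_affine_ok(int cpu, int prev_cpu)
{
	return cpu != prev_cpu;	/* pretend the affine pull always wins */
}

static int pick_idle_sibling(int target)
{
	return target;		/* pretend 'target' itself is idle */
}

/*
 * Mirror of the quoted hunk's flow: walk domains bottom-up, stop at the
 * first one without SD_LOAD_BALANCE, remember the lowest affine domain,
 * and break early when !want_affine and the wanted balance flag is
 * absent (flags are contiguous from the bottom).
 */
static int pick_wake_cpu(struct sd *base, int cpu, int prev_cpu,
			 unsigned int sd_flag, bool want_affine)
{
	struct sd *tmp, *sd = NULL, *affine_sd = NULL;
	int new_cpu = prev_cpu;

	for (tmp = base; tmp; tmp = tmp->parent) {
		if (!(tmp->flags & SD_LOAD_BALANCE))
			break;

		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
		    tmp->spans_both) {
			affine_sd = tmp;
			break;
		}

		if (tmp->flags & sd_flag)
			sd = tmp;
		else if (!want_affine)
			break;
	}

	if (affine_sd) {	/* prefer affinity over any other flags */
		if (cpu != prev_cpu && wake_affine_ok(cpu, prev_cpu))
			new_cpu = cpu;
		return pick_idle_sibling(new_cpu);
	}

	/*
	 * Mike's point: when wake_wide() vetoes want_affine, affine_sd is
	 * never set, so the idle-sibling scan above is skipped entirely.
	 * The kernel would slow-path balance within 'sd' here; this sketch
	 * just stays on prev_cpu.
	 */
	if (sd)
		return prev_cpu;
	return new_cpu;
}

int main(void)
{
	struct sd llc = {
		.flags = SD_LOAD_BALANCE | SD_BALANCE_WAKE | SD_WAKE_AFFINE,
		.parent = NULL,
		.spans_both = true,
	};

	printf("affine wakeup  -> cpu %d\n",
	       pick_wake_cpu(&llc, 1, 0, SD_BALANCE_WAKE, true));
	printf("wake_wide veto -> cpu %d\n",
	       pick_wake_cpu(&llc, 1, 0, SD_BALANCE_WAKE, false));
	return 0;
}

Running the sketch shows the behaviour Mike describes: with want_affine forced off (as wake_wide() would do), the affine branch, and with it the idle-cpu search, is never taken and the wakeup falls through to the slow-path side.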