From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1172635AbdDXO2H (ORCPT );
	Mon, 24 Apr 2017 10:28:07 -0400
Received: from bombadil.infradead.org ([65.50.211.133]:42261 "EHLO
	bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1170087AbdDXO1q (ORCPT );
	Mon, 24 Apr 2017 10:27:46 -0400
Date: Mon, 24 Apr 2017 16:27:39 +0200
From: Peter Zijlstra
To: Lauro Ramos Venancio
Cc: lwang@redhat.com, riel@redhat.com, Mike Galbraith,
	Thomas Gleixner, Ingo Molnar, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/4] sched/topology: the group balance cpu must be a cpu
 where the group is installed
Message-ID: <20170424142739.nlawad5ozakf3mjn@hirez.programming.kicks-ass.net>
References: <1492717903-5195-1-git-send-email-lvenanci@redhat.com>
 <1492717903-5195-5-git-send-email-lvenanci@redhat.com>
 <20170424130326.nfbaujvcdjca22tl@hirez.programming.kicks-ass.net>
 <20170424141944.r6vuzxcweae3krz7@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170424141944.r6vuzxcweae3krz7@hirez.programming.kicks-ass.net>
User-Agent: NeoMutt/20170113 (1.7.2)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Apr 24, 2017 at 04:19:44PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 24, 2017 at 03:03:26PM +0200, Peter Zijlstra wrote:
>
> > Also, would it not make sense to re-order patch 2 to come after this,
> > such that we _do_ have the group_mask available and don't have to jump
> > through hoops in order to link up the sgc? Afaict we don't actually use
> > the sgc until the above (reverse) loop computing the CPU capacities.
>
> That is, if I force 4 on without 2, then doesn't something like the
> below also do the right thing? (without duplicating part of the magic
> already contained in build_group_mask)
>
> ---
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -498,13 +498,16 @@ enum s_alloc {
>   *
>   * This function can only be used when all the groups are already built.
>   */
> -static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
> +static void
> +build_group_mask(struct sched_domain *sd, struct sched_group *sg, struct cpumask *mask)
>  {
>  	const struct cpumask *sg_span = sched_group_cpus(sg);
>  	struct sd_data *sdd = sd->private;
>  	struct sched_domain *sibling;
>  	int i;
>
> +	cpumask_clear(mask);
> +
>  	for_each_cpu(i, sg_span) {
>  		sibling = *per_cpu_ptr(sdd->sd, i);
>
> @@ -514,7 +517,7 @@ static void build_group_mask(struct sche
>  		if (!cpumask_equal(sg_span, sched_group_cpus(sibling->groups)))
>  			continue;
>
> -		cpumask_set_cpu(i, sched_group_mask(sg));
> +		cpumask_set_cpu(i, mask);
>  	}
>  }
>
> @@ -549,14 +552,19 @@ build_group_from_child_sched_domain(stru
>  }
>
>  static void init_overlap_sched_group(struct sched_domain *sd,
> -				     struct sched_group *sg, int cpu)
> +				     struct sched_group *sg)
>  {
> +	struct cpumask *mask = sched_domains_tmpmask;
>  	struct sd_data *sdd = sd->private;
>  	struct cpumask *sg_span;
> +	int cpu;
> +
> +	build_group_mask(sd, sg, mask);
> +	cpu = cpumask_first_and(sched_group_mask(sg), mask); /* balance cpu */

s/group_mask/group_span/

>
>  	sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
>  	if (atomic_inc_return(&sg->sgc->ref) == 1)
> -		build_group_mask(sd, sg);
> +		cpumask_copy(sched_group_mask(sg), mask);
>
>  	/*
>  	 * Initialize sgc->capacity such that even if we mess up the
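
For reference, this is roughly what init_overlap_sched_group() would look
like with the diff above applied and the s/group_mask/group_span/ fixup
folded in. A sketch only, not the final code: it assumes sched_group_cpus()
is the span accessor (as used elsewhere in the hunks) and that
sched_domains_tmpmask is free to use as scratch space at this point.

	/*
	 * Sketch: pick the balance cpu as the first cpu that is in both the
	 * group's span and the freshly built mask, then link up the sgc.
	 */
	static void init_overlap_sched_group(struct sched_domain *sd,
					     struct sched_group *sg)
	{
		struct cpumask *mask = sched_domains_tmpmask;
		struct sd_data *sdd = sd->private;
		int cpu;

		/* Build the mask into scratch space, not sched_group_mask(sg). */
		build_group_mask(sd, sg, mask);

		/* Balance cpu: first cpu in both the group's span and its mask. */
		cpu = cpumask_first_and(sched_group_cpus(sg), mask);

		/* Link up the balance cpu's sgc ... */
		sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);

		/* ... and only the first reference installs the mask for real. */
		if (atomic_inc_return(&sg->sgc->ref) == 1)
			cpumask_copy(sched_group_mask(sg), mask);

		/* sgc->capacity initialization continues as in the existing code. */
	}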