From: Beata Michalska <beata.michalska@arm.com>
To: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org, mingo@redhat.com,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	valentin.schneider@arm.com, corbet@lwn.net, rdunlap@infradead.org,
	linux-doc@vger.kernel.org
Subject: Re: [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection
Date: Wed, 2 Jun 2021 20:54:35 +0100
Message-ID: <20210602195435.GB18136@e120325.cambridge.arm.com>
In-Reply-To: <8ea4cfc2-514b-6b5c-7269-7720a54dbb39@arm.com>

On Wed, Jun 02, 2021 at 09:09:54PM +0200, Dietmar Eggemann wrote:
> On 27/05/2021 17:38, Beata Michalska wrote:
>
> [...]
>
> > +/*
> > + * Verify whether there is any CPU capacity asymmetry in a given sched domain.
> > + * Provides sd_flags reflecting the asymmetry scope.
> > + */
> > +static inline int
> > +asym_cpu_capacity_classify(struct sched_domain *sd,
> > +			   const struct cpumask *cpu_map)
> > +{
> > +	struct asym_cap_data *entry;
> > +	int sd_asym_flags = 0;
> > +	int asym_cap_count = 0;
> > +	int asym_cap_miss = 0;
> > +
> > +	/*
> > +	 * Count how many unique CPU capacities this domain spans across
> > +	 * (compare sched_domain CPUs mask with ones representing available
> > +	 * CPUs capacities). Take into account CPUs that might be offline:
> > +	 * skip those.
> > +	 */
> > +	list_for_each_entry(entry, &asym_cap_list, link) {
> > +		if (cpumask_intersects(sched_domain_span(sd),
> > +				       cpu_capacity_span(entry)))
> > +			++asym_cap_count;
> > +		else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))
>
> nit: `sd span, entry span` but `entry span, cpu_map`. Why not
> `cpu_map, entry span`?
>
I cannot recall any reason for that.

> > +			++asym_cap_miss;
> > +	}
> > +	/* No asymmetry detected */
> > +	if (WARN_ON_ONCE(!asym_cap_count) || asym_cap_count == 1)
> > +		goto leave;
> > +
> > +	sd_asym_flags |= SD_ASYM_CPUCAPACITY;
> > +
> > +	/*
> > +	 * All the available capacities have been found within given sched
> > +	 * domain: no misses reported.
> > +	 */
> > +	if (!asym_cap_miss)
> > +		sd_asym_flags |= SD_ASYM_CPUCAPACITY_FULL;
> > +
> > +leave:
> > +	return sd_asym_flags;
> > +}
>
> Everything looks good, except that I like this more compact version
> better, proposed in:
>
> https://lkml.kernel.org/r/YK9ESqNEo+uacyMD@hirez.programming.kicks-ass.net
>
> And passing `const struct cpumask *sd_span` instead of `struct
> sched_domain *sd` into the function.
>
I do understand the parameter argument, though honestly I don't see much
difference in the naming, or between a single return statement for the
asymmetric topologies and two of them. But if that is the more
preferred/readable version, I do not mind changing it as well.

Thanks for the review.

---
BR
B.

> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 77b73abbb9a4..0de8eebded9f 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1290,13 +1290,11 @@ static LIST_HEAD(asym_cap_list);
>   * Provides sd_flags reflecting the asymmetry scope.
>   */
>  static inline int
> -asym_cpu_capacity_classify(struct sched_domain *sd,
> +asym_cpu_capacity_classify(const struct cpumask *sd_span,
>  			   const struct cpumask *cpu_map)
>  {
>  	struct asym_cap_data *entry;
> -	int sd_asym_flags = 0;
> -	int asym_cap_count = 0;
> -	int asym_cap_miss = 0;
> +	int count = 0, miss = 0;
>
>  	/*
>  	 * Count how many unique CPU capacities this domain spans across
> @@ -1305,27 +1303,20 @@ asym_cpu_capacity_classify(struct sched_domain *sd,
>  	 * skip those.
>  	 */
>  	list_for_each_entry(entry, &asym_cap_list, link) {
> -		if (cpumask_intersects(sched_domain_span(sd),
> -				       cpu_capacity_span(entry)))
> -			++asym_cap_count;
> -		else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))
> -			++asym_cap_miss;
> +		if (cpumask_intersects(sd_span, cpu_capacity_span(entry)))
> +			++count;
> +		else if (cpumask_intersects(cpu_map, cpu_capacity_span(entry)))
> +			++miss;
>  	}
> -	/* No asymmetry detected */
> -	if (WARN_ON_ONCE(!asym_cap_count) || asym_cap_count == 1)
> -		goto leave;
>
> -	sd_asym_flags |= SD_ASYM_CPUCAPACITY;
> +	if (WARN_ON_ONCE(!count) || count == 1) /* No asymmetry */
> +		return 0;
>
> -	/*
> -	 * All the available capacities have been found within given sched
> -	 * domain: no misses reported.
> -	 */
> -	if (!asym_cap_miss)
> -		sd_asym_flags |= SD_ASYM_CPUCAPACITY_FULL;
> +	if (miss) /* Partial asymmetry */
> +		return SD_ASYM_CPUCAPACITY;
>
> -leave:
> -	return sd_asym_flags;
> +	/* Full asymmetry */
> +	return SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL;
>  }
>
>  static inline void asym_cpu_capacity_update_data(int cpu)
> @@ -1510,6 +1501,7 @@ sd_init(struct sched_domain_topology_level *tl,
>  	struct sd_data *sdd = &tl->data;
>  	struct sched_domain *sd = *per_cpu_ptr(sdd->sd, cpu);
>  	int sd_id, sd_weight, sd_flags = 0;
> +	struct cpumask *sd_span;
>
>  #ifdef CONFIG_NUMA
>  	/*
> @@ -1557,10 +1549,11 @@ sd_init(struct sched_domain_topology_level *tl,
>  #endif
>  	};
>
> -	cpumask_and(sched_domain_span(sd), cpu_map, tl->mask(cpu));
> -	sd_id = cpumask_first(sched_domain_span(sd));
> +	sd_span = sched_domain_span(sd);
> +	cpumask_and(sd_span, cpu_map, tl->mask(cpu));
> +	sd_id = cpumask_first(sd_span);
>
> -	sd->flags |= asym_cpu_capacity_classify(sd, cpu_map);
> +	sd->flags |= asym_cpu_capacity_classify(sd_span, cpu_map);
>
>  	WARN_ONCE((sd->flags & (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY)) ==
>  		  (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY),
> --
> 2.25.1
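
For illustration, the count/miss classification discussed above can be
exercised as a minimal standalone sketch. Plain bitmasks stand in for
struct cpumask, and the flag values as well as the 8-CPU big.LITTLE layout
below are made up for the example rather than taken from the kernel; the
sketch mirrors the classification decision only, not the actual kernel API.

	#include <stdio.h>

	/* Placeholder values; not the kernel's SD_ASYM_* flag definitions. */
	#define SD_ASYM_CPUCAPACITY      0x1
	#define SD_ASYM_CPUCAPACITY_FULL 0x2

	/* One entry per unique capacity value: hypothetical 4 LITTLE + 4 big CPUs. */
	static const unsigned long cap_spans[] = { 0x0f /* LITTLE */, 0xf0 /* big */ };
	#define NR_SPANS (sizeof(cap_spans) / sizeof(cap_spans[0]))

	static int classify(unsigned long sd_span, unsigned long cpu_map)
	{
		int count = 0, miss = 0;
		size_t i;

		for (i = 0; i < NR_SPANS; i++) {
			if (sd_span & cap_spans[i])
				++count;	/* capacity present in this domain */
			else if (cpu_map & cap_spans[i])
				++miss;		/* available in the system, but not here */
		}

		if (count <= 1)			/* no asymmetry */
			return 0;
		if (miss)			/* partial asymmetry */
			return SD_ASYM_CPUCAPACITY;
		/* full asymmetry: every available capacity seen in this domain */
		return SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL;
	}

	int main(void)
	{
		printf("LITTLE-only domain: 0x%x\n", classify(0x0f, 0xff));	/* 0x0 */
		printf("domain of all CPUs: 0x%x\n", classify(0xff, 0xff));	/* 0x3 */
		return 0;
	}

Built with any C compiler, this prints 0x0 for the LITTLE-only domain
(a single capacity, so no asymmetry) and 0x3 for the domain spanning all
CPUs (both flags set, i.e. full asymmetry).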
Thread overview: 9+ messages

  2021-05-27 15:38 [PATCH v6 0/3] Rework CPU capacity asymmetry detection -- Beata Michalska
  2021-05-27 15:38 ` [PATCH v6 1/3] sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag -- Beata Michalska
  2021-05-27 15:38 ` [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection -- Beata Michalska
  2021-06-02 12:50   ` Valentin Schneider
  2021-06-02 13:03     ` Beata Michalska
  2021-06-02 19:09   ` Dietmar Eggemann
  2021-06-02 19:54     ` Beata Michalska [this message]
  2021-05-27 15:38 ` [PATCH v6 3/3] sched/doc: Update the CPU capacity asymmetry bits -- Beata Michalska
  2021-06-02 19:10 ` [PATCH v6 0/3] Rework CPU capacity asymmetry detection -- Dietmar Eggemann