* [PATCH v6 0/3] Rework CPU capacity asymmetry detection
@ 2021-05-27 15:38 Beata Michalska
  2021-05-27 15:38 ` [PATCH v6 1/3] sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag Beata Michalska
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Beata Michalska @ 2021-05-27 15:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, juri.lelli, vincent.guittot, valentin.schneider,
	dietmar.eggemann, corbet, rdunlap, linux-doc

As of now, asym_cpu_capacity_level() tries to locate the lowest
topology level at which the highest available CPU capacity is visible
to all CPUs. This works perfectly fine for most of the existing
asymmetric designs out there, though for some possible and completely
valid setups, combining different CPU microarchitectures within
clusters, this might not be the best approach, as it may end up
pointing at a level at which some of the domains do not see any
asymmetry at all. This can be problematic for misfit migration and/or
energy-aware placement, and for affected platforms it might result
in custom changes to the wake-up and CPU selection paths.

As mentioned in the previous version, based on the available sources out
there, one of the platforms potentially affected by the original approach
might be the Exynos 9820/990, with its 'sliced' LLC (SLC) divided between
the two custom (big) cores and the remaining A75/A55 cores, which seems to
be reflected in the publicly available DT entries for those platforms.

The following patches rework how the asymmetry detection is carried
out, pinning the asymmetric topology level to the lowest one at which
the full range of CPU capacities is visible to all CPUs within the
given sched domain. The detection also keeps track of the levels where
any scope of asymmetry is observed, to denote the corresponding sched
domains with the SD_ASYM_CPUCAPACITY flag and to enable misfit
migration for those.

In order to distinguish sched domains with a partial vs a full range
of CPU capacity asymmetry, a new sched_domain flag has been introduced:
SD_ASYM_CPUCAPACITY_FULL.

The overall idea of changing the asymmetry detection has been suggested
by Valentin Schneider <valentin.schneider@arm.com>

Verified on (mostly):
    - QEMU (version 4.2.1) with variants of possible asymmetric topologies
	- machine: virt
	- modifying the device-tree 'cpus' node for virt machine:

	qemu-system-aarch64 -kernel $KERNEL_IMG
	    -drive format=qcow2,file=$IMAGE
	    -append 'root=/dev/vda earlycon console=ttyAMA0 sched_debug
	     sched_verbose loglevel=15 kmemleak=on' -m 2G  --nographic
	    -cpu cortex-a57 -machine virt -smp cores=8
	    -machine dumpdtb=$CUSTOM_DTB.dtb

	$KERNEL_PATH/scripts/dtc/dtc -I dtb -O dts $CUSTOM_DTB.dtb >
	$CUSTOM_DTB.dts

	(modify the dts - see the capacity-dmips-mhz example after these steps)

	$KERNEL_PATH/scripts/dtc/dtc -I dts -O dtb $CUSTOM_DTB.dts >
	$CUSTOM_DTB.dtb

	qemu-system-aarch64 -kernel $KERNEL_IMG
	    -drive format=qcow2,file=$IMAGE
	    -append 'root=/dev/vda earlycon console=ttyAMA0 sched_debug
	     sched_verbose loglevel=15 kmemleak=on' -m 2G  --nographic
	    -cpu cortex-a57 -machine virt -smp cores=8
	    -machine dtb=$CUSTOM_DTB.dtb
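
	The dts modification mentioned above boils down to assigning
	different capacity-dmips-mhz values to the cpu nodes (the property
	the generic arch_topology code uses to derive CPU capacities).
	A hypothetical fragment - node names and values chosen purely to
	mimic a tri-gear setup, not taken from any real platform:

	cpus {
		cpu@0 {
			device_type = "cpu";
			compatible = "arm,cortex-a57";
			reg = <0x0>;
			capacity-dmips-mhz = <512>;	/* 'little' */
		};
		/* ... cpu@1 - cpu@6 with e.g. <512>/<871> ... */
		cpu@7 {
			device_type = "cpu";
			compatible = "arm,cortex-a57";
			reg = <0x7>;
			capacity-dmips-mhz = <1024>;	/* 'big' */
		};
	};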

v6:
 - improving code readability
v5:
 - building CPUs list based on their capacity now triggered upon init
   and explicit request from arch specific code to rebuild sched domains
 - detecting asymmetry scope now done directly in sd_init
v4:
 - Based on Peter's idea, reworking asym detection to use per-cpu
   capacity list to serve as base for determining the asym scope
v3:
 - Additional style/doc fixes
v2:
 - Fixed style issues
 - Reworked accessing the cached topology data as suggested by Valentin



Beata Michalska (3):
  sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag
  sched/topology: Rework CPU capacity asymmetry detection
  sched/doc: Update the CPU capacity asymmetry bits

 Documentation/scheduler/sched-capacity.rst |   6 +-
 Documentation/scheduler/sched-energy.rst   |   2 +-
 include/linux/sched/sd_flags.h             |  10 ++
 kernel/sched/topology.c                    | 194 +++++++++++++--------
 4 files changed, 133 insertions(+), 79 deletions(-)

-- 
2.17.1



* [PATCH v6 1/3] sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag
  2021-05-27 15:38 [PATCH v6 0/3] Rework CPU capacity asymmetry detection Beata Michalska
@ 2021-05-27 15:38 ` Beata Michalska
  2021-05-27 15:38 ` [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection Beata Michalska
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 9+ messages in thread
From: Beata Michalska @ 2021-05-27 15:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, juri.lelli, vincent.guittot, valentin.schneider,
	dietmar.eggemann, corbet, rdunlap, linux-doc

Introduce a new sched_domain topology flag, complementary to
SD_ASYM_CPUCAPACITY, to distinguish between sched_domains where any CPU
capacity asymmetry is detected (SD_ASYM_CPUCAPACITY) and ones where
the full set of CPU capacities is visible to all domain members
(SD_ASYM_CPUCAPACITY_FULL).

With the distinction between full and partial CPU capacity asymmetry,
brought in by the newly introduced flag, the scope of the original
SD_ASYM_CPUCAPACITY flag gets shifted. It still maintains the existing
behaviour when detected on a given sched domain, allowing misfit
migrations within sched domains that do not observe the full range
of CPU capacities but still have members with different capacity
values. It does, however, lose its meaning as the marker of the lowest
CPU-asymmetry sched_domain level tracked by the per-CPU
sd_asym_cpucapacity pointer, which is now denoted by the
SD_ASYM_CPUCAPACITY_FULL flag.

Signed-off-by: Beata Michalska <beata.michalska@arm.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
---
 include/linux/sched/sd_flags.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index 34b21e971d77..57bde66d95f7 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -90,6 +90,16 @@ SD_FLAG(SD_WAKE_AFFINE, SDF_SHARED_CHILD)
  */
 SD_FLAG(SD_ASYM_CPUCAPACITY, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
 
+/*
+ * Domain members have different CPU capacities spanning all unique CPU
+ * capacity values.
+ *
+ * SHARED_PARENT: Set from the topmost domain down to the first domain where
+ *		  all available CPU capacities are visible
+ * NEEDS_GROUPS: Per-CPU capacity is asymmetric between groups.
+ */
+SD_FLAG(SD_ASYM_CPUCAPACITY_FULL, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
+
 /*
  * Domain members share CPU capacity (i.e. SMT)
  *
-- 
2.17.1



* [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection
  2021-05-27 15:38 [PATCH v6 0/3] Rework CPU capacity asymmetry detection Beata Michalska
  2021-05-27 15:38 ` [PATCH v6 1/3] sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag Beata Michalska
@ 2021-05-27 15:38 ` Beata Michalska
  2021-06-02 12:50   ` Valentin Schneider
  2021-06-02 19:09   ` Dietmar Eggemann
  2021-05-27 15:38 ` [PATCH v6 3/3] sched/doc: Update the CPU capacity asymmetry bits Beata Michalska
  2021-06-02 19:10 ` [PATCH v6 0/3] Rework CPU capacity asymmetry detection Dietmar Eggemann
  3 siblings, 2 replies; 9+ messages in thread
From: Beata Michalska @ 2021-05-27 15:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, juri.lelli, vincent.guittot, valentin.schneider,
	dietmar.eggemann, corbet, rdunlap, linux-doc

Currently the CPU capacity asymmetry detection, performed through
asym_cpu_capacity_level(), tries to identify the lowest topology level
at which the highest CPU capacity is observed, not necessarily
finding the level at which all possible capacity values are visible
to all CPUs, which might be a bit problematic for some possible/valid
asymmetric topologies, e.g.:

DIE      [                                ]
MC       [                       ][       ]

CPU       [0] [1] [2] [3] [4] [5]  [6] [7]
Capacity  |.....| |.....| |.....|  |.....|
	     L	     M       B        B

Where:
 arch_scale_cpu_capacity(L) = 512
 arch_scale_cpu_capacity(M) = 871
 arch_scale_cpu_capacity(B) = 1024

In this particular case, the asymmetric topology level will point
at MC, as all possible CPU masks for that level do cover the CPU
with the highest capacity. It will work just fine for the first
cluster, but not so much for the second one (consider
find_energy_efficient_cpu(), which might end up attempting the
energy-aware wake-up for a domain that does not see any asymmetry
at all).

Rework the way the capacity asymmetry levels are detected, pointing
(for a given CPU) to the lowest topology level where the full set of
available CPU capacities is visible to all CPUs within the given
domain. As a result, the per-CPU sd_asym_cpucapacity pointer might
differ across domains. This impacts EAS wake-up placement in that a
different range of CPUs might be considered, depending on the given
current and target CPUs.

Additionally, the levels where any range of asymmetry (not
necessarily the full one) is detected get identified as well.
The selected asymmetric topology level is denoted by the
SD_ASYM_CPUCAPACITY_FULL sched_domain flag, whereas the 'sub-levels'
receive the already existing SD_ASYM_CPUCAPACITY flag. This maintains
the current behaviour for asymmetric topologies, with misfit migration
operating correctly on lower levels, if applicable, as any asymmetry
is enough to trigger it. The misfit logic relies on the
SD_ASYM_CPUCAPACITY flag and does not relate to the full asymmetry
level denoted by the sd_asym_cpucapacity pointer.
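
As a rough illustration (applying the semantics above to the example
topology earlier in this changelog), the flags would end up distributed as:

 MC  [0-5]: SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL (spans L, M, B)
 MC  [6-7]: no asymmetry flags (spans B only)
 DIE [0-7]: SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL

with the per-CPU sd_asym_cpucapacity pointer resolving to MC for
CPUs 0-5 and to DIE for CPUs 6-7.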

Detecting the CPU capacity asymmetry is based on a set of available
CPU capacities for all possible CPUs. This data is generated upon init
and updated once CPU topology changes are detected (through
arch_update_cpu_topology). As such, any changes to the identified CPU
capacities (like initializing cpufreq) need to be explicitly advertised
by the corresponding archs to trigger rebuilding the data; a sketch of
what that amounts to is included below.
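
A minimal arch-side sketch of such an 'advertisement' (function and
variable names here are purely illustrative, loosely mirroring what
drivers/base/arch_topology.c does - not a definitive API):

	#include <linux/cpuset.h>	/* rebuild_sched_domains() */

	/* Hypothetical arch code: advertise updated CPU capacities */
	static int capacities_updated;

	int arch_update_cpu_topology(void)
	{
		/* Consulted by the sched domain rebuild paths */
		return capacities_updated;
	}

	static void advertise_capacity_update(void)	/* hypothetical */
	{
		capacities_updated = 1;
		/* Rebuild domains; this re-runs asym_cpu_capacity_scan() */
		rebuild_sched_domains();
		capacities_updated = 0;
	}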

The additional 'dflags' parameter, used when building sched domains,
has been removed as well, as the asymmetry flags are now set directly
in sd_init().

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Beata Michalska <beata.michalska@arm.com>
---
 kernel/sched/topology.c | 194 ++++++++++++++++++++++++----------------
 1 file changed, 118 insertions(+), 76 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 55a0a243e871..77e6f79235ad 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -675,7 +675,7 @@ static void update_top_cache_domain(int cpu)
 	sd = highest_flag_domain(cpu, SD_ASYM_PACKING);
 	rcu_assign_pointer(per_cpu(sd_asym_packing, cpu), sd);
 
-	sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY);
+	sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY_FULL);
 	rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
 }
 
@@ -1266,6 +1266,112 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
 	update_group_capacity(sd, cpu);
 }
 
+/**
+ * Asymmetric CPU capacity bits
+ */
+struct asym_cap_data {
+	struct list_head link;
+	unsigned long capacity;
+	unsigned long cpus[];
+};
+
+/*
+ * Set of available CPUs grouped by their corresponding capacities
+ * Each list entry contains a CPU mask reflecting CPUs that share the same
+ * capacity.
+ * The lifespan of data is unlimited.
+ */
+static LIST_HEAD(asym_cap_list);
+
+#define cpu_capacity_span(asym_data) to_cpumask((asym_data)->cpus)
+
+/*
+ * Verify whether there is any CPU capacity asymmetry in a given sched domain.
+ * Provides sd_flags reflecting the asymmetry scope.
+ */
+static inline int
+asym_cpu_capacity_classify(struct sched_domain *sd,
+			   const struct cpumask *cpu_map)
+{
+	struct asym_cap_data *entry;
+	int sd_asym_flags = 0;
+	int asym_cap_count = 0;
+	int asym_cap_miss = 0;
+
+	/*
+	 * Count how many unique CPU capacities this domain spans across
+	 * (compare sched_domain CPUs mask with ones representing  available
+	 * CPUs capacities). Take into account CPUs that might be offline:
+	 * skip those.
+	 */
+	list_for_each_entry(entry, &asym_cap_list, link) {
+		if (cpumask_intersects(sched_domain_span(sd),
+				       cpu_capacity_span(entry)))
+			++asym_cap_count;
+		else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))
+			++asym_cap_miss;
+	}
+	/* No asymmetry detected */
+	if (WARN_ON_ONCE(!asym_cap_count) || asym_cap_count == 1)
+		goto leave;
+
+	sd_asym_flags |= SD_ASYM_CPUCAPACITY;
+
+	/*
+	 * All the available capacities have been found within given sched
+	 * domain: no misses reported.
+	 */
+	if (!asym_cap_miss)
+		sd_asym_flags |= SD_ASYM_CPUCAPACITY_FULL;
+
+leave:
+	return sd_asym_flags;
+}
+
+static inline void asym_cpu_capacity_update_data(int cpu)
+{
+	unsigned long capacity = arch_scale_cpu_capacity(cpu);
+	struct asym_cap_data *entry = NULL;
+
+	list_for_each_entry(entry, &asym_cap_list, link) {
+		if (capacity == entry->capacity)
+			goto done;
+	}
+
+	entry = kzalloc(sizeof(*entry) + cpumask_size(), GFP_KERNEL);
+	if (WARN_ONCE(!entry, "Failed to allocate memory for asymmetry data\n"))
+		return;
+	entry->capacity = capacity;
+	list_add(&entry->link, &asym_cap_list);
+done:
+	__cpumask_set_cpu(cpu, cpu_capacity_span(entry));
+}
+
+/*
+ * Build-up/update list of CPUs grouped by their capacities
+ * An update requires explicit request to rebuild sched domains
+ * with state indicating CPU topology changes.
+ */
+static void asym_cpu_capacity_scan(void)
+{
+	struct asym_cap_data *entry, *next;
+	int cpu;
+
+	list_for_each_entry(entry, &asym_cap_list, link)
+		cpumask_clear(cpu_capacity_span(entry));
+
+	for_each_cpu_and(cpu, cpu_possible_mask,
+			 housekeeping_cpumask(HK_FLAG_DOMAIN))
+		asym_cpu_capacity_update_data(cpu);
+
+	list_for_each_entry_safe(entry, next, &asym_cap_list, link) {
+		if (cpumask_empty(cpu_capacity_span(entry))) {
+			list_del(&entry->link);
+			kfree(entry);
+		}
+	}
+}
+
 /*
  * Initializers for schedule domains
  * Non-inlined to reduce accumulated stack pressure in build_sched_domains()
@@ -1399,7 +1505,7 @@ int __read_mostly		node_reclaim_distance = RECLAIM_DISTANCE;
 static struct sched_domain *
 sd_init(struct sched_domain_topology_level *tl,
 	const struct cpumask *cpu_map,
-	struct sched_domain *child, int dflags, int cpu)
+	struct sched_domain *child, int cpu)
 {
 	struct sd_data *sdd = &tl->data;
 	struct sched_domain *sd = *per_cpu_ptr(sdd->sd, cpu);
@@ -1420,9 +1526,6 @@ sd_init(struct sched_domain_topology_level *tl,
 			"wrong sd_flags in topology description\n"))
 		sd_flags &= TOPOLOGY_SD_FLAGS;
 
-	/* Apply detected topology flags */
-	sd_flags |= dflags;
-
 	*sd = (struct sched_domain){
 		.min_interval		= sd_weight,
 		.max_interval		= 2*sd_weight,
@@ -1457,10 +1560,10 @@ sd_init(struct sched_domain_topology_level *tl,
 	cpumask_and(sched_domain_span(sd), cpu_map, tl->mask(cpu));
 	sd_id = cpumask_first(sched_domain_span(sd));
 
+	sd->flags |= asym_cpu_capacity_classify(sd, cpu_map);
 	/*
 	 * Convert topological properties into behaviour.
 	 */
-
 	/* Don't attempt to spread across CPUs of different capacities. */
 	if ((sd->flags & SD_ASYM_CPUCAPACITY) && sd->child)
 		sd->child->flags &= ~SD_PREFER_SIBLING;
@@ -1926,9 +2029,9 @@ static void __sdt_free(const struct cpumask *cpu_map)
 
 static struct sched_domain *build_sched_domain(struct sched_domain_topology_level *tl,
 		const struct cpumask *cpu_map, struct sched_domain_attr *attr,
-		struct sched_domain *child, int dflags, int cpu)
+		struct sched_domain *child, int cpu)
 {
-	struct sched_domain *sd = sd_init(tl, cpu_map, child, dflags, cpu);
+	struct sched_domain *sd = sd_init(tl, cpu_map, child, cpu);
 
 	if (child) {
 		sd->level = child->level + 1;
@@ -1990,65 +2093,6 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
 	return true;
 }
 
-/*
- * Find the sched_domain_topology_level where all CPU capacities are visible
- * for all CPUs.
- */
-static struct sched_domain_topology_level
-*asym_cpu_capacity_level(const struct cpumask *cpu_map)
-{
-	int i, j, asym_level = 0;
-	bool asym = false;
-	struct sched_domain_topology_level *tl, *asym_tl = NULL;
-	unsigned long cap;
-
-	/* Is there any asymmetry? */
-	cap = arch_scale_cpu_capacity(cpumask_first(cpu_map));
-
-	for_each_cpu(i, cpu_map) {
-		if (arch_scale_cpu_capacity(i) != cap) {
-			asym = true;
-			break;
-		}
-	}
-
-	if (!asym)
-		return NULL;
-
-	/*
-	 * Examine topology from all CPU's point of views to detect the lowest
-	 * sched_domain_topology_level where a highest capacity CPU is visible
-	 * to everyone.
-	 */
-	for_each_cpu(i, cpu_map) {
-		unsigned long max_capacity = arch_scale_cpu_capacity(i);
-		int tl_id = 0;
-
-		for_each_sd_topology(tl) {
-			if (tl_id < asym_level)
-				goto next_level;
-
-			for_each_cpu_and(j, tl->mask(i), cpu_map) {
-				unsigned long capacity;
-
-				capacity = arch_scale_cpu_capacity(j);
-
-				if (capacity <= max_capacity)
-					continue;
-
-				max_capacity = capacity;
-				asym_level = tl_id;
-				asym_tl = tl;
-			}
-next_level:
-			tl_id++;
-		}
-	}
-
-	return asym_tl;
-}
-
-
 /*
  * Build sched domains for a given set of CPUs and attach the sched domains
  * to the individual CPUs
@@ -2061,7 +2105,6 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 	struct s_data d;
 	struct rq *rq = NULL;
 	int i, ret = -ENOMEM;
-	struct sched_domain_topology_level *tl_asym;
 	bool has_asym = false;
 
 	if (WARN_ON(cpumask_empty(cpu_map)))
@@ -2071,24 +2114,19 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 	if (alloc_state != sa_rootdomain)
 		goto error;
 
-	tl_asym = asym_cpu_capacity_level(cpu_map);
-
 	/* Set up domains for CPUs specified by the cpu_map: */
 	for_each_cpu(i, cpu_map) {
 		struct sched_domain_topology_level *tl;
-		int dflags = 0;
 
 		sd = NULL;
 		for_each_sd_topology(tl) {
-			if (tl == tl_asym) {
-				dflags |= SD_ASYM_CPUCAPACITY;
-				has_asym = true;
-			}
 
 			if (WARN_ON(!topology_span_sane(tl, cpu_map, i)))
 				goto error;
 
-			sd = build_sched_domain(tl, cpu_map, attr, sd, dflags, i);
+			sd = build_sched_domain(tl, cpu_map, attr, sd, i);
+
+			has_asym |= sd->flags & SD_ASYM_CPUCAPACITY;
 
 			if (tl == sched_domain_topology)
 				*per_cpu_ptr(d.sd, i) = sd;
@@ -2217,6 +2255,7 @@ int sched_init_domains(const struct cpumask *cpu_map)
 	zalloc_cpumask_var(&fallback_doms, GFP_KERNEL);
 
 	arch_update_cpu_topology();
+	asym_cpu_capacity_scan();
 	ndoms_cur = 1;
 	doms_cur = alloc_sched_domains(ndoms_cur);
 	if (!doms_cur)
@@ -2299,6 +2338,9 @@ void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
 
 	/* Let the architecture update CPU core mappings: */
 	new_topology = arch_update_cpu_topology();
+	/* Trigger rebuilding CPU capacity asymmetry data */
+	if (new_topology)
+		asym_cpu_capacity_scan();
 
 	if (!doms_new) {
 		WARN_ON_ONCE(dattr_new);
-- 
2.17.1



* [PATCH v6 3/3] sched/doc: Update the CPU capacity asymmetry bits
  2021-05-27 15:38 [PATCH v6 0/3] Rework CPU capacity asymmetry detection Beata Michalska
  2021-05-27 15:38 ` [PATCH v6 1/3] sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag Beata Michalska
  2021-05-27 15:38 ` [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection Beata Michalska
@ 2021-05-27 15:38 ` Beata Michalska
  2021-06-02 19:10 ` [PATCH v6 0/3] Rework CPU capacity asymmetry detection Dietmar Eggemann
  3 siblings, 0 replies; 9+ messages in thread
From: Beata Michalska @ 2021-05-27 15:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, juri.lelli, vincent.guittot, valentin.schneider,
	dietmar.eggemann, corbet, rdunlap, linux-doc

Update the documentation bits referring to capacity-aware scheduling
with regard to the newly introduced SD_ASYM_CPUCAPACITY_FULL
sched_domain flag.

Signed-off-by: Beata Michalska <beata.michalska@arm.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
---
 Documentation/scheduler/sched-capacity.rst | 6 ++++--
 Documentation/scheduler/sched-energy.rst   | 2 +-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/Documentation/scheduler/sched-capacity.rst b/Documentation/scheduler/sched-capacity.rst
index 9b7cbe43b2d1..805f85f330b5 100644
--- a/Documentation/scheduler/sched-capacity.rst
+++ b/Documentation/scheduler/sched-capacity.rst
@@ -284,8 +284,10 @@ whether the system exhibits asymmetric CPU capacities. Should that be the
 case:
 
 - The sched_asym_cpucapacity static key will be enabled.
-- The SD_ASYM_CPUCAPACITY flag will be set at the lowest sched_domain level that
-  spans all unique CPU capacity values.
+- The SD_ASYM_CPUCAPACITY_FULL flag will be set at the lowest sched_domain
+  level that spans all unique CPU capacity values.
+- The SD_ASYM_CPUCAPACITY flag will be set for any sched_domain that spans
+  CPUs with any range of asymmetry.
 
 The sched_asym_cpucapacity static key is intended to guard sections of code that
 cater to asymmetric CPU capacity systems. Do note however that said key is
diff --git a/Documentation/scheduler/sched-energy.rst b/Documentation/scheduler/sched-energy.rst
index afe02d394402..8fbce5e767d9 100644
--- a/Documentation/scheduler/sched-energy.rst
+++ b/Documentation/scheduler/sched-energy.rst
@@ -328,7 +328,7 @@ section lists these dependencies and provides hints as to how they can be met.
 
 As mentioned in the introduction, EAS is only supported on platforms with
 asymmetric CPU topologies for now. This requirement is checked at run-time by
-looking for the presence of the SD_ASYM_CPUCAPACITY flag when the scheduling
+looking for the presence of the SD_ASYM_CPUCAPACITY_FULL flag when the scheduling
 domains are built.
 
 See Documentation/scheduler/sched-capacity.rst for requirements to be met for this
-- 
2.17.1



* Re: [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection
  2021-05-27 15:38 ` [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection Beata Michalska
@ 2021-06-02 12:50   ` Valentin Schneider
  2021-06-02 13:03     ` Beata Michalska
  2021-06-02 19:09   ` Dietmar Eggemann
  1 sibling, 1 reply; 9+ messages in thread
From: Valentin Schneider @ 2021-06-02 12:50 UTC (permalink / raw)
  To: Beata Michalska, linux-kernel
  Cc: peterz, mingo, juri.lelli, vincent.guittot, dietmar.eggemann,
	corbet, rdunlap, linux-doc

On 27/05/21 16:38, Beata Michalska wrote:
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Suggested-by: Valentin Schneider <valentin.schneider@arm.com>
> Signed-off-by: Beata Michalska <beata.michalska@arm.com>

I ran this through the usual series of tests ('exotic' topologies, hotplug
and exclusive cpusets), it all behaves as expected.

Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

Some tiny cosmetic nits below, which don't warrant a new revision, and a
comment wrt purely symmetric systems.

> ---
>  kernel/sched/topology.c | 194 ++++++++++++++++++++++++----------------
>  1 file changed, 118 insertions(+), 76 deletions(-)
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 55a0a243e871..77e6f79235ad 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c

> +/*
> + * Verify whether there is any CPU capacity asymmetry in a given sched domain.
> + * Provides sd_flags reflecting the asymmetry scope.
> + */
> +static inline int
> +asym_cpu_capacity_classify(struct sched_domain *sd,
> +			   const struct cpumask *cpu_map)
> +{
> +	struct asym_cap_data *entry;
> +	int sd_asym_flags = 0;
> +	int asym_cap_count = 0;
> +	int asym_cap_miss = 0;
> +
> +	/*
> +	 * Count how many unique CPU capacities this domain spans across
> +	 * (compare sched_domain CPUs mask with ones representing  available
> +	 * CPUs capacities). Take into account CPUs that might be offline:
> +	 * skip those.
> +	 */
> +	list_for_each_entry(entry, &asym_cap_list, link) {
> +		if (cpumask_intersects(sched_domain_span(sd),
> +				       cpu_capacity_span(entry)))

IMO this is one such place where the 80 chars limit can be omitted.

> +			++asym_cap_count;
> +		else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))
> +			++asym_cap_miss;
> +	}

> +/*
> + * Build-up/update list of CPUs grouped by their capacities
> + * An update requires explicit request to rebuild sched domains
> + * with state indicating CPU topology changes.
> + */
> +static void asym_cpu_capacity_scan(void)
> +{
> +	struct asym_cap_data *entry, *next;
> +	int cpu;
> +
> +	list_for_each_entry(entry, &asym_cap_list, link)
> +		cpumask_clear(cpu_capacity_span(entry));
> +
> +	for_each_cpu_and(cpu, cpu_possible_mask,
> +			 housekeeping_cpumask(HK_FLAG_DOMAIN))

Ditto on keeping this on a single line.

> +		asym_cpu_capacity_update_data(cpu);
> +
> +	list_for_each_entry_safe(entry, next, &asym_cap_list, link) {
> +		if (cpumask_empty(cpu_capacity_span(entry))) {
> +			list_del(&entry->link);
> +			kfree(entry);
> +		}
> +	}
> +}

One "corner case" that comes to mind is systems / architectures which are
purely symmetric wrt CPU capacity. Our x86 friends might object to us
reserving a puny 24 bytes + cpumask_size() in a corner of their
memory.

Perhaps we could clear the list in the list_is_singular() case, and since
the rest of the code only does list iteration, this should 'naturally'
cover this case:

---
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 62d412013df8..b06d277fa280 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1305,14 +1305,13 @@ asym_cpu_capacity_classify(struct sched_domain *sd,
 	 * skip those.
 	 */
 	list_for_each_entry(entry, &asym_cap_list, link) {
-		if (cpumask_intersects(sched_domain_span(sd),
-				       cpu_capacity_span(entry)))
+		if (cpumask_intersects(sched_domain_span(sd), cpu_capacity_span(entry)))
 			++asym_cap_count;
 		else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))
 			++asym_cap_miss;
 	}
 	/* No asymmetry detected */
-	if (WARN_ON_ONCE(!asym_cap_count) || asym_cap_count == 1)
+	if (asym_cap_count < 2)
 		goto leave;
 
 	sd_asym_flags |= SD_ASYM_CPUCAPACITY;
@@ -1360,8 +1359,7 @@ static void asym_cpu_capacity_scan(void)
 	list_for_each_entry(entry, &asym_cap_list, link)
 		cpumask_clear(cpu_capacity_span(entry));
 
-	for_each_cpu_and(cpu, cpu_possible_mask,
-			 housekeeping_cpumask(HK_FLAG_DOMAIN))
+	for_each_cpu_and(cpu, cpu_possible_mask, housekeeping_cpumask(HK_FLAG_DOMAIN))
 		asym_cpu_capacity_update_data(cpu);
 
 	list_for_each_entry_safe(entry, next, &asym_cap_list, link) {
@@ -1370,6 +1368,16 @@ static void asym_cpu_capacity_scan(void)
 			kfree(entry);
 		}
 	}
+
+	/*
+	 * There's only one capacity value, i.e. this system is symmetric.
+	 * No need to keep this data around.
+	 */
+	if (list_is_singular(&asym_cap_list)) {
+		entry = list_first_entry(&asym_cap_list, typeof(*entry), link);
+		list_del(&entry->link);
+		kfree(entry);
+	}
 }
 
 /*


* Re: [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection
  2021-06-02 12:50   ` Valentin Schneider
@ 2021-06-02 13:03     ` Beata Michalska
  0 siblings, 0 replies; 9+ messages in thread
From: Beata Michalska @ 2021-06-02 13:03 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: linux-kernel, peterz, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, corbet, rdunlap, linux-doc

On Wed, Jun 02, 2021 at 01:50:21PM +0100, Valentin Schneider wrote:
> On 27/05/21 16:38, Beata Michalska wrote:
> > Suggested-by: Peter Zijlstra <peterz@infradead.org>
> > Suggested-by: Valentin Schneider <valentin.schneider@arm.com>
> > Signed-off-by: Beata Michalska <beata.michalska@arm.com>
> 
> I ran this through the usual series of tests ('exotic' topologies, hotplug
> and exclusive cpusets), it all behaves as expected.
> 
Thanks for that!

> Tested-by: Valentin Schneider <valentin.schneider@arm.com>
> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
> 
> Some tiny cosmetic nits below, which don't warrant a new revision, and a
> comment wrt purely symmetric systems.
> 
> > ---
> >  kernel/sched/topology.c | 194 ++++++++++++++++++++++++----------------
> >  1 file changed, 118 insertions(+), 76 deletions(-)
> >
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index 55a0a243e871..77e6f79235ad 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> 
> > +/*
> > + * Verify whether there is any CPU capacity asymmetry in a given sched domain.
> > + * Provides sd_flags reflecting the asymmetry scope.
> > + */
> > +static inline int
> > +asym_cpu_capacity_classify(struct sched_domain *sd,
> > +			   const struct cpumask *cpu_map)
> > +{
> > +	struct asym_cap_data *entry;
> > +	int sd_asym_flags = 0;
> > +	int asym_cap_count = 0;
> > +	int asym_cap_miss = 0;
> > +
> > +	/*
> > +	 * Count how many unique CPU capacities this domain spans across
> > +	 * (compare sched_domain CPUs mask with ones representing  available
> > +	 * CPUs capacities). Take into account CPUs that might be offline:
> > +	 * skip those.
> > +	 */
> > +	list_for_each_entry(entry, &asym_cap_list, link) {
> > +		if (cpumask_intersects(sched_domain_span(sd),
> > +				       cpu_capacity_span(entry)))
> 
> IMO this is one such place where the 80 chars limit can be omitted.
> 
> > +			++asym_cap_count;
> > +		else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))
> > +			++asym_cap_miss;
> > +	}
> 
> > +/*
> > + * Build-up/update list of CPUs grouped by their capacities
> > + * An update requires explicit request to rebuild sched domains
> > + * with state indicating CPU topology changes.
> > + */
> > +static void asym_cpu_capacity_scan(void)
> > +{
> > +	struct asym_cap_data *entry, *next;
> > +	int cpu;
> > +
> > +	list_for_each_entry(entry, &asym_cap_list, link)
> > +		cpumask_clear(cpu_capacity_span(entry));
> > +
> > +	for_each_cpu_and(cpu, cpu_possible_mask,
> > +			 housekeeping_cpumask(HK_FLAG_DOMAIN))
> 
> Ditto on keeping this on a single line.
> 
> > +		asym_cpu_capacity_update_data(cpu);
> > +
> > +	list_for_each_entry_safe(entry, next, &asym_cap_list, link) {
> > +		if (cpumask_empty(cpu_capacity_span(entry))) {
> > +			list_del(&entry->link);
> > +			kfree(entry);
> > +		}
> > +	}
> > +}
> 
> One "corner case" that comes to mind is systems / architectures which are
> purely symmetric wrt CPU capacity. Our x86 friends might object to us
> reserving a puny 24 bytes + cpumask_size() in a corner of their
> memory.
> 
> Perhaps we could clear the list in the list_is_singular() case, and since
> the rest of the code only does list iteration, this should 'naturally'
> cover this case:
>
Can do that.
I am also waiting for a reply regarding the asymmetry detected at the SMT level.
Once I get that solved, I will push a new version embedding your suggestions
as well.

Thanks for having a look!

---
BR
B.
> ---
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 62d412013df8..b06d277fa280 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1305,14 +1305,13 @@ asym_cpu_capacity_classify(struct sched_domain *sd,
>  	 * skip those.
>  	 */
>  	list_for_each_entry(entry, &asym_cap_list, link) {
> -		if (cpumask_intersects(sched_domain_span(sd),
> -				       cpu_capacity_span(entry)))
> +		if (cpumask_intersects(sched_domain_span(sd), cpu_capacity_span(entry)))
>  			++asym_cap_count;
>  		else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))
>  			++asym_cap_miss;
>  	}
>  	/* No asymmetry detected */
> -	if (WARN_ON_ONCE(!asym_cap_count) || asym_cap_count == 1)
> +	if (asym_cap_count < 2)
>  		goto leave;
>  
>  	sd_asym_flags |= SD_ASYM_CPUCAPACITY;
> @@ -1360,8 +1359,7 @@ static void asym_cpu_capacity_scan(void)
>  	list_for_each_entry(entry, &asym_cap_list, link)
>  		cpumask_clear(cpu_capacity_span(entry));
>  
> -	for_each_cpu_and(cpu, cpu_possible_mask,
> -			 housekeeping_cpumask(HK_FLAG_DOMAIN))
> +	for_each_cpu_and(cpu, cpu_possible_mask, housekeeping_cpumask(HK_FLAG_DOMAIN))
>  		asym_cpu_capacity_update_data(cpu);
>  
>  	list_for_each_entry_safe(entry, next, &asym_cap_list, link) {
> @@ -1370,6 +1368,16 @@ static void asym_cpu_capacity_scan(void)
>  			kfree(entry);
>  		}
>  	}
> +
> +	/*
> +	 * There's only one capacity value, i.e. this system is symmetric.
> +	 * No need to keep this data around.
> +	 */
> +	if (list_is_singular(&asym_cap_list)) {
> +		entry = list_first_entry(&asym_cap_list, typeof(*entry), link);
> +		list_del(&entry->link);
> +		kfree(entry);
> +	}
>  }
>  
>  /*


* Re: [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection
  2021-05-27 15:38 ` [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection Beata Michalska
  2021-06-02 12:50   ` Valentin Schneider
@ 2021-06-02 19:09   ` Dietmar Eggemann
  2021-06-02 19:54     ` Beata Michalska
  1 sibling, 1 reply; 9+ messages in thread
From: Dietmar Eggemann @ 2021-06-02 19:09 UTC (permalink / raw)
  To: Beata Michalska, linux-kernel
  Cc: peterz, mingo, juri.lelli, vincent.guittot, valentin.schneider,
	corbet, rdunlap, linux-doc

On 27/05/2021 17:38, Beata Michalska wrote:

[...]

> +/*
> + * Verify whether there is any CPU capacity asymmetry in a given sched domain.
> + * Provides sd_flags reflecting the asymmetry scope.
> + */
> +static inline int
> +asym_cpu_capacity_classify(struct sched_domain *sd,
> +			   const struct cpumask *cpu_map)
> +{
> +	struct asym_cap_data *entry;
> +	int sd_asym_flags = 0;
> +	int asym_cap_count = 0;
> +	int asym_cap_miss = 0;
> +
> +	/*
> +	 * Count how many unique CPU capacities this domain spans across
> +	 * (compare sched_domain CPUs mask with ones representing  available
> +	 * CPUs capacities). Take into account CPUs that might be offline:
> +	 * skip those.
> +	 */
> +	list_for_each_entry(entry, &asym_cap_list, link) {
> +		if (cpumask_intersects(sched_domain_span(sd),
> +				       cpu_capacity_span(entry)))
> +			++asym_cap_count;
> +		else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))

nit: `sd span, entry span` but `entry span, cpu_map`. Why not `cpu_map, entry span`?

> +			++asym_cap_miss;
> +	}
> +	/* No asymmetry detected */
> +	if (WARN_ON_ONCE(!asym_cap_count) || asym_cap_count == 1)
> +		goto leave;
> +
> +	sd_asym_flags |= SD_ASYM_CPUCAPACITY;
> +
> +	/*
> +	 * All the available capacities have been found within given sched
> +	 * domain: no misses reported.
> +	 */
> +	if (!asym_cap_miss)
> +		sd_asym_flags |= SD_ASYM_CPUCAPACITY_FULL;
> +
> +leave:
> +	return sd_asym_flags;
> +}

Everything looks good except that I like this more compact version better, proposed in:  

https://lkml.kernel.org/r/YK9ESqNEo+uacyMD@hirez.programming.kicks-ass.net

And passing `const struct cpumask *sd_span` instead of `struct
sched_domain *sd` into the function.


diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 77b73abbb9a4..0de8eebded9f 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1290,13 +1290,11 @@ static LIST_HEAD(asym_cap_list);
  * Provides sd_flags reflecting the asymmetry scope.
  */  
 static inline int 
-asym_cpu_capacity_classify(struct sched_domain *sd,
+asym_cpu_capacity_classify(const struct cpumask *sd_span,
                           const struct cpumask *cpu_map)
 {
        struct asym_cap_data *entry;
-       int sd_asym_flags = 0;
-       int asym_cap_count = 0;
-       int asym_cap_miss = 0;
+       int count = 0, miss = 0;
 
        /*
         * Count how many unique CPU capacities this domain spans across
@@ -1305,27 +1303,20 @@ asym_cpu_capacity_classify(struct sched_domain *sd,
         * skip those.
         */
        list_for_each_entry(entry, &asym_cap_list, link) {
-               if (cpumask_intersects(sched_domain_span(sd),
-                                      cpu_capacity_span(entry)))
-                       ++asym_cap_count;
-               else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))
-                       ++asym_cap_miss;
+               if (cpumask_intersects(sd_span, cpu_capacity_span(entry)))
+                       ++count;
+               else if (cpumask_intersects(cpu_map, cpu_capacity_span(entry)))
+                       ++miss;
        }
-       /* No asymmetry detected */
-       if (WARN_ON_ONCE(!asym_cap_count) || asym_cap_count == 1)
-               goto leave;
 
-       sd_asym_flags |= SD_ASYM_CPUCAPACITY;
+       if (WARN_ON_ONCE(!count) || count == 1) /* No asymmetry */
+               return 0;
 
-       /*
-        * All the available capacities have been found within given sched
-        * domain: no misses reported.
-        */
-       if (!asym_cap_miss)
-               sd_asym_flags |= SD_ASYM_CPUCAPACITY_FULL;
+       if (miss) /* Partial asymmetry */
+               return SD_ASYM_CPUCAPACITY;
 
-leave:
-       return sd_asym_flags;
+       /* Full asymmetry */
+       return SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL;
 }
 
 static inline void asym_cpu_capacity_update_data(int cpu)
@@ -1510,6 +1501,7 @@ sd_init(struct sched_domain_topology_level *tl,
        struct sd_data *sdd = &tl->data;
        struct sched_domain *sd = *per_cpu_ptr(sdd->sd, cpu);
        int sd_id, sd_weight, sd_flags = 0;
+       struct cpumask *sd_span;
 
 #ifdef CONFIG_NUMA
        /*
@@ -1557,10 +1549,11 @@ sd_init(struct sched_domain_topology_level *tl,
 #endif
        };
 
-       cpumask_and(sched_domain_span(sd), cpu_map, tl->mask(cpu));
-       sd_id = cpumask_first(sched_domain_span(sd));
+       sd_span = sched_domain_span(sd);
+       cpumask_and(sd_span, cpu_map, tl->mask(cpu));
+       sd_id = cpumask_first(sd_span);
 
-       sd->flags |= asym_cpu_capacity_classify(sd, cpu_map);
+       sd->flags |= asym_cpu_capacity_classify(sd_span, cpu_map);
 
        WARN_ONCE((sd->flags & (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY)) ==
                  (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY),
-- 
2.25.1


* Re: [PATCH v6 0/3] Rework CPU capacity asymmetry detection
  2021-05-27 15:38 [PATCH v6 0/3] Rework CPU capacity asymmetry detection Beata Michalska
                   ` (2 preceding siblings ...)
  2021-05-27 15:38 ` [PATCH v6 3/3] sched/doc: Update the CPU capacity asymmetry bits Beata Michalska
@ 2021-06-02 19:10 ` Dietmar Eggemann
  3 siblings, 0 replies; 9+ messages in thread
From: Dietmar Eggemann @ 2021-06-02 19:10 UTC (permalink / raw)
  To: Beata Michalska, linux-kernel
  Cc: peterz, mingo, juri.lelli, vincent.guittot, valentin.schneider,
	corbet, rdunlap, linux-doc

On 27/05/2021 17:38, Beata Michalska wrote:

[...]

> Beata Michalska (3):
>   sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag
>   sched/topology: Rework CPU capacity asymmetry detection
>   sched/doc: Update the CPU capacity asymmetry bits
> 
>  Documentation/scheduler/sched-capacity.rst |   6 +-
>  Documentation/scheduler/sched-energy.rst   |   2 +-
>  include/linux/sched/sd_flags.h             |  10 ++
>  kernel/sched/topology.c                    | 194 +++++++++++++--------
>  4 files changed, 133 insertions(+), 79 deletions(-)

Looks good to me, even though I would like to see a more compact version
of asym_cpu_capacity_classify(). Details in my response to [PATCH v6 2/3].

Did some level of testing myself and wasn't able to break it.

Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>


* Re: [PATCH v6 2/3] sched/topology: Rework CPU capacity asymmetry detection
  2021-06-02 19:09   ` Dietmar Eggemann
@ 2021-06-02 19:54     ` Beata Michalska
  0 siblings, 0 replies; 9+ messages in thread
From: Beata Michalska @ 2021-06-02 19:54 UTC (permalink / raw)
  To: Dietmar Eggemann
  Cc: linux-kernel, peterz, mingo, juri.lelli, vincent.guittot,
	valentin.schneider, corbet, rdunlap, linux-doc

On Wed, Jun 02, 2021 at 09:09:54PM +0200, Dietmar Eggemann wrote:
> On 27/05/2021 17:38, Beata Michalska wrote:
> 
> [...]
> 
> > +/*
> > + * Verify whether there is any CPU capacity asymmetry in a given sched domain.
> > + * Provides sd_flags reflecting the asymmetry scope.
> > + */
> > +static inline int
> > +asym_cpu_capacity_classify(struct sched_domain *sd,
> > +			   const struct cpumask *cpu_map)
> > +{
> > +	struct asym_cap_data *entry;
> > +	int sd_asym_flags = 0;
> > +	int asym_cap_count = 0;
> > +	int asym_cap_miss = 0;
> > +
> > +	/*
> > +	 * Count how many unique CPU capacities this domain spans across
> > +	 * (compare sched_domain CPUs mask with ones representing  available
> > +	 * CPUs capacities). Take into account CPUs that might be offline:
> > +	 * skip those.
> > +	 */
> > +	list_for_each_entry(entry, &asym_cap_list, link) {
> > +		if (cpumask_intersects(sched_domain_span(sd),
> > +				       cpu_capacity_span(entry)))
> > +			++asym_cap_count;
> > +		else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))
> 
> nit: `sd span, entry span` but `entry span, cpu_map`. Why not `cpu_map, entry span`?
>
Cannot recall any reason for that.

> > +			++asym_cap_miss;
> > +	}
> > +	/* No asymmetry detected */
> > +	if (WARN_ON_ONCE(!asym_cap_count) || asym_cap_count == 1)
> > +		goto leave;
> > +
> > +	sd_asym_flags |= SD_ASYM_CPUCAPACITY;
> > +
> > +	/*
> > +	 * All the available capacities have been found within given sched
> > +	 * domain: no misses reported.
> > +	 */
> > +	if (!asym_cap_miss)
> > +		sd_asym_flags |= SD_ASYM_CPUCAPACITY_FULL;
> > +
> > +leave:
> > +	return sd_asym_flags;
> > +}
> 
> Everything looks good except that I like this more compact version better, proposed in:  
> 
> https://lkml.kernel.org/r/YK9ESqNEo+uacyMD@hirez.programming.kicks-ass.net
> 
> And passing `const struct cpumask *sd_span` instead of `struct
> sched_domain *sd` into the function.
>
I do understand the parameter argument, but honestly I don't see much
difference between the current naming and switching the single return for
asymmetric topologies to two return statements; still, if that is the more
preferred/readable version, I do not mind changing that as well.

Thanks for the review.

---
BR
B.

> 
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 77b73abbb9a4..0de8eebded9f 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1290,13 +1290,11 @@ static LIST_HEAD(asym_cap_list);
>   * Provides sd_flags reflecting the asymmetry scope.
>   */  
>  static inline int 
> -asym_cpu_capacity_classify(struct sched_domain *sd,
> +asym_cpu_capacity_classify(const struct cpumask *sd_span,
>                            const struct cpumask *cpu_map)
>  {
>         struct asym_cap_data *entry;
> -       int sd_asym_flags = 0;
> -       int asym_cap_count = 0;
> -       int asym_cap_miss = 0;
> +       int count = 0, miss = 0;
>  
>         /*
>          * Count how many unique CPU capacities this domain spans across
> @@ -1305,27 +1303,20 @@ asym_cpu_capacity_classify(struct sched_domain *sd,
>          * skip those.
>          */
>         list_for_each_entry(entry, &asym_cap_list, link) {
> -               if (cpumask_intersects(sched_domain_span(sd),
> -                                      cpu_capacity_span(entry)))
> -                       ++asym_cap_count;
> -               else if (cpumask_intersects(cpu_capacity_span(entry), cpu_map))
> -                       ++asym_cap_miss;
> +               if (cpumask_intersects(sd_span, cpu_capacity_span(entry)))
> +                       ++count;
> +               else if (cpumask_intersects(cpu_map, cpu_capacity_span(entry)))
> +                       ++miss;
>         }
> -       /* No asymmetry detected */
> -       if (WARN_ON_ONCE(!asym_cap_count) || asym_cap_count == 1)
> -               goto leave;
>  
> -       sd_asym_flags |= SD_ASYM_CPUCAPACITY;
> +       if (WARN_ON_ONCE(!count) || count == 1) /* No asymmetry */
> +               return 0;
>  
> -       /*
> -        * All the available capacities have been found within given sched
> -        * domain: no misses reported.
> -        */
> -       if (!asym_cap_miss)
> -               sd_asym_flags |= SD_ASYM_CPUCAPACITY_FULL;
> +       if (miss) /* Partial asymmetry */
> +               return SD_ASYM_CPUCAPACITY;
>  
> -leave:
> -       return sd_asym_flags;
> +       /* Full asymmetry */
> +       return SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL;
>  }
>  
>  static inline void asym_cpu_capacity_update_data(int cpu)
> @@ -1510,6 +1501,7 @@ sd_init(struct sched_domain_topology_level *tl,
>         struct sd_data *sdd = &tl->data;
>         struct sched_domain *sd = *per_cpu_ptr(sdd->sd, cpu);
>         int sd_id, sd_weight, sd_flags = 0;
> +       struct cpumask *sd_span;
>  
>  #ifdef CONFIG_NUMA
>         /*
> @@ -1557,10 +1549,11 @@ sd_init(struct sched_domain_topology_level *tl,
>  #endif
>         };
>  
> -       cpumask_and(sched_domain_span(sd), cpu_map, tl->mask(cpu));
> -       sd_id = cpumask_first(sched_domain_span(sd));
> +       sd_span = sched_domain_span(sd);
> +       cpumask_and(sd_span, cpu_map, tl->mask(cpu));
> +       sd_id = cpumask_first(sd_span);
>  
> -       sd->flags |= asym_cpu_capacity_classify(sd, cpu_map);
> +       sd->flags |= asym_cpu_capacity_classify(sd_span, cpu_map);
>  
>         WARN_ONCE((sd->flags & (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY)) ==
>                   (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY),
> -- 
> 2.25.1

