linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically
@ 2018-07-20 13:32 Morten Rasmussen
  2018-07-20 13:32 ` [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection Morten Rasmussen
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Morten Rasmussen @ 2018-07-20 13:32 UTC (permalink / raw)
  To: peterz, mingo
  Cc: valentin.schneider, dietmar.eggemann, vincent.guittot,
	linux-kernel, linux-arm-kernel, Morten Rasmussen

The SD_ASYM_CPUCAPACITY flag has been around for some time now with no code to
actually set it. Android has carried patches to do this out-of-tree in the
meantime. The flag is meant to indicate cpu capacity asymmetry and is set at
the topology level where the sched_domain spans all available cpu capacity in
the system, i.e. all core types are visible, for any cpu in the system.

The flag was merged as a topology flag, meaning that the architecture had to
provide the flag explicitly. However, when cpusets split the system into
multiple root_domains, the flag can't be set without knowledge of the cpuset
configuration. Rather than exposing cpusets to architecture code, this patch
set moves the responsibility for setting the flag to generic topology code,
which is simpler and makes the code architecture agnostic.
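
As an illustration (a hypothetical 4+4 big.LITTLE layout, not taken from
these patches): with four little cpus of capacity 512 and four big cpus of
capacity 1024 sitting in separate MC domains, each MC domain looks symmetric
in isolation; only the DIE level spans both core types, so that is where the
flag is set:

	MC:  [0 1 2 3]  [4 5 6 7]	<- symmetric within each domain
	DIE: [0 1 2 3 4 5 6 7]		<- SD_ASYM_CPUCAPACITY set here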

Morten Rasmussen (4):
  sched/topology: SD_ASYM_CPUCAPACITY flag detection
  drivers/base/arch_topology: Rebuild sched_domain hierarchy when
    capacities change
  arch/arm64: Rebuild sched_domain hierarchy when cpu capacity changes
  arch/arm: Rebuild sched_domain hierarchy when cpu capacity changes

 arch/arm/include/asm/topology.h   |  3 ++
 arch/arm64/include/asm/topology.h |  3 ++
 drivers/base/arch_topology.c      | 26 +++++++++++++
 include/linux/arch_topology.h     |  1 +
 include/linux/sched/topology.h    |  2 +-
 kernel/sched/topology.c           | 81 ++++++++++++++++++++++++++++++++++++---
 6 files changed, 109 insertions(+), 7 deletions(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection
  2018-07-20 13:32 [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically Morten Rasmussen
@ 2018-07-20 13:32 ` Morten Rasmussen
  2018-07-23 13:25   ` Qais Yousef
                     ` (2 more replies)
  2018-07-20 13:32 ` [PATCH 2/4] drivers/base/arch_topology: Rebuild sched_domain hierarchy when capacities change Morten Rasmussen
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 17+ messages in thread
From: Morten Rasmussen @ 2018-07-20 13:32 UTC (permalink / raw)
  To: peterz, mingo
  Cc: valentin.schneider, dietmar.eggemann, vincent.guittot,
	linux-kernel, linux-arm-kernel, Morten Rasmussen

The SD_ASYM_CPUCAPACITY sched_domain flag is supposed to mark the
sched_domain in the hierarchy where all cpu capacities are visible for
any cpu's point of view on asymmetric cpu capacity systems. The
scheduler can then take to take capacity asymmetry into account when
balancing at this level. It also serves as an indicator for how wide
task placement heuristics have to search to consider all available cpu
capacities as asymmetric systems might often appear symmetric at
smallest level(s) of the sched_domain hierarchy.

The flag has been around for a while but has so far only been set by
out-of-tree code in Android kernels. One solution is to let each
architecture provide the flag through a custom sched_domain topology
array and associated mask and flag functions. However,
SD_ASYM_CPUCAPACITY is special in the sense that it depends on the
capacity and presence of all cpus in the system, i.e. when hotplugging
all cpus out except those with one particular cpu capacity the flag
should disappear even if the sched_domains don't collapse. Similarly,
the flag is affected by cpusets where load-balancing is turned off.
Detecting when the flag should be set therefore depends not only on
topology information but also on the cpuset configuration and hotplug
state. The arch code doesn't have easy access to the cpuset
configuration.

Instead, this patch implements the flag detection in generic code where
cpusets and hotplug state are already taken care of. All the arch is
responsible for is to implement arch_scale_cpu_capacity() and force a
full rebuild of the sched_domain hierarchy if capacities are updated,
e.g. later in the boot process when cpufreq has initialized.
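
A minimal sketch of what the arch side then looks like (illustrative only;
the per-cpu variable and where its values come from are assumptions, not
part of this patch):

	static DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;

	/* consumed by the generic flag detection below */
	unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
	{
		return per_cpu(cpu_scale, cpu);
	}

	/* once capacities have settled, e.g. after cpufreq init: */
	rebuild_sched_domains();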

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 include/linux/sched/topology.h |  2 +-
 kernel/sched/topology.c        | 81 ++++++++++++++++++++++++++++++++++++++----
 2 files changed, 76 insertions(+), 7 deletions(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 26347741ba50..4fe2e49ab13b 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -23,7 +23,7 @@
 #define SD_BALANCE_FORK		0x0008	/* Balance on fork, clone */
 #define SD_BALANCE_WAKE		0x0010  /* Balance on wakeup */
 #define SD_WAKE_AFFINE		0x0020	/* Wake task to waking CPU */
-#define SD_ASYM_CPUCAPACITY	0x0040  /* Groups have different max cpu capacities */
+#define SD_ASYM_CPUCAPACITY	0x0040  /* Domain members have different cpu capacities */
 #define SD_SHARE_CPUCAPACITY	0x0080	/* Domain members share cpu capacity */
 #define SD_SHARE_POWERDOMAIN	0x0100	/* Domain members share power domain */
 #define SD_SHARE_PKG_RESOURCES	0x0200	/* Domain members share cpu pkg resources */
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 05a831427bc7..b8f41d557612 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1061,7 +1061,6 @@ static struct cpumask		***sched_domains_numa_masks;
  *   SD_SHARE_PKG_RESOURCES - describes shared caches
  *   SD_NUMA                - describes NUMA topologies
  *   SD_SHARE_POWERDOMAIN   - describes shared power domain
- *   SD_ASYM_CPUCAPACITY    - describes mixed capacity topologies
  *
  * Odd one out, which beside describing the topology has a quirk also
  * prescribes the desired behaviour that goes along with it:
@@ -1073,13 +1072,12 @@ static struct cpumask		***sched_domains_numa_masks;
 	 SD_SHARE_PKG_RESOURCES |	\
 	 SD_NUMA		|	\
 	 SD_ASYM_PACKING	|	\
-	 SD_ASYM_CPUCAPACITY	|	\
 	 SD_SHARE_POWERDOMAIN)
 
 static struct sched_domain *
 sd_init(struct sched_domain_topology_level *tl,
 	const struct cpumask *cpu_map,
-	struct sched_domain *child, int cpu)
+	struct sched_domain *child, int dflags, int cpu)
 {
 	struct sd_data *sdd = &tl->data;
 	struct sched_domain *sd = *per_cpu_ptr(sdd->sd, cpu);
@@ -1100,6 +1098,9 @@ sd_init(struct sched_domain_topology_level *tl,
 			"wrong sd_flags in topology description\n"))
 		sd_flags &= ~TOPOLOGY_SD_FLAGS;
 
+	/* Apply detected topology flags */
+	sd_flags |= dflags;
+
 	*sd = (struct sched_domain){
 		.min_interval		= sd_weight,
 		.max_interval		= 2*sd_weight,
@@ -1607,9 +1608,9 @@ static void __sdt_free(const struct cpumask *cpu_map)
 
 static struct sched_domain *build_sched_domain(struct sched_domain_topology_level *tl,
 		const struct cpumask *cpu_map, struct sched_domain_attr *attr,
-		struct sched_domain *child, int cpu)
+		struct sched_domain *child, int dflags, int cpu)
 {
-	struct sched_domain *sd = sd_init(tl, cpu_map, child, cpu);
+	struct sched_domain *sd = sd_init(tl, cpu_map, child, dflags, cpu);
 
 	if (child) {
 		sd->level = child->level + 1;
@@ -1636,6 +1637,65 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
 }
 
 /*
+ * Find the sched_domain_topology_level where all cpu capacities are visible
+ * for all cpus.
+ */
+static struct sched_domain_topology_level
+*asym_cpu_capacity_level(const struct cpumask *cpu_map)
+{
+	int i, j, asym_level = 0;
+	bool asym = false;
+	struct sched_domain_topology_level *tl, *asym_tl = NULL;
+	unsigned long cap;
+
+	/* Is there any asymmetry? */
+	cap = arch_scale_cpu_capacity(NULL, cpumask_first(cpu_map));
+
+	for_each_cpu(i, cpu_map) {
+		if (arch_scale_cpu_capacity(NULL, i) != cap) {
+			asym = true;
+			break;
+		}
+	}
+
+	if (!asym)
+		return NULL;
+
+	/*
+	 * Examine topology from all cpu's point of views to detect the lowest
+	 * sched_domain_topology_level where a highest capacity cpu is visible
+	 * to everyone.
+	 */
+	for_each_cpu(i, cpu_map) {
+		unsigned long max_capacity = arch_scale_cpu_capacity(NULL, i);
+		int tl_id = 0;
+
+		for_each_sd_topology(tl) {
+			if (tl_id < asym_level)
+				goto next_level;
+
+			for_each_cpu_and(j, tl->mask(i), cpu_map) {
+				unsigned long capacity;
+
+				capacity = arch_scale_cpu_capacity(NULL, j);
+
+				if (capacity <= max_capacity)
+					continue;
+
+				max_capacity = capacity;
+				asym_level = tl_id;
+				asym_tl = tl;
+			}
+next_level:
+			tl_id++;
+		}
+	}
+
+	return asym_tl;
+}
+
+
+/*
  * Build sched domains for a given set of CPUs and attach the sched domains
  * to the individual CPUs
  */
@@ -1647,18 +1707,27 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 	struct s_data d;
 	struct rq *rq = NULL;
 	int i, ret = -ENOMEM;
+	struct sched_domain_topology_level *tl_asym;
 
 	alloc_state = __visit_domain_allocation_hell(&d, cpu_map);
 	if (alloc_state != sa_rootdomain)
 		goto error;
 
+	tl_asym = asym_cpu_capacity_level(cpu_map);
+
 	/* Set up domains for CPUs specified by the cpu_map: */
 	for_each_cpu(i, cpu_map) {
 		struct sched_domain_topology_level *tl;
 
 		sd = NULL;
 		for_each_sd_topology(tl) {
-			sd = build_sched_domain(tl, cpu_map, attr, sd, i);
+			int dflags = 0;
+
+			if (tl == tl_asym)
+				dflags |= SD_ASYM_CPUCAPACITY;
+
+			sd = build_sched_domain(tl, cpu_map, attr, sd, dflags, i);
+
 			if (tl == sched_domain_topology)
 				*per_cpu_ptr(d.sd, i) = sd;
 			if (tl->flags & SDTL_OVERLAP)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 2/4] drivers/base/arch_topology: Rebuild sched_domain hierarchy when capacities change
  2018-07-20 13:32 [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically Morten Rasmussen
  2018-07-20 13:32 ` [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection Morten Rasmussen
@ 2018-07-20 13:32 ` Morten Rasmussen
  2018-09-10 10:11   ` [tip:sched/core] sched/topology, drivers/base/arch_topology: Rebuild the " tip-bot for Morten Rasmussen
  2018-07-20 13:32 ` [PATCH 3/4] arch/arm64: Rebuild sched_domain hierarchy when cpu capacity changes Morten Rasmussen
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Morten Rasmussen @ 2018-07-20 13:32 UTC (permalink / raw)
  To: peterz, mingo
  Cc: valentin.schneider, dietmar.eggemann, vincent.guittot,
	linux-kernel, linux-arm-kernel, Morten Rasmussen,
	Greg Kroah-Hartman

The setting of SD_ASYM_CPUCAPACITY depends on the per-cpu capacities.
These might not have their final values when the hierarchy is initially
built as the values depend on cpufreq being initialized or on values set
through sysfs. To ensure that the flags are set correctly we
need to rebuild the sched_domain hierarchy whenever the reported per-cpu
capacity (arch_scale_cpu_capacity()) changes.

This patch ensures that a full sched_domain rebuild happens when cpu
capacity changes occur.
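
For context, a simplified view of the consumer side (abridged from
partition_sched_domains() in kernel/sched/topology.c):

	/* non-zero means the CPU topology changed: don't reuse old domains */
	new_topology = arch_update_cpu_topology();

With new_topology set, none of the existing sched_domains are considered a
match, so the whole hierarchy is torn down and rebuilt with the flag
re-detected.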

cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 drivers/base/arch_topology.c  | 26 ++++++++++++++++++++++++++
 include/linux/arch_topology.h |  1 +
 2 files changed, 27 insertions(+)

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index e7cb0c6ade81..edfcf8d982e4 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -15,6 +15,7 @@
 #include <linux/slab.h>
 #include <linux/string.h>
 #include <linux/sched/topology.h>
+#include <linux/cpuset.h>
 
 DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;
 
@@ -47,6 +48,9 @@ static ssize_t cpu_capacity_show(struct device *dev,
 	return sprintf(buf, "%lu\n", topology_get_cpu_scale(NULL, cpu->dev.id));
 }
 
+static void update_topology_flags_workfn(struct work_struct *work);
+static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
+
 static ssize_t cpu_capacity_store(struct device *dev,
 				  struct device_attribute *attr,
 				  const char *buf,
@@ -72,6 +76,8 @@ static ssize_t cpu_capacity_store(struct device *dev,
 		topology_set_cpu_scale(i, new_capacity);
 	mutex_unlock(&cpu_scale_mutex);
 
+	schedule_work(&update_topology_flags_work);
+
 	return count;
 }
 
@@ -96,6 +102,25 @@ static int register_cpu_capacity_sysctl(void)
 }
 subsys_initcall(register_cpu_capacity_sysctl);
 
+static int update_topology;
+
+int topology_update_cpu_topology(void)
+{
+	return update_topology;
+}
+
+/*
+ * Updating the sched_domains can't be done directly from cpufreq callbacks
+ * due to locking, so queue the work for later.
+ */
+static void update_topology_flags_workfn(struct work_struct *work)
+{
+	update_topology = 1;
+	rebuild_sched_domains();
+	pr_debug("sched_domain hierarchy rebuilt, flags updated\n");
+	update_topology = 0;
+}
+
 static u32 capacity_scale;
 static u32 *raw_capacity;
 
@@ -201,6 +226,7 @@ init_cpu_capacity_callback(struct notifier_block *nb,
 
 	if (cpumask_empty(cpus_to_visit)) {
 		topology_normalize_cpu_scale();
+		schedule_work(&update_topology_flags_work);
 		free_raw_capacity();
 		pr_debug("cpu_capacity: parsing done\n");
 		schedule_work(&parsing_done_work);
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 2b709416de05..d9bdc1a7f4e7 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -9,6 +9,7 @@
 #include <linux/percpu.h>
 
 void topology_normalize_cpu_scale(void);
+int topology_update_cpu_topology(void);
 
 struct device_node;
 bool topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu);
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 3/4] arch/arm64: Rebuild sched_domain hierarchy when cpu capacity changes
  2018-07-20 13:32 [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically Morten Rasmussen
  2018-07-20 13:32 ` [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection Morten Rasmussen
  2018-07-20 13:32 ` [PATCH 2/4] drivers/base/arch_topology: Rebuild sched_domain hierarchy when capacities change Morten Rasmussen
@ 2018-07-20 13:32 ` Morten Rasmussen
  2018-09-10 10:12   ` [tip:sched/core] sched/topology, arch/arm64: Rebuild the sched_domain hierarchy when the CPU " tip-bot for Morten Rasmussen
  2018-07-20 13:32 ` [PATCH 4/4] arch/arm: Rebuild sched_domain hierarchy when cpu " Morten Rasmussen
  2018-07-31 10:53 ` [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically Peter Zijlstra
  4 siblings, 1 reply; 17+ messages in thread
From: Morten Rasmussen @ 2018-07-20 13:32 UTC (permalink / raw)
  To: peterz, mingo
  Cc: valentin.schneider, dietmar.eggemann, vincent.guittot,
	linux-kernel, linux-arm-kernel, Morten Rasmussen,
	Catalin Marinas, Will Deacon

Asymmetric cpu capacity can not necessarily be determined accurately at
the time the initial sched_domain hierarchy is built during boot. It is
therefore necessary to be able to force a full rebuild of the hierarchy
later triggered by the arch_topology driver. A full rebuild requires the
arch-code to implement arch_update_cpu_topology() which isn't yet
implemented for arm64. This patch points the arm64 implementation to the
arch_topology driver to ensure that a full hierarchy rebuild happens when
needed.

cc: Catalin Marinas <catalin.marinas@arm.com>
cc: Will Deacon <will.deacon@arm.com>

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 arch/arm64/include/asm/topology.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
index df48212f767b..61ba09d48237 100644
--- a/arch/arm64/include/asm/topology.h
+++ b/arch/arm64/include/asm/topology.h
@@ -43,6 +43,9 @@ int pcibus_to_node(struct pci_bus *bus);
 /* Replace task scheduler's default cpu-invariant accounting */
 #define arch_scale_cpu_capacity topology_get_cpu_scale
 
+/* Enable topology flag updates */
+#define arch_update_cpu_topology topology_update_cpu_topology
+
 #include <asm-generic/topology.h>
 
 #endif /* _ASM_ARM_TOPOLOGY_H */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 4/4] arch/arm: Rebuild sched_domain hierarchy when cpu capacity changes
  2018-07-20 13:32 [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically Morten Rasmussen
                   ` (2 preceding siblings ...)
  2018-07-20 13:32 ` [PATCH 3/4] arch/arm64: Rebuild sched_domain hierarchy when cpu capacity changes Morten Rasmussen
@ 2018-07-20 13:32 ` Morten Rasmussen
  2018-09-10 10:12   ` [tip:sched/core] sched/topology, arch/arm: Rebuild sched_domain hierarchy when CPU " tip-bot for Morten Rasmussen
  2018-07-31 10:53 ` [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically Peter Zijlstra
  4 siblings, 1 reply; 17+ messages in thread
From: Morten Rasmussen @ 2018-07-20 13:32 UTC (permalink / raw)
  To: peterz, mingo
  Cc: valentin.schneider, dietmar.eggemann, vincent.guittot,
	linux-kernel, linux-arm-kernel, Morten Rasmussen, Russell King

Asymmetric cpu capacity can not necessarily be determined accurately at
the time the initial sched_domain hierarchy is built during boot. It is
therefore necessary to be able to force a full rebuild of the hierarchy
later triggered by the arch_topology driver. A full rebuild requires the
arch-code to implement arch_update_cpu_topology() which isn't yet
implemented for arm. This patch points the arm implementation to the
arch_topology driver to ensure that a full hierarchy rebuild happens when
needed.

cc: Russell King <linux@armlinux.org.uk>

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 arch/arm/include/asm/topology.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h
index 5d88d2f22b2c..2a786f54d8b8 100644
--- a/arch/arm/include/asm/topology.h
+++ b/arch/arm/include/asm/topology.h
@@ -33,6 +33,9 @@ const struct cpumask *cpu_coregroup_mask(int cpu);
 /* Replace task scheduler's default cpu-invariant accounting */
 #define arch_scale_cpu_capacity topology_get_cpu_scale
 
+/* Enable topology flag updates */
+#define arch_update_cpu_topology topology_update_cpu_topology
+
 #else
 
 static inline void init_cpu_topology(void) { }
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection
  2018-07-20 13:32 ` [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection Morten Rasmussen
@ 2018-07-23 13:25   ` Qais Yousef
  2018-07-23 15:27     ` Morten Rasmussen
  2018-09-10  8:21   ` Ingo Molnar
  2018-09-10 10:11   ` [tip:sched/core] sched/topology: Add " tip-bot for Morten Rasmussen
  2 siblings, 1 reply; 17+ messages in thread
From: Qais Yousef @ 2018-07-23 13:25 UTC (permalink / raw)
  To: Morten Rasmussen
  Cc: peterz, mingo, vincent.guittot, dietmar.eggemann, linux-kernel,
	valentin.schneider, linux-arm-kernel

Hi Morten

On 20/07/18 14:32, Morten Rasmussen wrote:
> The SD_ASYM_CPUCAPACITY sched_domain flag is supposed to mark the
> sched_domain in the hierarchy where all cpu capacities are visible for
> any cpu's point of view on asymmetric cpu capacity systems. The
> scheduler can then take to take capacity asymmetry into account when

Did you mean "s/take to take/try to take/"?

> balancing at this level. It also serves as an indicator for how wide
> task placement heuristics have to search to consider all available cpu
> capacities as asymmetric systems might often appear symmetric at
> smallest level(s) of the sched_domain hierarchy.
>
> The flag has been around for a while but has so far only been set by
> out-of-tree code in Android kernels. One solution is to let each
> architecture provide the flag through a custom sched_domain topology
> array and associated mask and flag functions. However,
> SD_ASYM_CPUCAPACITY is special in the sense that it depends on the
> capacity and presence of all cpus in the system, i.e. when hotplugging
> all cpus out except those with one particular cpu capacity the flag
> should disappear even if the sched_domains don't collapse. Similarly,
> the flag is affected by cpusets where load-balancing is turned off.
> Detecting when the flag should be set therefore depends not only on
> topology information but also on the cpuset configuration and hotplug
> state. The arch code doesn't have easy access to the cpuset
> configuration.
>
> Instead, this patch implements the flag detection in generic code where
> cpusets and hotplug state are already taken care of. All the arch is
> responsible for is to implement arch_scale_cpu_capacity() and force a
> full rebuild of the sched_domain hierarchy if capacities are updated,
> e.g. later in the boot process when cpufreq has initialized.
>
> cc: Ingo Molnar <mingo@redhat.com>
> cc: Peter Zijlstra <peterz@infradead.org>
>
> Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
> ---
>   include/linux/sched/topology.h |  2 +-
>   kernel/sched/topology.c        | 81 ++++++++++++++++++++++++++++++++++++++----
>   2 files changed, 76 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
> index 26347741ba50..4fe2e49ab13b 100644
> --- a/include/linux/sched/topology.h
> +++ b/include/linux/sched/topology.h
> @@ -23,7 +23,7 @@
>   #define SD_BALANCE_FORK		0x0008	/* Balance on fork, clone */
>   #define SD_BALANCE_WAKE		0x0010  /* Balance on wakeup */
>   #define SD_WAKE_AFFINE		0x0020	/* Wake task to waking CPU */
> -#define SD_ASYM_CPUCAPACITY	0x0040  /* Groups have different max cpu capacities */
> +#define SD_ASYM_CPUCAPACITY	0x0040  /* Domain members have different cpu capacities */
>   #define SD_SHARE_CPUCAPACITY	0x0080	/* Domain members share cpu capacity */
>   #define SD_SHARE_POWERDOMAIN	0x0100	/* Domain members share power domain */
>   #define SD_SHARE_PKG_RESOURCES	0x0200	/* Domain members share cpu pkg resources */
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 05a831427bc7..b8f41d557612 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1061,7 +1061,6 @@ static struct cpumask		***sched_domains_numa_masks;
>    *   SD_SHARE_PKG_RESOURCES - describes shared caches
>    *   SD_NUMA                - describes NUMA topologies
>    *   SD_SHARE_POWERDOMAIN   - describes shared power domain
> - *   SD_ASYM_CPUCAPACITY    - describes mixed capacity topologies
>    *
>    * Odd one out, which beside describing the topology has a quirk also
>    * prescribes the desired behaviour that goes along with it:
> @@ -1073,13 +1072,12 @@ static struct cpumask		***sched_domains_numa_masks;
>   	 SD_SHARE_PKG_RESOURCES |	\
>   	 SD_NUMA		|	\
>   	 SD_ASYM_PACKING	|	\
> -	 SD_ASYM_CPUCAPACITY	|	\
>   	 SD_SHARE_POWERDOMAIN)
>   
>   static struct sched_domain *
>   sd_init(struct sched_domain_topology_level *tl,
>   	const struct cpumask *cpu_map,
> -	struct sched_domain *child, int cpu)
> +	struct sched_domain *child, int dflags, int cpu)
>   {
>   	struct sd_data *sdd = &tl->data;
>   	struct sched_domain *sd = *per_cpu_ptr(sdd->sd, cpu);
> @@ -1100,6 +1098,9 @@ sd_init(struct sched_domain_topology_level *tl,
>   			"wrong sd_flags in topology description\n"))
>   		sd_flags &= ~TOPOLOGY_SD_FLAGS;
>   
> +	/* Apply detected topology flags */
> +	sd_flags |= dflags;
> +
>   	*sd = (struct sched_domain){
>   		.min_interval		= sd_weight,
>   		.max_interval		= 2*sd_weight,
> @@ -1607,9 +1608,9 @@ static void __sdt_free(const struct cpumask *cpu_map)
>   
>   static struct sched_domain *build_sched_domain(struct sched_domain_topology_level *tl,
>   		const struct cpumask *cpu_map, struct sched_domain_attr *attr,
> -		struct sched_domain *child, int cpu)
> +		struct sched_domain *child, int dflags, int cpu)
>   {
> -	struct sched_domain *sd = sd_init(tl, cpu_map, child, cpu);
> +	struct sched_domain *sd = sd_init(tl, cpu_map, child, dflags, cpu);
>   
>   	if (child) {
>   		sd->level = child->level + 1;
> @@ -1636,6 +1637,65 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
>   }
>   
>   /*
> + * Find the sched_domain_topology_level where all cpu capacities are visible
> + * for all cpus.
> + */
> +static struct sched_domain_topology_level
> +*asym_cpu_capacity_level(const struct cpumask *cpu_map)
> +{
> +	int i, j, asym_level = 0;
> +	bool asym = false;
> +	struct sched_domain_topology_level *tl, *asym_tl = NULL;
> +	unsigned long cap;
> +
> +	/* Is there any asymmetry? */
> +	cap = arch_scale_cpu_capacity(NULL, cpumask_first(cpu_map));
> +
> +	for_each_cpu(i, cpu_map) {
> +		if (arch_scale_cpu_capacity(NULL, i) != cap) {
> +			asym = true;
> +			break;
> +		}
> +	}
> +
> +	if (!asym)
> +		return NULL;
> +
> +	/*
> +	 * Examine topology from all cpu's point of views to detect the lowest
> +	 * sched_domain_topology_level where a highest capacity cpu is visible
> +	 * to everyone.
> +	 */
> +	for_each_cpu(i, cpu_map) {
> +		unsigned long max_capacity = arch_scale_cpu_capacity(NULL, i);
> +		int tl_id = 0;
> +
> +		for_each_sd_topology(tl) {
> +			if (tl_id < asym_level)
> +				goto next_level;
> +

I think if you increment and then continue here you might save the extra 
branch. I didn't look at any disassembly though to verify the generated 
code.
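
Roughly (an untested sketch of the suggested variant):

			for_each_sd_topology(tl) {
				if (tl_id < asym_level) {
					tl_id++;
					continue;
				}
				...
				tl_id++;
			}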

I wonder if we can introduce for_each_sd_topology_from(tl, 
starting_level) so that you can start searching from a provided level - 
which will make this skipping logic unnecessary? So the code will look like

             for_each_sd_topology_from(tl, asym_level) {
                 ...
             }
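
(One possible definition, by analogy with the existing for_each_sd_topology()
macro in kernel/sched/topology.c - an untested sketch:)

	#define for_each_sd_topology_from(tl, level) \
		for (tl = sched_domain_topology + (level); tl->mask; tl++)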

> +			for_each_cpu_and(j, tl->mask(i), cpu_map) {
> +				unsigned long capacity;
> +
> +				capacity = arch_scale_cpu_capacity(NULL, j);
> +
> +				if (capacity <= max_capacity)
> +					continue;
> +
> +				max_capacity = capacity;
> +				asym_level = tl_id;
> +				asym_tl = tl;
> +			}
> +next_level:
> +			tl_id++;
> +		}
> +	}
> +
> +	return asym_tl;
> +}
> +
> +
> +/*
>    * Build sched domains for a given set of CPUs and attach the sched domains
>    * to the individual CPUs
>    */
> @@ -1647,18 +1707,27 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
>   	struct s_data d;
>   	struct rq *rq = NULL;
>   	int i, ret = -ENOMEM;
> +	struct sched_domain_topology_level *tl_asym;
>   
>   	alloc_state = __visit_domain_allocation_hell(&d, cpu_map);
>   	if (alloc_state != sa_rootdomain)
>   		goto error;
>   
> +	tl_asym = asym_cpu_capacity_level(cpu_map);
> +

Or maybe this is not a hot path and we don't care that much about 
optimizing the search since you call it unconditionally here even for 
systems that don't care?

>   	/* Set up domains for CPUs specified by the cpu_map: */
>   	for_each_cpu(i, cpu_map) {
>   		struct sched_domain_topology_level *tl;
>   
>   		sd = NULL;
>   		for_each_sd_topology(tl) {
> -			sd = build_sched_domain(tl, cpu_map, attr, sd, i);
> +			int dflags = 0;
> +
> +			if (tl == tl_asym)
> +				dflags |= SD_ASYM_CPUCAPACITY;
> +
> +			sd = build_sched_domain(tl, cpu_map, attr, sd, dflags, i);
> +
>   			if (tl == sched_domain_topology)
>   				*per_cpu_ptr(d.sd, i) = sd;
>   			if (tl->flags & SDTL_OVERLAP)


-- 
Qais Yousef


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection
  2018-07-23 13:25   ` Qais Yousef
@ 2018-07-23 15:27     ` Morten Rasmussen
  2018-07-23 16:07       ` Qais Yousef
  0 siblings, 1 reply; 17+ messages in thread
From: Morten Rasmussen @ 2018-07-23 15:27 UTC (permalink / raw)
  To: Qais Yousef
  Cc: vincent.guittot, peterz, linux-kernel, dietmar.eggemann, mingo,
	valentin.schneider, linux-arm-kernel

On Mon, Jul 23, 2018 at 02:25:34PM +0100, Qais Yousef wrote:
> Hi Morten
> 
> On 20/07/18 14:32, Morten Rasmussen wrote:
> >The SD_ASYM_CPUCAPACITY sched_domain flag is supposed to mark the
> >sched_domain in the hierarchy where all cpu capacities are visible for
> >any cpu's point of view on asymmetric cpu capacity systems. The
> >scheduler can then take to take capacity asymmetry into account when
> 
> Did you mean "s/take to take/try to take/"?

Yes.


[...]

> >+	/*
> >+	 * Examine topology from all cpu's point of views to detect the lowest
> >+	 * sched_domain_topology_level where a highest capacity cpu is visible
> >+	 * to everyone.
> >+	 */
> >+	for_each_cpu(i, cpu_map) {
> >+		unsigned long max_capacity = arch_scale_cpu_capacity(NULL, i);
> >+		int tl_id = 0;
> >+
> >+		for_each_sd_topology(tl) {
> >+			if (tl_id < asym_level)
> >+				goto next_level;
> >+
> 
> I think if you increment and then continue here you might save the extra
> branch. I didn't look at any disassembly though to verify the generated
> code.
> 
> I wonder if we can introduce for_each_sd_topology_from(tl, starting_level)
> so that you can start searching from a provided level - which will make this
> skipping logic unnecessary? So the code will look like
> 
>             for_each_sd_topology_from(tl, asym_level) {
>                 ...
>             }

Both options would work. Increment+continue instead of goto would be
slightly less readable I think since we would still have the increment
at the end of the loop, but easy to do. Introducing
for_each_sd_topology_from() would improve things too, but I wonder if it
is worth it.

> >@@ -1647,18 +1707,27 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
> >  	struct s_data d;
> >  	struct rq *rq = NULL;
> >  	int i, ret = -ENOMEM;
> >+	struct sched_domain_topology_level *tl_asym;
> >  	alloc_state = __visit_domain_allocation_hell(&d, cpu_map);
> >  	if (alloc_state != sa_rootdomain)
> >  		goto error;
> >+	tl_asym = asym_cpu_capacity_level(cpu_map);
> >+
> 
> Or maybe this is not a hot path and we don't care that much about optimizing
> the search since you call it unconditionally here even for systems that
> don't care?

It does increase the cost of things like hotplug and repartitioning
of root_domains slightly, but I don't see how we can
avoid it if we want generic code to set this flag. If the costs are not
acceptable I think the only option is to make the detection architecture
specific.

In any case, AFAIK rebuilding the sched_domain hierarchy shouldn't be a
normal and common thing to do. If checking for the flag is not
acceptable on SMP-only architectures, I can move it under arch/arm[,64]
although it is not as clean.

Morten

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection
  2018-07-23 15:27     ` Morten Rasmussen
@ 2018-07-23 16:07       ` Qais Yousef
  2018-07-24  8:37         ` Morten Rasmussen
  0 siblings, 1 reply; 17+ messages in thread
From: Qais Yousef @ 2018-07-23 16:07 UTC (permalink / raw)
  To: Morten Rasmussen
  Cc: vincent.guittot, peterz, linux-kernel, dietmar.eggemann, mingo,
	valentin.schneider, linux-arm-kernel

On 23/07/18 16:27, Morten Rasmussen wrote:

[...]

>>> +	/*
>>> +	 * Examine topology from all cpu's point of views to detect the lowest
>>> +	 * sched_domain_topology_level where a highest capacity cpu is visible
>>> +	 * to everyone.
>>> +	 */
>>> +	for_each_cpu(i, cpu_map) {
>>> +		unsigned long max_capacity = arch_scale_cpu_capacity(NULL, i);
>>> +		int tl_id = 0;
>>> +
>>> +		for_each_sd_topology(tl) {
>>> +			if (tl_id < asym_level)
>>> +				goto next_level;
>>> +
>> I think if you increment and then continue here you might save the extra
>> branch. I didn't look at any disassembly though to verify the generated
>> code.
>>
>> I wonder if we can introduce for_each_sd_topology_from(tl, starting_level)
>> so that you can start searching from a provided level - which will make this
>> skipping logic unnecessary? So the code will look like
>>
>>              for_each_sd_topology_from(tl, asym_level) {
>>                  ...
>>              }
> Both options would work. Increment+continue instead of goto would be
> slightly less readable I think since we would still have the increment
> at the end of the loop, but easy to do. Introducing
> for_each_sd_topology_from() would improve things too, but I wonder if it
> is worth it.

I don't mind the current form to be honest. I agree it's not worth it if
it is called infrequently enough.

>>> @@ -1647,18 +1707,27 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
>>>   	struct s_data d;
>>>   	struct rq *rq = NULL;
>>>   	int i, ret = -ENOMEM;
>>> +	struct sched_domain_topology_level *tl_asym;
>>>   	alloc_state = __visit_domain_allocation_hell(&d, cpu_map);
>>>   	if (alloc_state != sa_rootdomain)
>>>   		goto error;
>>> +	tl_asym = asym_cpu_capacity_level(cpu_map);
>>> +
>> Or maybe this is not a hot path and we don't care that much about optimizing
>> the search since you call it unconditionally here even for systems that
>> don't care?
> It does increase the cost of things like hotplug and repartitioning
> of root_domains slightly, but I don't see how we can
> avoid it if we want generic code to set this flag. If the costs are not
> acceptable I think the only option is to make the detection architecture
> specific.

I think hotplug is already expensive and this overhead would be small in 
comparison. But this could be called when frequency changes if I 
understood correctly - this is the one I wasn't sure how 'hot' it could 
be. I wouldn't expect frequency changes at a very high rate because it's 
relatively expensive too.

> In any case, AFAIK rebuilding the sched_domain hierarchy shouldn't be a
> normal and common thing to do. If checking for the flag is not
> acceptable on SMP-only architectures, I can move it under arch/arm[,64]
> although it is not as clean.
>

I like the approach and I think it's nice and clean. If it actually 
appears in some profiles I think we have room to optimize it.

-- 
Qais Yousef


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection
  2018-07-23 16:07       ` Qais Yousef
@ 2018-07-24  8:37         ` Morten Rasmussen
  2018-07-24  8:59           ` Qais Yousef
  0 siblings, 1 reply; 17+ messages in thread
From: Morten Rasmussen @ 2018-07-24  8:37 UTC (permalink / raw)
  To: Qais Yousef
  Cc: vincent.guittot, peterz, linux-kernel, dietmar.eggemann, mingo,
	valentin.schneider, linux-arm-kernel

On Mon, Jul 23, 2018 at 05:07:50PM +0100, Qais Yousef wrote:
> On 23/07/18 16:27, Morten Rasmussen wrote:
> >It does increase the cost of things like hotplug and repartitioning
> >of root_domains slightly, but I don't see how we can
> >avoid it if we want generic code to set this flag. If the costs are not
> >acceptable I think the only option is to make the detection architecture
> >specific.
> 
> I think hotplug is already expensive and this overhead would be small in
> comparison. But this could be called when frequency changes if I understood
> correctly - this is the one I wasn't sure how 'hot' it could be. I wouldn't
> expect frequency changes at a very high rate because it's relatively
> expensive too.

A frequency change shouldn't lead to a flag change or a rebuild of the
sched_domain hierarchy. The situations where the hierarchy should be
rebuilt to update the flag are during boot as we only know the amount of
asymmetry once cpufreq has been initialized, when cpus are hotplugged
in/out, and when root_domains change due to cpuset reconfiguration. So
it should be a relatively rare event.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection
  2018-07-24  8:37         ` Morten Rasmussen
@ 2018-07-24  8:59           ` Qais Yousef
  0 siblings, 0 replies; 17+ messages in thread
From: Qais Yousef @ 2018-07-24  8:59 UTC (permalink / raw)
  To: Morten Rasmussen
  Cc: vincent.guittot, peterz, linux-kernel, dietmar.eggemann, mingo,
	valentin.schneider, linux-arm-kernel

On 24/07/18 09:37, Morten Rasmussen wrote:
> On Mon, Jul 23, 2018 at 05:07:50PM +0100, Qais Yousef wrote:
>> On 23/07/18 16:27, Morten Rasmussen wrote:
>>> It does increase the cost of things like hotplug and repartitioning
>>> of root_domains slightly, but I don't see how we can
>>> avoid it if we want generic code to set this flag. If the costs are not
>>> acceptable I think the only option is to make the detection architecture
>>> specific.
>> I think hotplug is already expensive and this overhead would be small in
>> comparison. But this could be called when frequency changes if I understood
>> correctly - this is the one I wasn't sure how 'hot' it could be. I wouldn't
>> expect frequency changes at a very high rate because it's relatively
>> expensive too.
> A frequency change shouldn't lead to a flag change or a rebuild of the
> sched_domain hierarchy. The situations where the hierarchy should be
> rebuilt to update the flag are during boot as we only know the amount of
> asymmetry once cpufreq has been initialized, when cpus are hotplugged
> in/out, and when root_domains change due to cpuset reconfiguration. So
> it should be a relatively rare event.


Ah OK I misunderstood that part then.

The series LGTM then.

-- 
Qais Yousef


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically
  2018-07-20 13:32 [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically Morten Rasmussen
                   ` (3 preceding siblings ...)
  2018-07-20 13:32 ` [PATCH 4/4] arch/arm: Rebuild sched_domain hierarchy when cpu " Morten Rasmussen
@ 2018-07-31 10:53 ` Peter Zijlstra
  4 siblings, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2018-07-31 10:53 UTC (permalink / raw)
  To: Morten Rasmussen
  Cc: mingo, valentin.schneider, dietmar.eggemann, vincent.guittot,
	linux-kernel, linux-arm-kernel

On Fri, Jul 20, 2018 at 02:32:30PM +0100, Morten Rasmussen wrote:
> The SD_ASYM_CPUCAPACITY flag has been around for some time now with no code to
> actually set it. Android has carried patches to do this out-of-tree in the
> meantime. The flag is meant to indicate cpu capacity asymmetry and is set at
> the topology level where the sched_domain spans all available cpu capacity in
> the system, i.e. all core types are visible, for any cpu in the system.
> 
> The flag was merged as a topology flag, meaning that the architecture had to
> provide the flag explicitly. However, when cpusets split the system into
> multiple root_domains, the flag can't be set without knowledge of the cpuset
> configuration. Rather than exposing cpusets to architecture code, this patch
> set moves the responsibility for setting the flag to generic topology code,
> which is simpler and makes the code architecture agnostic.
> 
> Morten Rasmussen (4):
>   sched/topology: SD_ASYM_CPUCAPACITY flag detection
>   drivers/base/arch_topology: Rebuild sched_domain hierarchy when
>     capacities change
>   arch/arm64: Rebuild sched_domain hierarchy when cpu capacity changes
>   arch/arm: Rebuild sched_domain hierarchy when cpu capacity changes
> 
>  arch/arm/include/asm/topology.h   |  3 ++
>  arch/arm64/include/asm/topology.h |  3 ++
>  drivers/base/arch_topology.c      | 26 +++++++++++++
>  include/linux/arch_topology.h     |  1 +
>  include/linux/sched/topology.h    |  2 +-
>  kernel/sched/topology.c           | 81 ++++++++++++++++++++++++++++++++++++---
>  6 files changed, 109 insertions(+), 7 deletions(-)

I've picked up the lot; Thanks!

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection
  2018-07-20 13:32 ` [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection Morten Rasmussen
  2018-07-23 13:25   ` Qais Yousef
@ 2018-09-10  8:21   ` Ingo Molnar
  2018-09-11 11:04     ` Morten Rasmussen
  2018-09-10 10:11   ` [tip:sched/core] sched/topology: Add " tip-bot for Morten Rasmussen
  2 siblings, 1 reply; 17+ messages in thread
From: Ingo Molnar @ 2018-09-10  8:21 UTC (permalink / raw)
  To: Morten Rasmussen
  Cc: peterz, mingo, valentin.schneider, dietmar.eggemann,
	vincent.guittot, linux-kernel, linux-arm-kernel


* Morten Rasmussen <morten.rasmussen@arm.com> wrote:

> The SD_ASYM_CPUCAPACITY sched_domain flag is supposed to mark the
> sched_domain in the hierarchy where all cpu capacities are visible for
> any cpu's point of view on asymmetric cpu capacity systems. The

>  /*
> + * Find the sched_domain_topology_level where all cpu capacities are visible
> + * for all cpus.
> + */

> +	/*
> +	 * Examine topology from all cpu's point of views to detect the lowest
> +	 * sched_domain_topology_level where a highest capacity cpu is visible
> +	 * to everyone.
> +	 */

>  #define SD_WAKE_AFFINE               0x0020  /* Wake task to waking CPU */
> -#define SD_ASYM_CPUCAPACITY  0x0040  /* Groups have different max cpu capacities */
> +#define SD_ASYM_CPUCAPACITY  0x0040  /* Domain members have different cpu capacities */

For future reference: *please* capitalize 'CPU' and 'CPUs' in future patches like the rest of 
the scheduler does.

You can see it spelled right above the new definition: 'waking CPU' ;-)

(I fixed this up in this patch.)

Thanks!

	Ingo

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [tip:sched/core] sched/topology: Add SD_ASYM_CPUCAPACITY flag detection
  2018-07-20 13:32 ` [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection Morten Rasmussen
  2018-07-23 13:25   ` Qais Yousef
  2018-09-10  8:21   ` Ingo Molnar
@ 2018-09-10 10:11   ` tip-bot for Morten Rasmussen
  2 siblings, 0 replies; 17+ messages in thread
From: tip-bot for Morten Rasmussen @ 2018-09-10 10:11 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, linux-kernel, morten.rasmussen, hpa, torvalds, peterz, mingo

Commit-ID:  05484e0984487d42e97c417cbb0697fa9d16e7e9
Gitweb:     https://git.kernel.org/tip/05484e0984487d42e97c417cbb0697fa9d16e7e9
Author:     Morten Rasmussen <morten.rasmussen@arm.com>
AuthorDate: Fri, 20 Jul 2018 14:32:31 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 10 Sep 2018 11:05:45 +0200

sched/topology: Add SD_ASYM_CPUCAPACITY flag detection

The SD_ASYM_CPUCAPACITY sched_domain flag is supposed to mark the
sched_domain in the hierarchy where all CPU capacities are visible for
any CPU's point of view on asymmetric CPU capacity systems. The
scheduler can then take to take capacity asymmetry into account when
balancing at this level. It also serves as an indicator for how wide
task placement heuristics have to search to consider all available CPU
capacities as asymmetric systems might often appear symmetric at
smallest level(s) of the sched_domain hierarchy.

The flag has been around for a while but has so far only been set by
out-of-tree code in Android kernels. One solution is to let each
architecture provide the flag through a custom sched_domain topology
array and associated mask and flag functions. However,
SD_ASYM_CPUCAPACITY is special in the sense that it depends on the
capacity and presence of all CPUs in the system, i.e. when hotplugging
all CPUs out except those with one particular CPU capacity the flag
should disappear even if the sched_domains don't collapse. Similarly,
the flag is affected by cpusets where load-balancing is turned off.
Detecting when the flag should be set therefore depends not only on
topology information but also on the cpuset configuration and hotplug
state. The arch code doesn't have easy access to the cpuset
configuration.

Instead, this patch implements the flag detection in generic code where
cpusets and hotplug state are already taken care of. All the arch is
responsible for is to implement arch_scale_cpu_capacity() and force a
full rebuild of the sched_domain hierarchy if capacities are updated,
e.g. later in the boot process when cpufreq has initialized.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: valentin.schneider@arm.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1532093554-30504-2-git-send-email-morten.rasmussen@arm.com
[ Fixed 'CPU' capitalization. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched/topology.h |  6 ++--
 kernel/sched/topology.c        | 81 ++++++++++++++++++++++++++++++++++++++----
 2 files changed, 78 insertions(+), 9 deletions(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 26347741ba50..6b9976180c1e 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -23,10 +23,10 @@
 #define SD_BALANCE_FORK		0x0008	/* Balance on fork, clone */
 #define SD_BALANCE_WAKE		0x0010  /* Balance on wakeup */
 #define SD_WAKE_AFFINE		0x0020	/* Wake task to waking CPU */
-#define SD_ASYM_CPUCAPACITY	0x0040  /* Groups have different max cpu capacities */
-#define SD_SHARE_CPUCAPACITY	0x0080	/* Domain members share cpu capacity */
+#define SD_ASYM_CPUCAPACITY	0x0040  /* Domain members have different CPU capacities */
+#define SD_SHARE_CPUCAPACITY	0x0080	/* Domain members share CPU capacity */
 #define SD_SHARE_POWERDOMAIN	0x0100	/* Domain members share power domain */
-#define SD_SHARE_PKG_RESOURCES	0x0200	/* Domain members share cpu pkg resources */
+#define SD_SHARE_PKG_RESOURCES	0x0200	/* Domain members share CPU pkg resources */
 #define SD_SERIALIZE		0x0400	/* Only a single load balancing instance */
 #define SD_ASYM_PACKING		0x0800  /* Place busy groups earlier in the domain */
 #define SD_PREFER_SIBLING	0x1000	/* Prefer to place tasks in a sibling domain */
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 505a41c42b96..5c4d583d53ee 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1061,7 +1061,6 @@ static struct cpumask		***sched_domains_numa_masks;
  *   SD_SHARE_PKG_RESOURCES - describes shared caches
  *   SD_NUMA                - describes NUMA topologies
  *   SD_SHARE_POWERDOMAIN   - describes shared power domain
- *   SD_ASYM_CPUCAPACITY    - describes mixed capacity topologies
  *
  * Odd one out, which beside describing the topology has a quirk also
  * prescribes the desired behaviour that goes along with it:
@@ -1073,13 +1072,12 @@ static struct cpumask		***sched_domains_numa_masks;
 	 SD_SHARE_PKG_RESOURCES |	\
 	 SD_NUMA		|	\
 	 SD_ASYM_PACKING	|	\
-	 SD_ASYM_CPUCAPACITY	|	\
 	 SD_SHARE_POWERDOMAIN)
 
 static struct sched_domain *
 sd_init(struct sched_domain_topology_level *tl,
 	const struct cpumask *cpu_map,
-	struct sched_domain *child, int cpu)
+	struct sched_domain *child, int dflags, int cpu)
 {
 	struct sd_data *sdd = &tl->data;
 	struct sched_domain *sd = *per_cpu_ptr(sdd->sd, cpu);
@@ -1100,6 +1098,9 @@ sd_init(struct sched_domain_topology_level *tl,
 			"wrong sd_flags in topology description\n"))
 		sd_flags &= ~TOPOLOGY_SD_FLAGS;
 
+	/* Apply detected topology flags */
+	sd_flags |= dflags;
+
 	*sd = (struct sched_domain){
 		.min_interval		= sd_weight,
 		.max_interval		= 2*sd_weight,
@@ -1604,9 +1605,9 @@ static void __sdt_free(const struct cpumask *cpu_map)
 
 static struct sched_domain *build_sched_domain(struct sched_domain_topology_level *tl,
 		const struct cpumask *cpu_map, struct sched_domain_attr *attr,
-		struct sched_domain *child, int cpu)
+		struct sched_domain *child, int dflags, int cpu)
 {
-	struct sched_domain *sd = sd_init(tl, cpu_map, child, cpu);
+	struct sched_domain *sd = sd_init(tl, cpu_map, child, dflags, cpu);
 
 	if (child) {
 		sd->level = child->level + 1;
@@ -1632,6 +1633,65 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
 	return sd;
 }
 
+/*
+ * Find the sched_domain_topology_level where all CPU capacities are visible
+ * for all CPUs.
+ */
+static struct sched_domain_topology_level
+*asym_cpu_capacity_level(const struct cpumask *cpu_map)
+{
+	int i, j, asym_level = 0;
+	bool asym = false;
+	struct sched_domain_topology_level *tl, *asym_tl = NULL;
+	unsigned long cap;
+
+	/* Is there any asymmetry? */
+	cap = arch_scale_cpu_capacity(NULL, cpumask_first(cpu_map));
+
+	for_each_cpu(i, cpu_map) {
+		if (arch_scale_cpu_capacity(NULL, i) != cap) {
+			asym = true;
+			break;
+		}
+	}
+
+	if (!asym)
+		return NULL;
+
+	/*
+	 * Examine topology from all CPU's point of views to detect the lowest
+	 * sched_domain_topology_level where a highest capacity CPU is visible
+	 * to everyone.
+	 */
+	for_each_cpu(i, cpu_map) {
+		unsigned long max_capacity = arch_scale_cpu_capacity(NULL, i);
+		int tl_id = 0;
+
+		for_each_sd_topology(tl) {
+			if (tl_id < asym_level)
+				goto next_level;
+
+			for_each_cpu_and(j, tl->mask(i), cpu_map) {
+				unsigned long capacity;
+
+				capacity = arch_scale_cpu_capacity(NULL, j);
+
+				if (capacity <= max_capacity)
+					continue;
+
+				max_capacity = capacity;
+				asym_level = tl_id;
+				asym_tl = tl;
+			}
+next_level:
+			tl_id++;
+		}
+	}
+
+	return asym_tl;
+}
+
+
 /*
  * Build sched domains for a given set of CPUs and attach the sched domains
  * to the individual CPUs
@@ -1644,18 +1704,27 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 	struct s_data d;
 	struct rq *rq = NULL;
 	int i, ret = -ENOMEM;
+	struct sched_domain_topology_level *tl_asym;
 
 	alloc_state = __visit_domain_allocation_hell(&d, cpu_map);
 	if (alloc_state != sa_rootdomain)
 		goto error;
 
+	tl_asym = asym_cpu_capacity_level(cpu_map);
+
 	/* Set up domains for CPUs specified by the cpu_map: */
 	for_each_cpu(i, cpu_map) {
 		struct sched_domain_topology_level *tl;
 
 		sd = NULL;
 		for_each_sd_topology(tl) {
-			sd = build_sched_domain(tl, cpu_map, attr, sd, i);
+			int dflags = 0;
+
+			if (tl == tl_asym)
+				dflags |= SD_ASYM_CPUCAPACITY;
+
+			sd = build_sched_domain(tl, cpu_map, attr, sd, dflags, i);
+
 			if (tl == sched_domain_topology)
 				*per_cpu_ptr(d.sd, i) = sd;
 			if (tl->flags & SDTL_OVERLAP)

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [tip:sched/core] sched/topology, drivers/base/arch_topology: Rebuild the sched_domain hierarchy when capacities change
  2018-07-20 13:32 ` [PATCH 2/4] drivers/base/arch_topology: Rebuild sched_domain hierarchy when capacities change Morten Rasmussen
@ 2018-09-10 10:11   ` tip-bot for Morten Rasmussen
  0 siblings, 0 replies; 17+ messages in thread
From: tip-bot for Morten Rasmussen @ 2018-09-10 10:11 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, torvalds, linux-kernel, mingo, morten.rasmussen, tglx,
	gregkh, peterz

Commit-ID:  bb1fbdd3c3fd12b612c7d8cdf13bd6bfeebdefa3
Gitweb:     https://git.kernel.org/tip/bb1fbdd3c3fd12b612c7d8cdf13bd6bfeebdefa3
Author:     Morten Rasmussen <morten.rasmussen@arm.com>
AuthorDate: Fri, 20 Jul 2018 14:32:32 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 10 Sep 2018 11:05:47 +0200

sched/topology, drivers/base/arch_topology: Rebuild the sched_domain hierarchy when capacities change

The setting of SD_ASYM_CPUCAPACITY depends on the per-CPU capacities.
These might not have their final values when the hierarchy is initially
built as the values depend on cpufreq being initialized or on values set
through sysfs. To ensure that the flags are set correctly we
need to rebuild the sched_domain hierarchy whenever the reported per-CPU
capacity (arch_scale_cpu_capacity()) changes.

This patch ensures that a full sched_domain rebuild happens when CPU
capacity changes occur.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: valentin.schneider@arm.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1532093554-30504-3-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 drivers/base/arch_topology.c  | 26 ++++++++++++++++++++++++++
 include/linux/arch_topology.h |  1 +
 2 files changed, 27 insertions(+)

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index e7cb0c6ade81..edfcf8d982e4 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -15,6 +15,7 @@
 #include <linux/slab.h>
 #include <linux/string.h>
 #include <linux/sched/topology.h>
+#include <linux/cpuset.h>
 
 DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;
 
@@ -47,6 +48,9 @@ static ssize_t cpu_capacity_show(struct device *dev,
 	return sprintf(buf, "%lu\n", topology_get_cpu_scale(NULL, cpu->dev.id));
 }
 
+static void update_topology_flags_workfn(struct work_struct *work);
+static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
+
 static ssize_t cpu_capacity_store(struct device *dev,
 				  struct device_attribute *attr,
 				  const char *buf,
@@ -72,6 +76,8 @@ static ssize_t cpu_capacity_store(struct device *dev,
 		topology_set_cpu_scale(i, new_capacity);
 	mutex_unlock(&cpu_scale_mutex);
 
+	schedule_work(&update_topology_flags_work);
+
 	return count;
 }
 
@@ -96,6 +102,25 @@ static int register_cpu_capacity_sysctl(void)
 }
 subsys_initcall(register_cpu_capacity_sysctl);
 
+static int update_topology;
+
+int topology_update_cpu_topology(void)
+{
+	return update_topology;
+}
+
+/*
+ * Updating the sched_domains can't be done directly from cpufreq callbacks
+ * due to locking, so queue the work for later.
+ */
+static void update_topology_flags_workfn(struct work_struct *work)
+{
+	update_topology = 1;
+	rebuild_sched_domains();
+	pr_debug("sched_domain hierarchy rebuilt, flags updated\n");
+	update_topology = 0;
+}
+
 static u32 capacity_scale;
 static u32 *raw_capacity;
 
@@ -201,6 +226,7 @@ init_cpu_capacity_callback(struct notifier_block *nb,
 
 	if (cpumask_empty(cpus_to_visit)) {
 		topology_normalize_cpu_scale();
+		schedule_work(&update_topology_flags_work);
 		free_raw_capacity();
 		pr_debug("cpu_capacity: parsing done\n");
 		schedule_work(&parsing_done_work);
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 2b709416de05..d9bdc1a7f4e7 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -9,6 +9,7 @@
 #include <linux/percpu.h>
 
 void topology_normalize_cpu_scale(void);
+int topology_update_cpu_topology(void);
 
 struct device_node;
 bool topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu);

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [tip:sched/core] sched/topology, arch/arm64: Rebuild the sched_domain hierarchy when the CPU capacity changes
  2018-07-20 13:32 ` [PATCH 3/4] arch/arm64: Rebuild sched_domain hierarchy when cpu capacity changes Morten Rasmussen
@ 2018-09-10 10:12   ` tip-bot for Morten Rasmussen
  0 siblings, 0 replies; 17+ messages in thread
From: tip-bot for Morten Rasmussen @ 2018-09-10 10:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: torvalds, tglx, will.deacon, peterz, mingo, hpa, linux-kernel,
	morten.rasmussen, catalin.marinas

Commit-ID:  3ba09df4b8b6e3f01ed6381e8fb890840fd0bca3
Gitweb:     https://git.kernel.org/tip/3ba09df4b8b6e3f01ed6381e8fb890840fd0bca3
Author:     Morten Rasmussen <morten.rasmussen@arm.com>
AuthorDate: Fri, 20 Jul 2018 14:32:33 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 10 Sep 2018 11:05:47 +0200

sched/topology, arch/arm64: Rebuild the sched_domain hierarchy when the CPU capacity changes

Asymmetric CPU capacity can not necessarily be determined accurately at
the time the initial sched_domain hierarchy is built during boot. It is
therefore necessary to be able to force a full rebuild of the hierarchy
later triggered by the arch_topology driver. A full rebuild requires the
arch-code to implement arch_update_cpu_topology() which isn't yet
implemented for arm64. This patch points the arm64 implementation to the
arch_topology driver to ensure that a full hierarchy rebuild happens when
needed.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: dietmar.eggemann@arm.com
Cc: valentin.schneider@arm.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1532093554-30504-4-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/arm64/include/asm/topology.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
index 49a0fee4f89b..0524f2438649 100644
--- a/arch/arm64/include/asm/topology.h
+++ b/arch/arm64/include/asm/topology.h
@@ -45,6 +45,9 @@ int pcibus_to_node(struct pci_bus *bus);
 /* Replace task scheduler's default cpu-invariant accounting */
 #define arch_scale_cpu_capacity topology_get_cpu_scale
 
+/* Enable topology flag updates */
+#define arch_update_cpu_topology topology_update_cpu_topology
+
 #include <asm-generic/topology.h>
 
 #endif /* _ASM_ARM_TOPOLOGY_H */

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [tip:sched/core] sched/topology, arch/arm: Rebuild sched_domain hierarchy when CPU capacity changes
  2018-07-20 13:32 ` [PATCH 4/4] arch/arm: Rebuild sched_domain hierarchy when cpu " Morten Rasmussen
@ 2018-09-10 10:12   ` tip-bot for Morten Rasmussen
  0 siblings, 0 replies; 17+ messages in thread
From: tip-bot for Morten Rasmussen @ 2018-09-10 10:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux, hpa, tglx, mingo, morten.rasmussen, torvalds,
	linux-kernel, peterz

Commit-ID:  e1799a80a4f5a463f252b7325da8bb66dfd55471
Gitweb:     https://git.kernel.org/tip/e1799a80a4f5a463f252b7325da8bb66dfd55471
Author:     Morten Rasmussen <morten.rasmussen@arm.com>
AuthorDate: Fri, 20 Jul 2018 14:32:34 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 10 Sep 2018 11:05:48 +0200

sched/topology, arch/arm: Rebuild sched_domain hierarchy when CPU capacity changes

Asymmetric CPU capacity can not necessarily be determined accurately at
the time the initial sched_domain hierarchy is built during boot. It is
therefore necessary to be able to force a full rebuild of the hierarchy
later triggered by the arch_topology driver. A full rebuild requires the
arch-code to implement arch_update_cpu_topology() which isn't yet
implemented for arm. This patch points the arm implementation to the
arch_topology driver to ensure that a full hierarchy rebuild happens when
needed.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: valentin.schneider@arm.com
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1532093554-30504-5-git-send-email-morten.rasmussen@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/arm/include/asm/topology.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h
index 5d88d2f22b2c..2a786f54d8b8 100644
--- a/arch/arm/include/asm/topology.h
+++ b/arch/arm/include/asm/topology.h
@@ -33,6 +33,9 @@ const struct cpumask *cpu_coregroup_mask(int cpu);
 /* Replace task scheduler's default cpu-invariant accounting */
 #define arch_scale_cpu_capacity topology_get_cpu_scale
 
+/* Enable topology flag updates */
+#define arch_update_cpu_topology topology_update_cpu_topology
+
 #else
 
 static inline void init_cpu_topology(void) { }

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection
  2018-09-10  8:21   ` Ingo Molnar
@ 2018-09-11 11:04     ` Morten Rasmussen
  0 siblings, 0 replies; 17+ messages in thread
From: Morten Rasmussen @ 2018-09-11 11:04 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: peterz, mingo, valentin.schneider, dietmar.eggemann,
	vincent.guittot, linux-kernel, linux-arm-kernel

On Mon, Sep 10, 2018 at 10:21:11AM +0200, Ingo Molnar wrote:
> 
> * Morten Rasmussen <morten.rasmussen@arm.com> wrote:
> 
> > The SD_ASYM_CPUCAPACITY sched_domain flag is supposed to mark the
> > sched_domain in the hierarchy where all cpu capacities are visible for
> > any cpu's point of view on asymmetric cpu capacity systems. The
> 
> >  /*
> > + * Find the sched_domain_topology_level where all cpu capacities are visible
> > + * for all cpus.
> > + */
> 
> > +	/*
> > +	 * Examine topology from all cpu's point of views to detect the lowest
> > +	 * sched_domain_topology_level where a highest capacity cpu is visible
> > +	 * to everyone.
> > +	 */
> 
> >  #define SD_WAKE_AFFINE               0x0020  /* Wake task to waking CPU */
> > -#define SD_ASYM_CPUCAPACITY  0x0040  /* Groups have different max cpu capacities */
> > +#define SD_ASYM_CPUCAPACITY  0x0040  /* Domain members have different cpu capacities */
> 
> For future reference: *please* capitalize 'CPU' and 'CPUs' in future patches like the rest of 
> the scheduler does.
> 
> You can see it spelled right above the new definition: 'waking CPU' ;-)
> 
> (I fixed this up in this patch.)

Noted. Thanks for fixing up the patch.

Morten

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2018-09-11 11:04 UTC | newest]

Thread overview: 17+ messages
2018-07-20 13:32 [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically Morten Rasmussen
2018-07-20 13:32 ` [PATCH 1/4] sched/topology: SD_ASYM_CPUCAPACITY flag detection Morten Rasmussen
2018-07-23 13:25   ` Qais Yousef
2018-07-23 15:27     ` Morten Rasmussen
2018-07-23 16:07       ` Qais Yousef
2018-07-24  8:37         ` Morten Rasmussen
2018-07-24  8:59           ` Qais Yousef
2018-09-10  8:21   ` Ingo Molnar
2018-09-11 11:04     ` Morten Rasmussen
2018-09-10 10:11   ` [tip:sched/core] sched/topology: Add " tip-bot for Morten Rasmussen
2018-07-20 13:32 ` [PATCH 2/4] drivers/base/arch_topology: Rebuild sched_domain hierarchy when capacities change Morten Rasmussen
2018-09-10 10:11   ` [tip:sched/core] sched/topology, drivers/base/arch_topology: Rebuild the " tip-bot for Morten Rasmussen
2018-07-20 13:32 ` [PATCH 3/4] arch/arm64: Rebuild sched_domain hierarchy when cpu capacity changes Morten Rasmussen
2018-09-10 10:12   ` [tip:sched/core] sched/topology, arch/arm64: Rebuild the sched_domain hierarchy when the CPU " tip-bot for Morten Rasmussen
2018-07-20 13:32 ` [PATCH 4/4] arch/arm: Rebuild sched_domain hierarchy when cpu " Morten Rasmussen
2018-09-10 10:12   ` [tip:sched/core] sched/topology, arch/arm: Rebuild sched_domain hierarchy when CPU " tip-bot for Morten Rasmussen
2018-07-31 10:53 ` [PATCH 0/4] sched/topology: Set SD_ASYM_CPUCAPACITY flag automatically Peter Zijlstra
