* [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
@ 2012-01-09  8:56 Youquan Song
  2012-01-09 10:06 ` Peter Zijlstra
  2012-01-09 15:37 ` Greg KH
  0 siblings, 2 replies; 39+ messages in thread
From: Youquan Song @ 2012-01-09  8:56 UTC (permalink / raw)
  To: linux-kernel, mingo, a.p.zijlstra, tglx, hpa, akpm
  Cc: stable, suresh.b.siddha, arjan, len.brown, anhua.xu,
	chaohong.guo, Youquan Song, Youquan Song

sched_smt_power_savings is totally broken in the latest Linux and -tip trees.

When sched_smt_power_savings is set to 1, the scheduler tries to schedule processes
on the least number of hyper-threads per core possible. In other words,
the process load is distributed such that all the hyper-threads in a core and
all the cores within the same processor are busy before the load is distributed
to other hyper-threads and cores in another processor.

Tested on an Intel Xeon machine with 2 physical CPUs, each with 8 cores / 16
threads. Physical CPU 0 includes cpu[0~7] and cpu[16~23], while physical CPU 1
includes cpu[8~15] and cpu[24~31].
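
The sibling layout can be double-checked from the sysfs topology files; for
example, for cpu0 on this layout one would expect (illustrative output):

cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
0,16
cat /sys/devices/system/cpu/cpu0/topology/core_siblings_list
0-7,16-23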

At latest -tip tree:
echo 1 > /sys/devices/system/cpu/sched_smt_power_savings
./ebizzy -t 16 -S 100 &  sleep 10 ; cat /proc/sched_debug | grep -A 1 cpu# > tmp.log

cpu#0, 2693.564 MHz
  .nr_running                    : 0
--
cpu#1, 2693.564 MHz
  .nr_running                    : 1
--
cpu#2, 2693.564 MHz
  .nr_running                    : 0
--
cpu#3, 2693.564 MHz
  .nr_running                    : 1
--
cpu#4, 2693.564 MHz
  .nr_running                    : 1
--
cpu#5, 2693.564 MHz
  .nr_running                    : 1
--
cpu#6, 2693.564 MHz
  .nr_running                    : 1
--
cpu#7, 2693.564 MHz
  .nr_running                    : 1
--
cpu#8, 2693.564 MHz
  .nr_running                    : 1
--
cpu#9, 2693.564 MHz
  .nr_running                    : 1
--
cpu#10, 2693.564 MHz
  .nr_running                    : 1
--
cpu#11, 2693.564 MHz
  .nr_running                    : 0
--
cpu#12, 2693.564 MHz
  .nr_running                    : 1
--
cpu#13, 2693.564 MHz
  .nr_running                    : 1
--
cpu#14, 2693.564 MHz
  .nr_running                    : 0
--
cpu#15, 2693.564 MHz
  .nr_running                    : 0
--
cpu#16, 2693.564 MHz
  .nr_running                    : 0
--
cpu#17, 2693.564 MHz
  .nr_running                    : 0
--
cpu#18, 2693.564 MHz
  .nr_running                    : 1
--
cpu#19, 2693.564 MHz
  .nr_running                    : 0
--
cpu#20, 2693.564 MHz
  .nr_running                    : 0
--
cpu#21, 2693.564 MHz
  .nr_running                    : 0
--
cpu#22, 2693.564 MHz
  .nr_running                    : 0
--
cpu#23, 2693.564 MHz
  .nr_running                    : 0
--
cpu#24, 2693.564 MHz
  .nr_running                    : 0
--
cpu#25, 2693.564 MHz
  .nr_running                    : 1
--
cpu#26, 2693.564 MHz
  .nr_running                    : 1
--
cpu#27, 2693.564 MHz
  .nr_running                    : 1
--
cpu#28, 2693.564 MHz
  .nr_running                    : 0
--
cpu#29, 2693.564 MHz
  .nr_running                    : 0
--
cpu#30, 2693.564 MHz
  .nr_running                    : 1
--
cpu#31, 2693.564 MHz
  .nr_running                    : 1

From the above, we notice the 16 threads are distributed among the 2 physical CPUs.
After applying the patch, the 16 threads are distributed on only one physical CPU.
In this case, we observe a 30% power saving.
Following are the results after applying the patch:

cpu#0, 2693.384 MHz
  .nr_running                    : 1
--
cpu#1, 2693.384 MHz
  .nr_running                    : 1
--
cpu#2, 2693.384 MHz
  .nr_running                    : 1
--
cpu#3, 2693.384 MHz
  .nr_running                    : 1
--
cpu#4, 2693.384 MHz
  .nr_running                    : 1
--
cpu#5, 2693.384 MHz
  .nr_running                    : 1
--
cpu#6, 2693.384 MHz
  .nr_running                    : 1
--
cpu#7, 2693.384 MHz
  .nr_running                    : 1
--
cpu#8, 2693.384 MHz
  .nr_running                    : 0
--
cpu#9, 2693.384 MHz
  .nr_running                    : 0
--
cpu#10, 2693.384 MHz
  .nr_running                    : 0
--
cpu#11, 2693.384 MHz
  .nr_running                    : 0
--
cpu#12, 2693.384 MHz
  .nr_running                    : 0
--
cpu#13, 2693.384 MHz
  .nr_running                    : 0
--
cpu#14, 2693.384 MHz
  .nr_running                    : 0
--
cpu#15, 2693.384 MHz
  .nr_running                    : 0
--
cpu#16, 2693.384 MHz
  .nr_running                    : 1
--
cpu#17, 2693.384 MHz
  .nr_running                    : 1
--
cpu#18, 2693.384 MHz
  .nr_running                    : 1
--
cpu#19, 2693.384 MHz
  .nr_running                    : 1
--
cpu#20, 2693.384 MHz
  .nr_running                    : 1
--
cpu#21, 2693.384 MHz
  .nr_running                    : 1
--
cpu#22, 2693.384 MHz
  .nr_running                    : 1
--
cpu#23, 2693.384 MHz
  .nr_running                    : 1
--
cpu#24, 2693.384 MHz
  .nr_running                    : 0
--
cpu#25, 2693.384 MHz
  .nr_running                    : 0
--
cpu#26, 2693.384 MHz
  .nr_running                    : 0
--
cpu#27, 2693.384 MHz
  .nr_running                    : 0
--
cpu#28, 2693.384 MHz
  .nr_running                    : 0
--
cpu#29, 2693.384 MHz
  .nr_running                    : 0
--
cpu#30, 2693.384 MHz
  .nr_running                    : 0
--
cpu#31, 2693.384 MHz
  .nr_running                    : 1


This patch sets the SMT sibling power capability to SCHED_POWER_SCALE
(1024) when sched_smt_power_savings is set. So when power saving is possible
during scheduling, the scheduler will truly schedule processes as
sched_smt_power_savings intends.
 

Signed-off-by: Youquan Song <youquan.song@intel.com>
Tested-by: Anhua Xu <anhua.xu@intel.com>
---
 kernel/sched/fair.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a4d2b7a..5be1d43 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3715,6 +3715,9 @@ unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu)
 	unsigned long weight = sd->span_weight;
 	unsigned long smt_gain = sd->smt_gain;
 
+	if (sched_smt_power_savings)
+		return SCHED_POWER_SCALE;
+
 	smt_gain /= weight;
 
 	return smt_gain;
-- 
1.6.4.2
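
To see why this one-liner changes the balancing, consider the default numbers
(assuming the usual smt_gain of 1178 from SD_SIBLING_INIT): without the patch,
each SMT sibling reports a power of smt_gain / weight = 1178 / 2 = 589, so a
2-thread core totals 1178, which update_sg_lb_stats rounds to a group_capacity
of 1, and the balancer spreads the second task to another core. With the patch,
each sibling reports the full 1024, the core totals 2048, group_capacity
becomes 2, and both siblings are filled before the load spills over.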



* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09  8:56 [PATCH] x86,sched: Fix sched_smt_power_savings totally broken Youquan Song
@ 2012-01-09 10:06 ` Peter Zijlstra
  2012-01-09 10:28   ` Peter Zijlstra
  2012-01-10  0:14   ` [PATCH] x86,sched: Fix sched_smt_power_savings totally broken Youquan Song
  2012-01-09 15:37 ` Greg KH
  1 sibling, 2 replies; 39+ messages in thread
From: Peter Zijlstra @ 2012-01-09 10:06 UTC (permalink / raw)
  To: Youquan Song
  Cc: linux-kernel, mingo, tglx, hpa, akpm, stable, suresh.b.siddha,
	arjan, len.brown, anhua.xu, chaohong.guo, Youquan Song

On Mon, 2012-01-09 at 16:56 +0800, Youquan Song wrote:
> sched_smt_power_savings is totally broken in the latest Linux and -tip trees.

Yes it is.. also that knob should die! Like I've been saying for way too
long. I'm >< close to committing a patch removing all the power_saving
magic from the scheduler.

> When sched_smt_power_savings is set to 1, the scheduler tries to schedule processes
> on the least number of hyper-threads per core possible. In other words,
> the process load is distributed such that all the hyper-threads in a core and
> all the cores within the same processor are busy before the load is distributed
> to other hyper-threads and cores in another processor.

That's the most convoluted way I've seen that stated in a while. What
you're saying is that all threads (of a socket) should be used before
spilling over to another socket.

> This patch sets the SMT sibling power capability to SCHED_POWER_SCALE
> (1024) when sched_smt_power_savings is set. So when power saving is possible
> during scheduling, the scheduler will truly schedule processes as
> sched_smt_power_savings intends.
>  
> 
> Signed-off-by: Youquan Song <youquan.song@intel.com>
> Tested-by: Anhua Xu <anhua.xu@intel.com>
> ---
>  kernel/sched/fair.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a4d2b7a..5be1d43 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3715,6 +3715,9 @@ unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu)
>  	unsigned long weight = sd->span_weight;
>  	unsigned long smt_gain = sd->smt_gain;
>  
> +	if (sched_smt_power_savings)
> +		return SCHED_POWER_SCALE;
> +
>  	smt_gain /= weight;
>  
>  	return smt_gain;

Hell no, that's completely the wrong thing to do. I think you want to
frob at the group_capacity computation in update_sg_lb_stats.



* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 10:06 ` Peter Zijlstra
@ 2012-01-09 10:28   ` Peter Zijlstra
  2012-01-09 10:30     ` Peter Zijlstra
                       ` (3 more replies)
  2012-01-10  0:14   ` [PATCH] x86,sched: Fix sched_smt_power_savings totally broken Youquan Song
  1 sibling, 4 replies; 39+ messages in thread
From: Peter Zijlstra @ 2012-01-09 10:28 UTC (permalink / raw)
  To: Youquan Song
  Cc: linux-kernel, mingo, tglx, hpa, akpm, stable, suresh.b.siddha,
	arjan, len.brown, anhua.xu, chaohong.guo, Youquan Song,
	Paul Turner

On Mon, 2012-01-09 at 11:06 +0100, Peter Zijlstra wrote:
> I'm >< close to committing a patch removing all the power_saving
> magic from the scheduler. 

Unless someone steps up that says they care about this crap and are
willing to clean it up this goes in...

And that very much includes ripping out the current interface and
replacing it with something sane, the current setup of {0,1,2}x{0,1,2}
possible configuration settings is completely unacceptable, esp since
people seem to want to inflate that to 27 or worse.

---
 Documentation/ABI/testing/sysfs-devices-system-cpu |   25 ----
 Documentation/scheduler/sched-domains.txt          |    4 -
 arch/x86/kernel/smpboot.c                          |    3 +-
 drivers/base/cpu.c                                 |    4 -
 include/linux/cpu.h                                |    2 -
 include/linux/sched.h                              |   46 --------
 include/linux/topology.h                           |    4 -
 kernel/sched/core.c                                |   95 ----------------
 kernel/sched/fair.c                                |  116 +-------------------
 tools/power/cpupower/man/cpupower-set.1            |    9 --
 tools/power/cpupower/utils/helpers/sysfs.c         |   35 +------
 11 files changed, 4 insertions(+), 339 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index e7be75b..5dab364 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -9,31 +9,6 @@ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 
 		/sys/devices/system/cpu/cpu#/
 
-What:		/sys/devices/system/cpu/sched_mc_power_savings
-		/sys/devices/system/cpu/sched_smt_power_savings
-Date:		June 2006
-Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
-Description:	Discover and adjust the kernel's multi-core scheduler support.
-
-		Possible values are:
-
-		0 - No power saving load balance (default value)
-		1 - Fill one thread/core/package first for long running threads
-		2 - Also bias task wakeups to semi-idle cpu package for power
-		    savings
-
-		sched_mc_power_savings is dependent upon SCHED_MC, which is
-		itself architecture dependent.
-
-		sched_smt_power_savings is dependent upon SCHED_SMT, which
-		is itself architecture dependent.
-
-		The two files are independent of each other. It is possible
-		that one file may be present without the other.
-
-		Introduced by git commit 5c45bf27.
-
-
 What:		/sys/devices/system/cpu/kernel_max
 		/sys/devices/system/cpu/offline
 		/sys/devices/system/cpu/online
diff --git a/Documentation/scheduler/sched-domains.txt b/Documentation/scheduler/sched-domains.txt
index b7ee379..443f0c7 100644
--- a/Documentation/scheduler/sched-domains.txt
+++ b/Documentation/scheduler/sched-domains.txt
@@ -61,10 +61,6 @@ might have just one domain covering its one NUMA level.
 struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
 the specifics and what to tune.
 
-For SMT, the architecture must define CONFIG_SCHED_SMT and provide a
-cpumask_t cpu_sibling_map[NR_CPUS], where cpu_sibling_map[i] is the mask of
-all "i"'s siblings as well as "i" itself.
-
 Architectures may retain the regular override the default SD_*_INIT flags
 while using the generic domain builder in kernel/sched.c if they wish to
 retain the traditional SMT->SMP->NUMA topology (or some subset of that). This
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 66d250c..5f33068 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -414,8 +414,7 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	 * For perf, we return last level cache shared map.
 	 * And for power savings, we return cpu_core_map
 	 */
-	if ((sched_mc_power_savings || sched_smt_power_savings) &&
-	    !(cpu_has(c, X86_FEATURE_AMD_DCM)))
+	if (!(cpu_has(c, X86_FEATURE_AMD_DCM)))
 		return cpu_core_mask(cpu);
 	else
 		return cpu_llc_shared_mask(cpu);
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 3991502..b0c9b87 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -259,10 +259,6 @@ int __init cpu_dev_init(void)
 	int err;
 
 	err = sysdev_class_register(&cpu_sysdev_class);
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-	if (!err)
-		err = sched_create_sysfs_power_savings_entries(&cpu_sysdev_class);
-#endif
 
 	return err;
 }
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 305c263..9a2fdcb 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -35,8 +35,6 @@ extern void cpu_remove_sysdev_attr(struct sysdev_attribute *attr);
 extern int cpu_add_sysdev_attr_group(struct attribute_group *attrs);
 extern void cpu_remove_sysdev_attr_group(struct attribute_group *attrs);
 
-extern int sched_create_sysfs_power_savings_entries(struct sysdev_class *cls);
-
 #ifdef CONFIG_HOTPLUG_CPU
 extern void unregister_cpu(struct cpu *cpu);
 extern ssize_t arch_cpu_probe(const char *, size_t);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index cf0eb34..65985d3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -848,54 +848,8 @@ enum cpu_idle_type {
 #define SD_PREFER_SIBLING	0x1000	/* Prefer to place tasks in a sibling domain */
 #define SD_OVERLAP		0x2000	/* sched_domains of this level overlap */
 
-enum powersavings_balance_level {
-	POWERSAVINGS_BALANCE_NONE = 0,  /* No power saving load balance */
-	POWERSAVINGS_BALANCE_BASIC,	/* Fill one thread/core/package
-					 * first for long running threads
-					 */
-	POWERSAVINGS_BALANCE_WAKEUP,	/* Also bias task wakeups to semi-idle
-					 * cpu package for power savings
-					 */
-	MAX_POWERSAVINGS_BALANCE_LEVELS
-};
-
-extern int sched_mc_power_savings, sched_smt_power_savings;
-
-static inline int sd_balance_for_mc_power(void)
-{
-	if (sched_smt_power_savings)
-		return SD_POWERSAVINGS_BALANCE;
-
-	if (!sched_mc_power_savings)
-		return SD_PREFER_SIBLING;
-
-	return 0;
-}
-
-static inline int sd_balance_for_package_power(void)
-{
-	if (sched_mc_power_savings | sched_smt_power_savings)
-		return SD_POWERSAVINGS_BALANCE;
-
-	return SD_PREFER_SIBLING;
-}
-
 extern int __weak arch_sd_sibiling_asym_packing(void);
 
-/*
- * Optimise SD flags for power savings:
- * SD_BALANCE_NEWIDLE helps aggressive task consolidation and power savings.
- * Keep default SD flags if sched_{smt,mc}_power_saving=0
- */
-
-static inline int sd_power_saving_flags(void)
-{
-	if (sched_mc_power_savings | sched_smt_power_savings)
-		return SD_BALANCE_NEWIDLE;
-
-	return 0;
-}
-
 struct sched_group_power {
 	atomic_t ref;
 	/*
diff --git a/include/linux/topology.h b/include/linux/topology.h
index e26db03..3416d5e 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -135,8 +135,6 @@ int arch_update_cpu_topology(void);
 				| 0*SD_SHARE_CPUPOWER			\
 				| 1*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
-				| sd_balance_for_mc_power()		\
-				| sd_power_saving_flags()		\
 				,					\
 	.last_balance		= jiffies,				\
 	.balance_interval	= 1,					\
@@ -168,8 +166,6 @@ int arch_update_cpu_topology(void);
 				| 0*SD_SHARE_CPUPOWER			\
 				| 0*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
-				| sd_balance_for_package_power()	\
-				| sd_power_saving_flags()		\
 				,					\
 	.last_balance		= jiffies,				\
 	.balance_interval	= 1,					\
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4dbfd04..ee2bcfd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5924,8 +5924,6 @@ static const struct cpumask *cpu_cpu_mask(int cpu)
 	return cpumask_of_node(cpu_to_node(cpu));
 }
 
-int sched_smt_power_savings = 0, sched_mc_power_savings = 0;
-
 struct sd_data {
 	struct sched_domain **__percpu sd;
 	struct sched_group **__percpu sg;
@@ -6635,99 +6633,6 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	mutex_unlock(&sched_domains_mutex);
 }
 
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-static void reinit_sched_domains(void)
-{
-	get_online_cpus();
-
-	/* Destroy domains first to force the rebuild */
-	partition_sched_domains(0, NULL, NULL);
-
-	rebuild_sched_domains();
-	put_online_cpus();
-}
-
-static ssize_t sched_power_savings_store(const char *buf, size_t count, int smt)
-{
-	unsigned int level = 0;
-
-	if (sscanf(buf, "%u", &level) != 1)
-		return -EINVAL;
-
-	/*
-	 * level is always be positive so don't check for
-	 * level < POWERSAVINGS_BALANCE_NONE which is 0
-	 * What happens on 0 or 1 byte write,
-	 * need to check for count as well?
-	 */
-
-	if (level >= MAX_POWERSAVINGS_BALANCE_LEVELS)
-		return -EINVAL;
-
-	if (smt)
-		sched_smt_power_savings = level;
-	else
-		sched_mc_power_savings = level;
-
-	reinit_sched_domains();
-
-	return count;
-}
-
-#ifdef CONFIG_SCHED_MC
-static ssize_t sched_mc_power_savings_show(struct sysdev_class *class,
-					   struct sysdev_class_attribute *attr,
-					   char *page)
-{
-	return sprintf(page, "%u\n", sched_mc_power_savings);
-}
-static ssize_t sched_mc_power_savings_store(struct sysdev_class *class,
-					    struct sysdev_class_attribute *attr,
-					    const char *buf, size_t count)
-{
-	return sched_power_savings_store(buf, count, 0);
-}
-static SYSDEV_CLASS_ATTR(sched_mc_power_savings, 0644,
-			 sched_mc_power_savings_show,
-			 sched_mc_power_savings_store);
-#endif
-
-#ifdef CONFIG_SCHED_SMT
-static ssize_t sched_smt_power_savings_show(struct sysdev_class *dev,
-					    struct sysdev_class_attribute *attr,
-					    char *page)
-{
-	return sprintf(page, "%u\n", sched_smt_power_savings);
-}
-static ssize_t sched_smt_power_savings_store(struct sysdev_class *dev,
-					     struct sysdev_class_attribute *attr,
-					     const char *buf, size_t count)
-{
-	return sched_power_savings_store(buf, count, 1);
-}
-static SYSDEV_CLASS_ATTR(sched_smt_power_savings, 0644,
-		   sched_smt_power_savings_show,
-		   sched_smt_power_savings_store);
-#endif
-
-int __init sched_create_sysfs_power_savings_entries(struct sysdev_class *cls)
-{
-	int err = 0;
-
-#ifdef CONFIG_SCHED_SMT
-	if (smt_capable())
-		err = sysfs_create_file(&cls->kset.kobj,
-					&attr_sched_smt_power_savings.attr);
-#endif
-#ifdef CONFIG_SCHED_MC
-	if (!err && mc_capable())
-		err = sysfs_create_file(&cls->kset.kobj,
-					&attr_sched_mc_power_savings.attr);
-#endif
-	return err;
-}
-#endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
-
 /*
  * Update cpusets according to cpu_active mask.  If cpusets are
  * disabled, cpuset_update_active_cpus() becomes a simple wrapper
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8e42de9..f0f140b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4413,27 +4413,7 @@ static int need_active_balance(struct sched_domain *sd, int idle,
 		if ((sd->flags & SD_ASYM_PACKING) && busiest_cpu > this_cpu)
 			return 1;
 
-		/*
-		 * The only task running in a non-idle cpu can be moved to this
-		 * cpu in an attempt to completely freeup the other CPU
-		 * package.
-		 *
-		 * The package power saving logic comes from
-		 * find_busiest_group(). If there are no imbalance, then
-		 * f_b_g() will return NULL. However when sched_mc={1,2} then
-		 * f_b_g() will select a group from which a running task may be
-		 * pulled to this cpu in order to make the other package idle.
-		 * If there is no opportunity to make a package idle and if
-		 * there are no imbalance, then f_b_g() will return NULL and no
-		 * action will be taken in load_balance_newidle().
-		 *
-		 * Under normal task pull operation due to imbalance, there
-		 * will be more than one task in the source run queue and
-		 * move_tasks() will succeed.  ld_moved will be true and this
-		 * active balance code will not be triggered.
-		 */
-		if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
-			return 0;
+		return 0;
 	}
 
 	return unlikely(sd->nr_balance_failed > sd->cache_nice_tries+2);
@@ -4735,104 +4715,10 @@ static struct {
 	unsigned long next_balance;     /* in jiffy units */
 } nohz ____cacheline_aligned;
 
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-/**
- * lowest_flag_domain - Return lowest sched_domain containing flag.
- * @cpu:	The cpu whose lowest level of sched domain is to
- *		be returned.
- * @flag:	The flag to check for the lowest sched_domain
- *		for the given cpu.
- *
- * Returns the lowest sched_domain of a cpu which contains the given flag.
- */
-static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
-{
-	struct sched_domain *sd;
-
-	for_each_domain(cpu, sd)
-		if (sd->flags & flag)
-			break;
-
-	return sd;
-}
-
-/**
- * for_each_flag_domain - Iterates over sched_domains containing the flag.
- * @cpu:	The cpu whose domains we're iterating over.
- * @sd:		variable holding the value of the power_savings_sd
- *		for cpu.
- * @flag:	The flag to filter the sched_domains to be iterated.
- *
- * Iterates over all the scheduler domains for a given cpu that has the 'flag'
- * set, starting from the lowest sched_domain to the highest.
- */
-#define for_each_flag_domain(cpu, sd, flag) \
-	for (sd = lowest_flag_domain(cpu, flag); \
-		(sd && (sd->flags & flag)); sd = sd->parent)
-
-/**
- * find_new_ilb - Finds the optimum idle load balancer for nomination.
- * @cpu:	The cpu which is nominating a new idle_load_balancer.
- *
- * Returns:	Returns the id of the idle load balancer if it exists,
- *		Else, returns >= nr_cpu_ids.
- *
- * This algorithm picks the idle load balancer such that it belongs to a
- * semi-idle powersavings sched_domain. The idea is to try and avoid
- * completely idle packages/cores just for the purpose of idle load balancing
- * when there are other idle cpu's which are better suited for that job.
- */
-static int find_new_ilb(int cpu)
-{
-	int ilb = cpumask_first(nohz.idle_cpus_mask);
-	struct sched_group *ilbg;
-	struct sched_domain *sd;
-
-	/*
-	 * Have idle load balancer selection from semi-idle packages only
-	 * when power-aware load balancing is enabled
-	 */
-	if (!(sched_smt_power_savings || sched_mc_power_savings))
-		goto out_done;
-
-	/*
-	 * Optimize for the case when we have no idle CPUs or only one
-	 * idle CPU. Don't walk the sched_domain hierarchy in such cases
-	 */
-	if (cpumask_weight(nohz.idle_cpus_mask) < 2)
-		goto out_done;
-
-	rcu_read_lock();
-	for_each_flag_domain(cpu, sd, SD_POWERSAVINGS_BALANCE) {
-		ilbg = sd->groups;
-
-		do {
-			if (ilbg->group_weight !=
-				atomic_read(&ilbg->sgp->nr_busy_cpus)) {
-				ilb = cpumask_first_and(nohz.idle_cpus_mask,
-							sched_group_cpus(ilbg));
-				goto unlock;
-			}
-
-			ilbg = ilbg->next;
-
-		} while (ilbg != sd->groups);
-	}
-unlock:
-	rcu_read_unlock();
-
-out_done:
-	if (ilb < nr_cpu_ids && idle_cpu(ilb))
-		return ilb;
-
-	return nr_cpu_ids;
-}
-#else /*  (CONFIG_SCHED_MC || CONFIG_SCHED_SMT) */
 static inline int find_new_ilb(int call_cpu)
 {
 	return nr_cpu_ids;
 }
-#endif
 
 /*
  * Kick a CPU to do the nohz balancing, if it is time for it. We pick the
diff --git a/tools/power/cpupower/man/cpupower-set.1 b/tools/power/cpupower/man/cpupower-set.1
index c4954a9..9dbd536 100644
--- a/tools/power/cpupower/man/cpupower-set.1
+++ b/tools/power/cpupower/man/cpupower-set.1
@@ -85,15 +85,6 @@ Adjust the kernel's multi-core scheduler support.
 savings
 .RE
 
-sched_mc_power_savings is dependent upon SCHED_MC, which is
-itself architecture dependent.
-
-sched_smt_power_savings is dependent upon SCHED_SMT, which
-is itself architecture dependent.
-
-The two files are independent of each other. It is possible
-that one file may be present without the other.
-
 .SH "SEE ALSO"
 cpupower-info(1), cpupower-monitor(1), powertop(1)
 .PP
diff --git a/tools/power/cpupower/utils/helpers/sysfs.c b/tools/power/cpupower/utils/helpers/sysfs.c
index c634302..5650127 100644
--- a/tools/power/cpupower/utils/helpers/sysfs.c
+++ b/tools/power/cpupower/utils/helpers/sysfs.c
@@ -362,22 +362,7 @@ char *sysfs_get_cpuidle_driver(void)
  */
 int sysfs_get_sched(const char *smt_mc)
 {
-	unsigned long value;
-	char linebuf[MAX_LINE_LEN];
-	char *endp;
-	char path[SYSFS_PATH_MAX];
-
-	if (strcmp("mc", smt_mc) && strcmp("smt", smt_mc))
-		return -EINVAL;
-
-	snprintf(path, sizeof(path),
-		PATH_TO_CPU "sched_%s_power_savings", smt_mc);
-	if (sysfs_read_file(path, linebuf, MAX_LINE_LEN) == 0)
-		return -1;
-	value = strtoul(linebuf, &endp, 0);
-	if (endp == linebuf || errno == ERANGE)
-		return -1;
-	return value;
+	return -ENODEV;
 }
 
 /*
@@ -388,21 +373,5 @@ int sysfs_get_sched(const char *smt_mc)
  */
 int sysfs_set_sched(const char *smt_mc, int val)
 {
-	char linebuf[MAX_LINE_LEN];
-	char path[SYSFS_PATH_MAX];
-	struct stat statbuf;
-
-	if (strcmp("mc", smt_mc) && strcmp("smt", smt_mc))
-		return -EINVAL;
-
-	snprintf(path, sizeof(path),
-		PATH_TO_CPU "sched_%s_power_savings", smt_mc);
-	sprintf(linebuf, "%d", val);
-
-	if (stat(path, &statbuf) != 0)
-		return -ENODEV;
-
-	if (sysfs_write_file(path, linebuf, MAX_LINE_LEN) == 0)
-		return -1;
-	return 0;
+	return -ENODEV;
 }



* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 10:28   ` Peter Zijlstra
@ 2012-01-09 10:30     ` Peter Zijlstra
  2012-01-09 11:00     ` Vaidyanathan Srinivasan
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 39+ messages in thread
From: Peter Zijlstra @ 2012-01-09 10:30 UTC (permalink / raw)
  To: Youquan Song
  Cc: linux-kernel, mingo, tglx, hpa, akpm, stable, suresh.b.siddha,
	arjan, len.brown, anhua.xu, chaohong.guo, Youquan Song,
	Paul Turner

On Mon, 2012-01-09 at 11:28 +0100, Peter Zijlstra wrote:
> +++ b/arch/x86/kernel/smpboot.c
> @@ -414,8 +414,7 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
>          * For perf, we return last level cache shared map.
>          * And for power savings, we return cpu_core_map
>          */
> -       if ((sched_mc_power_savings || sched_smt_power_savings) &&
> -           !(cpu_has(c, X86_FEATURE_AMD_DCM)))
> +       if (!(cpu_has(c, X86_FEATURE_AMD_DCM)))
>                 return cpu_core_mask(cpu);
>         else
>                 return cpu_llc_shared_mask(cpu); 

That should of course become:

 return cpu_llc_shared_mask(cpu);




* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 10:28   ` Peter Zijlstra
  2012-01-09 10:30     ` Peter Zijlstra
@ 2012-01-09 11:00     ` Vaidyanathan Srinivasan
  2012-01-09 14:35       ` Peter Zijlstra
  2012-01-09 14:13     ` Arjan van de Ven
  2012-05-18 10:19     ` [tip:sched/core] sched: Remove stale power aware scheduling remnants and dysfunctional knobs tip-bot for Peter Zijlstra
  3 siblings, 1 reply; 39+ messages in thread
From: Vaidyanathan Srinivasan @ 2012-01-09 11:00 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song, Paul Turner

* Peter Zijlstra <peterz@infradead.org> [2012-01-09 11:28:35]:

> On Mon, 2012-01-09 at 11:06 +0100, Peter Zijlstra wrote:
> > I'm >< close to committing a patch removing all the power_saving
> > magic from the scheduler. 
> 
> Unless someone steps up that says they care about this crap and are
> willing to clean it up this goes in...

Hi Peter,

The power savings knob in the scheduler is a good feature to have, and it
benefits Linux users.  I will help to clean this up.

> And that very much includes ripping out the current interface and
> replacing it with something sane, the current setup of {0,1,2}x{0,1,2}
> possible configuration settings is completely unacceptable, esp since
> people seem to want to inflate that to 27 or worse.

Let's combine the MC/SMT options and create sched_powersavings={0,1,2}.
That will make it a single sysfs knob without any dependencies.

Based on the architecture, kernel can consolidate at core level or
thread level when available.

First let me propose the fix for Youquan Song's problem and then send
a patch to simplify the sched_powersavings knob.
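
A minimal sketch of what such a combined knob could look like, reusing the
sysdev attribute pattern from the code being removed above (the
sched_powersavings name and the level mapping are proposals, not existing
code):

static int sched_powersavings; /* 0 = performance, 1 = balanced, 2 = aggressive */

static ssize_t sched_powersavings_show(struct sysdev_class *cls,
				       struct sysdev_class_attribute *attr,
				       char *page)
{
	return sprintf(page, "%u\n", sched_powersavings);
}

static ssize_t sched_powersavings_store(struct sysdev_class *cls,
					struct sysdev_class_attribute *attr,
					const char *buf, size_t count)
{
	unsigned int level;

	if (sscanf(buf, "%u", &level) != 1 || level > 2)
		return -EINVAL;

	sched_powersavings = level;
	reinit_sched_domains();	/* rebuild domains so the new policy applies */

	return count;
}

static SYSDEV_CLASS_ATTR(sched_powersavings, 0644,
			 sched_powersavings_show,
			 sched_powersavings_store);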

--Vaidy



* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10  0:14   ` [PATCH] x86,sched: Fix sched_smt_power_savings totally broken Youquan Song
@ 2012-01-09 11:05     ` Peter Zijlstra
  2012-01-10  5:58       ` Youquan Song
  2012-01-10 16:44       ` Vaidyanathan Srinivasan
  2012-01-09 11:12     ` Peter Zijlstra
  1 sibling, 2 replies; 39+ messages in thread
From: Peter Zijlstra @ 2012-01-09 11:05 UTC (permalink / raw)
  To: Youquan Song
  Cc: linux-kernel, mingo, tglx, hpa, akpm, stable, suresh.b.siddha,
	arjan, len.brown, anhua.xu, chaohong.guo, Youquan Song

On Mon, 2012-01-09 at 19:14 -0500, Youquan Song wrote:
> Fine, I will develop another patch based on your suggestion soon.
> 

> 
> @@ -3923,6 +3923,10 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
>                                                 SCHED_POWER_SCALE);
>         if (!sgs->group_capacity)
>                 sgs->group_capacity = fix_small_capacity(sd, group);
> +
> +       if (sched_smt_power_savings)
> +               sgs->group_capacity *= 2; 

Note, this has the hard-coded assumption you only have 2 threads per
core, which while true for Intel, isn't true in general. I think you
meant to write *= group->group_weight or somesuch.

Also, you forgot to limit this to the SD_SHARE_CPUPOWER domain, you're
now doubling the capacity for all domains.

Furthermore, have a look at the SD_PREFER_SIBLING logic and make sure
you're not fighting that.
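
Folding those three points together, a corrected hunk might look roughly like
this (an untested sketch; using sd->child to identify groups that are SMT
cores is an assumption about the intended placement):

+	if (sched_smt_power_savings && sd->child &&
+	    (sd->child->flags & SD_SHARE_CPUPOWER))
+		sgs->group_capacity *= group->group_weight;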


* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10  0:14   ` [PATCH] x86,sched: Fix sched_smt_power_savings totally broken Youquan Song
  2012-01-09 11:05     ` Peter Zijlstra
@ 2012-01-09 11:12     ` Peter Zijlstra
  2012-01-09 14:29         ` Vincent Guittot
  1 sibling, 1 reply; 39+ messages in thread
From: Peter Zijlstra @ 2012-01-09 11:12 UTC (permalink / raw)
  To: Youquan Song
  Cc: linux-kernel, mingo, tglx, hpa, akpm, stable, suresh.b.siddha,
	arjan, len.brown, anhua.xu, chaohong.guo, Youquan Song

On Mon, 2012-01-09 at 19:14 -0500, Youquan Song wrote:
> > Yes it is.. also that knob should die! Like I've been saying for way too
> > long. I'm >< close to committing a patch removing all the power_saving
> > magic from the scheduler.
> Sorry, I did not notice it.

I hadn't posted it before, but just about every time people post
something related to smt/mc power balancing I say this needs to get
cleaned up. Since telling people doesn't seem to have any effect
whatsoever, stronger measures are needed.

> But currently, in many real tests, the knob has proven to save power on semi-idle systems.
> It is useful in many user scenarios.

I'm not saying the stuff is without merit, I'm just saying that a) the
interface is utter crap and b) nobody seems to spend enough time on it
to 1) understand the load-balancer so that 2) he can integrate the power
saving stuff properly.

Instead I get one sporadic band-aid after another.

Also, fundamentally, the split between {smt/mc/numa/book}_power_savings
is completely and fundamentally broken, nobody cares about the actual
topology.


* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 10:28   ` Peter Zijlstra
  2012-01-09 10:30     ` Peter Zijlstra
  2012-01-09 11:00     ` Vaidyanathan Srinivasan
@ 2012-01-09 14:13     ` Arjan van de Ven
  2012-05-18 10:19     ` [tip:sched/core] sched: Remove stale power aware scheduling remnants and dysfunctional knobs tip-bot for Peter Zijlstra
  3 siblings, 0 replies; 39+ messages in thread
From: Arjan van de Ven @ 2012-01-09 14:13 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, len.brown, anhua.xu, chaohong.guo, Youquan Song,
	Paul Turner

On 1/9/2012 2:28 AM, Peter Zijlstra wrote:
> On Mon, 2012-01-09 at 11:06 +0100, Peter Zijlstra wrote:
>> I'm >< close to committing a patch removing all the power_saving
>> magic from the scheduler. 
> 
> Unless someone steps up that says they care about this crap and are
> willing to clean it up this goes in...
> 

there is one legitimate choice to make in the scheduler, which is a real
honest preference:
the spread out versus "sort left" preference.

that doesn't need 15 knobs, that needs one.
I can totally buy that this might be easiest done by ripping out all
existing stuff, and then doing something relatively simple
(say, in the "which CPU to run at" logic, a minor bias)




* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 11:12     ` Peter Zijlstra
@ 2012-01-09 14:29         ` Vincent Guittot
  0 siblings, 0 replies; 39+ messages in thread
From: Vincent Guittot @ 2012-01-09 14:29 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

Hi Peter,

I'm also using the sched_mc level for doing power-saving load balancing on
the ARM platform, and we see real benefits.

On 9 January 2012 12:12, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> On Mon, 2012-01-09 at 19:14 -0500, Youquan Song wrote:
>> > Yes it is.. also that knob should die! Like I've been saying for way too
>> > long. I'm >< close to committing a patch removing all the power_saving
>> > magic from the scheduler.
>> Sorry, I did not notice it.
>
> I hadn't posted it before, but just about every time people post
> something related to smt/mc power balancing I say this needs to get
> cleaned up. Since telling people doesn't seem to have any effect
> whatsoever, stronger measures are needed.
>
>> But currently, in many real tests, the knob has proven to save power on semi-idle systems.
>> It is useful in many user scenarios.
>
> I'm not saying the stuff is without merit, I'm just saying that a) the
> interface is utter crap and b) nobody seems to spend enough time on it
> to 1) understand the load-balancer so that 2) he can integrate the power
> saving stuff properly.
>
> Instead I get one sporadic band-aid after another.
>

I can also help on this topic.

> Also, fundamentally, the split between {smt/mc/numa/book}_power_savings
> is completely and fundamentally broken, nobody cares about the actual
> topology.

We might modify the way we choose between power and performance mode,
because it's not always a matter of gathering or spreading tasks on
cpus, but until we find a better interface this is the way to enable
powersaving mode.




* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 11:00     ` Vaidyanathan Srinivasan
@ 2012-01-09 14:35       ` Peter Zijlstra
  2012-01-09 16:03         ` Vaidyanathan Srinivasan
  0 siblings, 1 reply; 39+ messages in thread
From: Peter Zijlstra @ 2012-01-09 14:35 UTC (permalink / raw)
  To: svaidy
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song, Paul Turner

On Mon, 2012-01-09 at 16:30 +0530, Vaidyanathan Srinivasan wrote:
> Let's combine the MC/SMT options and create sched_powersavings={0,1,2}.
> That will make it a single sysfs knob without any dependencies.
> 
Is there really any sane rationale to keep the 1/2 thing? Why not have a
single boolean knob that says performance/power?


* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 14:29         ` Vincent Guittot
  (?)
@ 2012-01-09 14:46         ` Peter Zijlstra
  2012-01-10  2:12           ` Indan Zupancic
  -1 siblings, 1 reply; 39+ messages in thread
From: Peter Zijlstra @ 2012-01-09 14:46 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song, Paul Turner

On Mon, 2012-01-09 at 15:29 +0100, Vincent Guittot wrote:

> I'm also using sched_mc level for doing powersaving load balance on
> ARM platform and we have real benefits.

Right, I've never said that power aware balancing was without merit, I
know it matters (quite a lot for some).

> We might modify the way we choose between power or performance mode
> because it's not always a matter of gathering or spreading tasks on
> cpus but until we found a best interface it's the way to enable
> powersaving mode

Sure, it was the only interface available.

But I really want to get rid of the topology based knobs we have now,
preferably I even want to get rid of the multi-value thing.

pjt still needs to post his linsched rework *poke* *poke*, that should
give a good basis to rework most of this without regressing the world
and then some.

But even without that I think we can do better than we do currently.




* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09  8:56 [PATCH] x86,sched: Fix sched_smt_power_savings totally broken Youquan Song
  2012-01-09 10:06 ` Peter Zijlstra
@ 2012-01-09 15:37 ` Greg KH
  1 sibling, 0 replies; 39+ messages in thread
From: Greg KH @ 2012-01-09 15:37 UTC (permalink / raw)
  To: Youquan Song
  Cc: linux-kernel, mingo, a.p.zijlstra, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

On Mon, Jan 09, 2012 at 04:56:07PM +0800, Youquan Song wrote:
> sched_smt_power_savings is totally broken in the latest Linux and -tip trees.
> 
> When sched_smt_power_savings is set to 1, the scheduler tries to schedule processes
> on the least number of hyper-threads per core possible. In other words,
> the process load is distributed such that all the hyper-threads in a core and
> all the cores within the same processor are busy before the load is distributed
> to other hyper-threads and cores in another processor.
> 
> Tested on an Intel Xeon machine with 2 physical CPUs, each with 8 cores / 16
> threads. Physical CPU 0 includes cpu[0~7] and cpu[16~23], while physical CPU 1
> includes cpu[8~15] and cpu[24~31].
>
> [...]

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read Documentation/stable_kernel_rules.txt
for how to do this properly.

</formletter>


* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 14:35       ` Peter Zijlstra
@ 2012-01-09 16:03         ` Vaidyanathan Srinivasan
  2012-01-09 16:13           ` Peter Zijlstra
  0 siblings, 1 reply; 39+ messages in thread
From: Vaidyanathan Srinivasan @ 2012-01-09 16:03 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song, Paul Turner

* Peter Zijlstra <peterz@infradead.org> [2012-01-09 15:35:07]:

> On Mon, 2012-01-09 at 16:30 +0530, Vaidyanathan Srinivasan wrote:
> > Let's combine the MC/SMT options and create sched_powersavings={0,1,2}.
> > That will make it a single sysfs knob without any dependencies.
> > 
> Is there really any sane rationale to keep the 1/2 thing? Why not have a
> single boolean knob that says performance/power?

Yes, based on the architecture and topology, we do have two sweet
spots for power vs performance trade-offs.  The first level should provide
power savings with marginal performance impact, and the second one should
go for the most aggressive power savings.

The first one should generally be recommended as the default, to strike
the right balance between performance and power savings, while the
second one should be used to reduce power consumption on
unimportant workloads or under certain constraints.

Some example policies:

sched_powersavings=1:

        Enable consolidation at MC level

sched_powersavings=2:

        Enable aggressive consolidation at MC level and SMT level if
        available. In case arch can benefit from cross node
        consolidation, then enable it.

Having the above simple split in policy will enable wide adoption
where the first level can be a recommended default.  Having just
a boolean enable/disable will mean the end-user will have to decide
when to turn on and later off for best workload experience.

This is similar to the cpufreq policies of performance, ondemand and powersave.
They have their unique use cases, and this design choice helps us ship
ondemand as the default.

--Vaidy



* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 16:03         ` Vaidyanathan Srinivasan
@ 2012-01-09 16:13           ` Peter Zijlstra
  2012-01-09 17:05             ` Vaidyanathan Srinivasan
  0 siblings, 1 reply; 39+ messages in thread
From: Peter Zijlstra @ 2012-01-09 16:13 UTC (permalink / raw)
  To: svaidy
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song, Paul Turner

On Mon, 2012-01-09 at 21:33 +0530, Vaidyanathan Srinivasan wrote:

> Yes, based on the architecture and topology, we do have two sweet
> spots for power vs performance trade-offs.  The first level should provide
> power savings with marginal performance impact, and the second one should
> go for the most aggressive power savings.

Colour me unconvinced; cache-heavy workloads will suffer greatly from
your 1.

> The first one should generally be recommended as the default, to strike
> the right balance between performance and power savings, while the
> second one should be used to reduce power consumption on
> unimportant workloads or under certain constraints.
> 
> Some example policies:
> 
> sched_powersavings=1:
> 
>         Enable consolidation at MC level
> 
> sched_powersavings=2:
> 
>         Enable aggressive consolidation at MC level and SMT level if
>         available. In case arch can benefit from cross node
>         consolidation, then enable it.

You fail for mentioning MC/SMT..

> Having the above simple split in policy will enable wide adoption
> where the first level can be a recommended default.  Having just
> a boolean enable/disable will mean the end-user will have to decide
> when to turn on and later off for best workload experience.

Picking one of two states is too hard, hence we give them one of three
states to pick from.. How does that make sense?

> This is similar to the cpufreq policies of performance, ondemand and powersave.
> They have their unique use cases, and this design choice helps us ship
> ondemand as the default.

You fail for thinking having multiple cpufreq governors is a good thing.
The result is that they all suck.


* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 16:13           ` Peter Zijlstra
@ 2012-01-09 17:05             ` Vaidyanathan Srinivasan
  0 siblings, 0 replies; 39+ messages in thread
From: Vaidyanathan Srinivasan @ 2012-01-09 17:05 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song, Paul Turner

* Peter Zijlstra <peterz@infradead.org> [2012-01-09 17:13:17]:

> On Mon, 2012-01-09 at 21:33 +0530, Vaidyanathan Srinivasan wrote:
> 
> > Yes, based on the architecture and topology, we do have two sweet
> > spots for power vs performance trade-offs.  The first level should provide
> > power savings with marginal performance impact, and the second one should
> > go for the most aggressive power savings.
> 
> Colour me unconvinced; cache-heavy workloads will suffer greatly from
> your 1.

Certain workloads will get hit heavily by '1', so a default choice of
'1' is bad for that case. However, many general workloads could see
good power savings with marginal performance loss.  For those general
cases, we could keep '1' as the default.

> > The first one should generally be recommended as the default, to strike
> > the right balance between performance and power savings, while the
> > second one should be used to reduce power consumption on
> > unimportant workloads or under certain constraints.
> > 
> > Some example policies:
> > 
> > sched_powersavings=1:
> > 
> >         Enable consolidation at MC level
> > 
> > sched_powersavings=2:
> > 
> >         Enable aggressive consolidation at MC level and SMT level if
> >         available. In case arch can benefit from cross node
> >         consolidation, then enable it.
> 
> You fail for mentioning MC/SMT..

My point was that SMT thread-level consolidation comes with a larger
performance loss compared to core level.  We need not expose this
setting to the end user, but the kernel can choose 'what' to enable at '2'
based on architecture/topology.

> > Having the above simple split in policy will enable wide adoption
> > where the first level can be a recommended default.  Having just
> > a boolean enable/disable will mean the end-user will have to decide
> > when to turn on and later off for best workload experience.
> 
> Picking one of two states is too hard, hence we given them one of three
> states to pick from.. How does that make sense?

Ok, I am suggesting that having three states will allow the user to
decide 'once' and leave the setting, rather than keep changing the
setting between enable and disable.

I am suggesting that designing powersavings=1 as a good default will
make adoption simple.  On the other hand, only having
power_savings=enable would mean users will have to decide 'when' to
enable it based on some policy, since leaving it enabled could affect
overall performance significantly.

> > This is similar to the cpufreq policies of performance, ondemand and powersave.
> > They have their unique use cases, and this design choice helps us ship
> > ondemand as the default.
> 
> You fail for thinking having multiple cpufreq governors is a good thing.
> The result is that they all suck.

The majority of users are served by the good default 'ondemand'
governor, which gives good power savings without affecting performance
much.  If we just had performance and powersave governors, we would
have to choose 'performance' as the default and design a method to choose
powersave only when utilization is low.

I agree with you that we have more cpufreq governors and
tunables than required.  But a good default covers most use cases,
leaving the rest for corner cases and workload-specific tunings.  On
modern systems, cpuidle states and scheduling policy become
significant power-savings trade-offs, and hence we will need the
flexibility.

--Vaidy




* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10  5:58       ` Youquan Song
@ 2012-01-09 23:52         ` Suresh Siddha
  2012-01-10  9:18           ` Ingo Molnar
  2012-01-10 16:54           ` Youquan Song
  0 siblings, 2 replies; 39+ messages in thread
From: Suresh Siddha @ 2012-01-09 23:52 UTC (permalink / raw)
  To: Youquan Song
  Cc: Peter Zijlstra, linux-kernel, mingo, tglx, hpa, akpm, stable,
	arjan, len.brown, anhua.xu, chaohong.guo, Youquan Song

On Tue, 2012-01-10 at 00:58 -0500, Youquan Song wrote:
> Thanks Peter! Here is the patch. 

Youquan, as far as I know both
sched_smt_power_savings and sched_mc_power_savings have been broken for at
least a year.

Have you checked the state of 'sched_mc_power_savings' to see if it is
working or gets addressed with your proposed patches?

thanks,
suresh


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 10:06 ` Peter Zijlstra
  2012-01-09 10:28   ` Peter Zijlstra
@ 2012-01-10  0:14   ` Youquan Song
  2012-01-09 11:05     ` Peter Zijlstra
  2012-01-09 11:12     ` Peter Zijlstra
  1 sibling, 2 replies; 39+ messages in thread
From: Youquan Song @ 2012-01-10  0:14 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

> Yes it is.. also that knob should die! Like i've been saying for way too
> long. I'm >< close to committing a patch removing all the power_saving
> magic from the scheduler.
Sorry, I did not notice that.
But in many real tests the knob has proven to save power on semi-idle systems.
It is useful in many user scenarios today.

> That's the most convoluted way I've seen that stated in a while. What
> you're saying is that all threads (of a socket) should be used before
> spilling over to another socket.
That only covers half of it.
The other half: ideally, all threads in one core should be used before spilling
over to another core.

> Hell no, that's completely the wrong thing to do. I think you want to
> frob at the group_capacity computation in update_sg_lb_stats.
> 

Yes, it can also be done the way you said.
I wrote the following test code earlier, which changes update_sg_lb_stats,
but more rework is needed for a proper fix.
Fine, I will develop another patch based on your suggestion soon.

Thanks
-Youquan

@@ -3923,6 +3923,10 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
                                                SCHED_POWER_SCALE);
        if (!sgs->group_capacity)
                sgs->group_capacity = fix_small_capacity(sd, group);
+
+       if (sched_smt_power_savings)
+               sgs->group_capacity *= 2;
+
        sgs->group_weight = group->group_weight;

        if (sgs->group_capacity > sgs->sum_nr_running)


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 14:29         ` Vincent Guittot
@ 2012-01-10  1:54         ` Suresh Siddha
  2012-01-10  8:08           ` Vincent Guittot
  -1 siblings, 1 reply; 39+ messages in thread
From: Suresh Siddha @ 2012-01-10  1:54 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Peter Zijlstra, Youquan Song, linux-kernel, mingo, tglx, hpa,
	akpm, stable, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

On Mon, 2012-01-09 at 15:29 +0100, Vincent Guittot wrote: 
> Hi Peter,
> 
> I'm also using sched_mc level for doing powersaving load balance on
> ARM platform and we have real benefits.

hi Vincent, Can you elaborate on your platform topology where you see
the benefits?

on x86, 'sched_mc' was designed for the case of consolidating load into
one socket before using another idle socket.

Just wondering if this is your use case too or if you are consolidating
at a different topological level.

thanks,
suresh 


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 14:46         ` Peter Zijlstra
@ 2012-01-10  2:12           ` Indan Zupancic
  2012-01-10  9:26             ` Peter Zijlstra
  0 siblings, 1 reply; 39+ messages in thread
From: Indan Zupancic @ 2012-01-10  2:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Vincent Guittot, Youquan Song, linux-kernel, mingo, tglx, hpa,
	akpm, stable, suresh.b.siddha, arjan, len.brown, anhua.xu

Hello,

On Mon, January 9, 2012 15:46, Peter Zijlstra wrote:
> On Mon, 2012-01-09 at 15:29 +0100, Vincent Guittot wrote:
>
>> I'm also using sched_mc level for doing powersaving load balance on
>> ARM platform and we have real benefits.
>
> Right, I've never said that power aware balancing was without merit, I
> know it matters (quite a lot for some).
>
>> We might modify the way we choose between power or performance mode
>> because it's not always a matter of gathering or spreading tasks on
>> cpus but until we found a best interface it's the way to enable
>> powersaving mode
>
> Sure, it was the only interface available.
>
> But I really want to get rid of the topology based knobs we have now,
> preferably I even want to get rid of the multi-value thing.
>
> pjt still needs to post his linsched rework *poke* *poke*, that should
> give a good basis to rework most of this without regressing the world
> and then some.
>
> But even without that I think we can do better than we do currently.

Perhaps it should be controlled by the CPU governor instead of being
a separate knob? Because in the end it is about power saving and that
is the CPU governor's role. So to me it seems that at least the policy
should come from/via the CPU governor. It seems right that there is one
place that controls both frequencies and active core count.

I guess what happens now is that powersaving aware scheduling causes a
core to become idle, with the CPU governor detecting that and putting
the core to sleep. Maybe it should be the other way round, with the
governor detecting that not all cores are needed and putting one to
sleep?

This way the scheduler can schedule optimally over the available cores
for maximum performance. For some hardware it may also be much better
to have multiple active cores at low voltage and frequency instead of
fewer cores at high frequency, especially if more cache is available.
To pick the right policy automatically, this kind of information has to
be known, and currently it seems to come from the
cpuidle/cpufreq drivers.

Having just one knob which chooses between gathering or spreading tasks
is a good start. But in the end it seems core idling should be controlled
automatically depending on the hardware and circumstances by something
like a CPU governor. It probably should also be aware of IRQ balancing
and making sure idle cores don't get any interrupts while there are active
cores that could handle them.

I know people weren't too happy with the current implementation of the
CPU governors either. Didn't someone implement a proof of concept version
of an improved governor a while ago? (Arjan?)

Greetings,

Indan



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 11:05     ` Peter Zijlstra
@ 2012-01-10  5:58       ` Youquan Song
  2012-01-09 23:52         ` Suresh Siddha
  2012-01-10 16:44       ` Vaidyanathan Srinivasan
  1 sibling, 1 reply; 39+ messages in thread
From: Youquan Song @ 2012-01-10  5:58 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

> Note, this has the hard-coded assumption you only have 2 threads per
> core, which while true for intel, isn't true in general. I think you
> meant to write *= group->group_weight or somesuch.
> 
> Also, you forgot to limit this to the SD_SHARE_CPUPOWER domain, you're
> now doubling the capacity for all domains.
> 
> Furthermore, have a look at the SD_PREFER_SIBLING logic and make sure
> you're not fighting that.
> 
Thanks Peter! Here is the patch. 

-Youquan

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a4d2b7a..4ada3e7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3923,6 +3923,10 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
                                                SCHED_POWER_SCALE);
        if (!sgs->group_capacity)
                sgs->group_capacity = fix_small_capacity(sd, group);
+
+       if (sched_smt_power_savings && !(sd->flags & SD_SHARE_CPUPOWER))
+               sgs->group_capacity = group->group_weight;
+
        sgs->group_weight = group->group_weight;

        if (sgs->group_capacity > sgs->sum_nr_running)

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10  1:54         ` Suresh Siddha
@ 2012-01-10  8:08           ` Vincent Guittot
  0 siblings, 0 replies; 39+ messages in thread
From: Vincent Guittot @ 2012-01-10  8:08 UTC (permalink / raw)
  To: Suresh Siddha
  Cc: Peter Zijlstra, Youquan Song, linux-kernel, mingo, tglx, hpa,
	akpm, stable, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

On 10 January 2012 02:54, Suresh Siddha <suresh.b.siddha@intel.com> wrote:
> On Mon, 2012-01-09 at 15:29 +0100, Vincent Guittot wrote:
>> Hi Peter,
>>
>> I'm also using sched_mc level for doing powersaving load balance on
>> ARM platform and we have real benefits.
>
> hi Vincent, Can you elaborate on your platform topology where you see
> the benefits?
>

Hi Suresh,

I'm using dual- and quad-core ARM platforms. When sched_mc is different
from 0, the topology is changed from 1 socket/cluster to 2 virtual
sockets/clusters with 1 or 2 cores in each cluster.  In low CPU load
situations, the tasks are gathered in 1 virtual socket, letting the
cores in the other socket enter idle states thanks to cpuidle. On ARM
platforms, cores in a cluster can enter interesting idle states even
if the complete cluster can't reach a deep idle state.
On the dual-core platform, I'm increasing the cpu_power to gather tasks
on 1 core when cores are at the lowest frequency. In fact I'm detecting
a low CPU load (only small tasks are running) and ensuring that these
small tasks are kept on 1 core.
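
(Editor's sketch of that cpu_power trick -- purely illustrative, with
hypothetical inputs, not Vincent's actual code: report a bigger
capacity at the lowest operating point so the balancer keeps the small
tasks packed on one core.)

static unsigned long packing_cpu_power(unsigned long base_power,
				       int at_lowest_freq, int load_is_low)
{
	/* Pretend one core has room for two small tasks. */
	if (at_lowest_freq && load_is_low)
		return 2 * base_power;
	return base_power;
}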

Regards,
Vincent

> on x86, 'sched_mc' was designed for the case of consolidating load into
> one socket before using another idle socket.
>
> Just wondering if this is your use case too or if you are consolidating
> at a different topological level.
>
> thanks,
> suresh
>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 23:52         ` Suresh Siddha
@ 2012-01-10  9:18           ` Ingo Molnar
  2012-01-10 14:32             ` Arjan van de Ven
  2012-01-10 16:54           ` Youquan Song
  1 sibling, 1 reply; 39+ messages in thread
From: Ingo Molnar @ 2012-01-10  9:18 UTC (permalink / raw)
  To: Suresh Siddha
  Cc: Youquan Song, Peter Zijlstra, linux-kernel, tglx, hpa, akpm,
	stable, arjan, len.brown, anhua.xu, chaohong.guo, Youquan Song


* Suresh Siddha <suresh.b.siddha@intel.com> wrote:

> On Tue, 2012-01-10 at 00:58 -0500, Youquan Song wrote:
> > Thanks Peter! Here is the patch. 
> 
> Youquan, As far as I know both the
> sched_smt_power_savings/sched_mc_power_savings are broken for at least a
> year.

We want a single knob, sched_power_savings - with the mc_ and 
smt_ ones still kept and aliased to sched_power_savings, for 
compatibility reasons.

As Peter said, the other reasonable option is to have no knob at 
all and restart this code from scratch.

The other thing we should do is to add sane defaults: to turn on 
sched_power_savings *AUTOMATICALLY* when a system is obviously 
battery driven and turn it off when the system is obviously AC 
driven. User-space can still implement policy and override the 
kernel's default, but there's absolutely no excuse to not offer 
this default ourselves.

The kernel default should probably change automatically on 
powerloss and battery events as well. (again, if user-space 
wants to do it itself, it can do it and we won't interfere.)
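
(A minimal sketch of such an automatic default -- editor's illustration
only; the enum and helper are hypothetical, not a proposed API:)

enum power_source { POWER_AC, POWER_BATTERY };

/* User space may override; -1 means "no override set". */
static int default_sched_power_savings(enum power_source src, int override)
{
	if (override >= 0)
		return override;
	return src == POWER_BATTERY;	/* on when on battery, off on AC */
}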

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10  2:12           ` Indan Zupancic
@ 2012-01-10  9:26             ` Peter Zijlstra
  0 siblings, 0 replies; 39+ messages in thread
From: Peter Zijlstra @ 2012-01-10  9:26 UTC (permalink / raw)
  To: Indan Zupancic
  Cc: Vincent Guittot, Youquan Song, linux-kernel, mingo, tglx, hpa,
	akpm, stable, suresh.b.siddha, arjan, len.brown, anhua.xu

On Tue, 2012-01-10 at 03:12 +0100, Indan Zupancic wrote:
> Perhaps it should be controlled by the CPU governor instead of being
> a separate knob? 

Possibly, Arjan was working on new infrastructure there and we wanted
tighter integration with the scheduler for a number of other reasons as
well, hopefully Arjan will get things sorted soon and show us what it
looks like ;-)

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10  9:18           ` Ingo Molnar
@ 2012-01-10 14:32             ` Arjan van de Ven
  2012-01-10 14:41               ` Peter Zijlstra
                                 ` (3 more replies)
  0 siblings, 4 replies; 39+ messages in thread
From: Arjan van de Ven @ 2012-01-10 14:32 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Suresh Siddha, Youquan Song, Peter Zijlstra, linux-kernel, tglx,
	hpa, akpm, stable, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

On 1/10/2012 1:18 AM, Ingo Molnar wrote:
> 
> * Suresh Siddha <suresh.b.siddha@intel.com> wrote:
> 
>> On Tue, 2012-01-10 at 00:58 -0500, Youquan Song wrote:
>>> Thanks Peter! Here is the patch. 
>>
>> Youquan, As far as I know both the
>> sched_smt_power_savings/sched_mc_power_savings are broken for at least a
>> year.
> 
> We want a single knob, sched_power_savings - with the mc_ and 
> smt_ ones still kept and aliased to sched_power_savings, for 
> compatibility reasons.
> 
> As Peter said, the other reasonable option is to have no knob at 
> all and restart this code from scratch.
> 
> The other thing we should do is to add sane defaults: to turn on 
> sched_power_savings *AUTOMATICALLY* when a system is obviously 
> battery driven and turn it off when the system is obviously AC 
> driven. User-space can still implement policy and override the 
> kernel's default, but there's absolutely no excuse to not offer 
> this default ourselves.

a very good default would be to keep all tasks on one package until half
the cores in the package are busy, and then start spreading out.

I suspect that'll be the 90% case coverage.
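
(Editor's sketch of that default, purely illustrative and not an
existing kernel interface:)

/* Consolidate onto the local package while fewer than half of its
 * cores are busy; otherwise start spreading out. */
static int prefer_consolidation(unsigned int busy_cores,
				unsigned int pkg_cores)
{
	return busy_cores < pkg_cores / 2;
}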

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10 14:32             ` Arjan van de Ven
@ 2012-01-10 14:41               ` Peter Zijlstra
  2012-01-10 14:54                 ` Arjan van de Ven
  2012-01-10 15:32                 ` Vincent Guittot
                                 ` (2 subsequent siblings)
  3 siblings, 1 reply; 39+ messages in thread
From: Peter Zijlstra @ 2012-01-10 14:41 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Ingo Molnar, Suresh Siddha, Youquan Song, linux-kernel, tglx,
	hpa, akpm, stable, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

On Tue, 2012-01-10 at 06:32 -0800, Arjan van de Ven wrote:
> 
> a very good default would be to keep all tasks on one package until half
> the cores in the package are busy, and then start spreading out. 

Does that still make sense when there's strong NUMA preference? By
forcing stuff on a single package you increase the number of remote
memory fetches (which generally generate more stalls), also the memory
controllers need to stay awake anyway.



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10 14:41               ` Peter Zijlstra
@ 2012-01-10 14:54                 ` Arjan van de Ven
  0 siblings, 0 replies; 39+ messages in thread
From: Arjan van de Ven @ 2012-01-10 14:54 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Suresh Siddha, Youquan Song, linux-kernel, tglx,
	hpa, akpm, stable, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

On 1/10/2012 6:41 AM, Peter Zijlstra wrote:
> On Tue, 2012-01-10 at 06:32 -0800, Arjan van de Ven wrote:
>>
>> a very good default would be to keep all tasks on one package until half
>> the cores in the package are busy, and then start spreading out. 
> 
> Does that still make sense when there's strong NUMA preference? By
> forcing stuff on a single package you increase the number of remote
> memory fetches (which generally generate more stalls), also the memory
> controllers need to stay awake anyway.

the memory controllers need to stay awake regardless of what you do;
it's more a memory bandwidth kind of thing.

if you have an enormous numa factor (>= 10 or so), then you really need
a completely different policy I suspect.
Thankfully those are rare.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10 14:32             ` Arjan van de Ven
@ 2012-01-10 15:32                 ` Vincent Guittot
                                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 39+ messages in thread
From: Vincent Guittot @ 2012-01-10 15:32 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Ingo Molnar, Suresh Siddha, Youquan Song, Peter Zijlstra,
	linux-kernel, tglx, hpa, akpm, stable, len.brown, anhua.xu,
	chaohong.guo, Youquan Song

On 10 January 2012 15:32, Arjan van de Ven <arjan@linux.intel.com> wrote:
> On 1/10/2012 1:18 AM, Ingo Molnar wrote:
>>
>> * Suresh Siddha <suresh.b.siddha@intel.com> wrote:
>>
>>> On Tue, 2012-01-10 at 00:58 -0500, Youquan Song wrote:
>>>> Thanks Peter! Here is the patch.
>>>
>>> Youquan, As far as I know both the
>>> sched_smt_power_savings/sched_mc_power_savings are broken for at least a
>>> year.
>>
>> We want a single knob, sched_power_savings - with the mc_ and
>> smt_ ones still kept and aliased to sched_power_savings, for
>> compatibility reasons.
>>
>> As Peter said, the other reasonable option is to have no knob at
>> all and restart this code from scratch.
>>
>> The other thing we should do is to add sane defaults: to turn on
>> sched_power_savings *AUTOMATICALLY* when a system is obviously
>> battery driven and turn it off when the system is obviously AC
>> driven. User-space can still implement policy and override the
>> kernel's default, but there's absolutely no excuse to not offer
>> this default ourselves.
>
> a very good default would be to keep all tasks on one package until half
> the cores in the package are busy, and then start spreading out.
>

The choice of spreading tasks or not is clearly architecture- or even
platform-dependent. Can't we get such optimal threshold information
from the architecture code?

> I suspect that'll be the 90% case coverage.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 11:05     ` Peter Zijlstra
  2012-01-10  5:58       ` Youquan Song
@ 2012-01-10 16:44       ` Vaidyanathan Srinivasan
  1 sibling, 0 replies; 39+ messages in thread
From: Vaidyanathan Srinivasan @ 2012-01-10 16:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Youquan Song, linux-kernel, mingo, tglx, hpa, akpm, stable,
	suresh.b.siddha, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

* Peter Zijlstra <a.p.zijlstra@chello.nl> [2012-01-09 12:05:56]:

> On Mon, 2012-01-09 at 19:14 -0500, Youquan Song wrote:
> > Fine, I will base your suggestion to develop another patch soon.
> > 
> 
> > 
> > @@ -3923,6 +3923,10 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
> >                                                 SCHED_POWER_SCALE);
> >         if (!sgs->group_capacity)
> >                 sgs->group_capacity = fix_small_capacity(sd, group);
> > +
> > +       if (sched_smt_power_savings)
> > +               sgs->group_capacity *= 2; 
> 
> Note, this has the hard-coded assumption you only have 2 threads per
> core, which while true for intel, isn't true in general. I think you
> meant to write *= group->group_weight or somesuch.
> 
> Also, you forgot to limit this to the SD_SHARE_CPUPOWER domain, you're
> now doubling the capacity for all domains.
> 
> Furthermore, have a look at the SD_PREFER_SIBLING logic and make sure
> you're not fighting that.

Hi Peter,

I think I had proposed the following fix earlier.  Can we revisit this
now?  This works, but it is not the best solution.

    sched: Fix group_capacity for sched_smt_powersavings
    
    sched_smt_powersavings on threaded systems needs this fix for
    consolidation onto sibling threads to work.  Since threads have
    fractional capacity, group_capacity will always turn out to be one
    and will not accommodate another task on the sibling thread.
    
    This fix makes group_capacity a function of cpumask_weight that
    will enable the power saving load balancer to pack tasks among
    sibling threads and keep more cores idle.
    
    Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8e42de9..77ac4ca 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4010,6 +4010,21 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
 		 */
 		if (prefer_sibling && !local_group && sds->this_has_capacity)
 			sgs.group_capacity = min(sgs.group_capacity, 1UL);
+		/*
+		 * If power savings balance is set at this domain, then
+		 * make capacity equal to number of hardware threads to
+		 * accommodate more tasks until capacity is reached.
+		 */
+		else if (sd->flags & SD_POWERSAVINGS_BALANCE)
+			sgs.group_capacity =
+				cpumask_weight(sched_group_cpus(sg));
+
+			/*
+			 * The default group_capacity is rounded from sum of
+			 * fractional cpu_powers of sibling hardware threads
+			 * in order to enable fair use of available hardware
+			 * resources.
+			 */
 
 		if (local_group) {
 			sds->this_load = sgs.avg_load;
@@ -4432,7 +4447,8 @@ static int need_active_balance(struct sched_domain *sd, int idle,
 		 * move_tasks() will succeed.  ld_moved will be true and this
 		 * active balance code will not be triggered.
 		 */
-		if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
+		if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP &&
+		    sched_smt_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
 			return 0;
 	}
 


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10 14:32             ` Arjan van de Ven
  2012-01-10 14:41               ` Peter Zijlstra
  2012-01-10 15:32                 ` Vincent Guittot
@ 2012-01-10 16:49               ` Vaidyanathan Srinivasan
  2012-01-10 19:41               ` Ingo Molnar
  3 siblings, 0 replies; 39+ messages in thread
From: Vaidyanathan Srinivasan @ 2012-01-10 16:49 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Ingo Molnar, Suresh Siddha, Youquan Song, Peter Zijlstra,
	linux-kernel, tglx, hpa, akpm, stable, len.brown, anhua.xu,
	chaohong.guo, Youquan Song

* Arjan van de Ven <arjan@linux.intel.com> [2012-01-10 06:32:16]:

> On 1/10/2012 1:18 AM, Ingo Molnar wrote:
> > 
> > * Suresh Siddha <suresh.b.siddha@intel.com> wrote:
> > 
> >> On Tue, 2012-01-10 at 00:58 -0500, Youquan Song wrote:
> >>> Thanks Peter! Here is the patch. 
> >>
> >> Youquan, As far as I know both the
> >> sched_smt_power_savings/sched_mc_power_savings are broken for at least a
> >> year.
> > 
> > We want a single knob, sched_power_savings - with the mc_ and 
> > smt_ ones still kept and aliased to sched_power_savings, for 
> > compatibility reasons.
> > 
> > As Peter said, the other reasonable option is to have no knob at 
> > all and restart this code from scratch.
> > 
> > The other thing we should do is to add sane defaults: to turn on 
> > sched_power_savings *AUTOMATICALLY* when a system is obviously 
> > battery driven and turn it off when the system is obviously AC 
> > driven. User-space can still implement policy and override the 
> > kernel's default, but there's absolutely no excuse to not offer 
> > this default ourselves.
> 
> a very good default would be to keep all tasks on one package until half
> the cores in the package are busy, and then start spreading out.
> 
> I suspect that'll be the 90% case coverage.

This is a good default, but I guess we will not consolidate tasks onto
SMT threads by default, since that would be a larger performance drop.

--Vaidy


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10 16:54           ` Youquan Song
@ 2012-01-10 16:51             ` Vaidyanathan Srinivasan
  2012-01-10 19:01               ` Suresh Siddha
  0 siblings, 1 reply; 39+ messages in thread
From: Vaidyanathan Srinivasan @ 2012-01-10 16:51 UTC (permalink / raw)
  To: Youquan Song
  Cc: Suresh Siddha, Peter Zijlstra, linux-kernel, mingo, tglx, hpa,
	akpm, stable, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

* Youquan Song <youquan.song@intel.com> [2012-01-10 11:54:26]:

> 
> > Youquan, As far as I know both the
> > sched_smt_power_savings/sched_mc_power_savings are broken for at least a
> > year.
> > 
> > Have you checked the state of 'sched_mc_power_savings' to see if it is
> > working or gets addressed with your proposed patches?
> Thanks Suresh!
> 
> We have verified that sched_mc_power_savings works well in the mainline and
> -tip trees; only sched_smt_power_savings is broken.

Hi Suresh,

My testing also shows that sched_mc works fine and only sched_smt
suffers from the group capacity problem at the core level.

--Vaidy


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-09 23:52         ` Suresh Siddha
  2012-01-10  9:18           ` Ingo Molnar
@ 2012-01-10 16:54           ` Youquan Song
  2012-01-10 16:51             ` Vaidyanathan Srinivasan
  1 sibling, 1 reply; 39+ messages in thread
From: Youquan Song @ 2012-01-10 16:54 UTC (permalink / raw)
  To: Suresh Siddha
  Cc: Youquan Song, Peter Zijlstra, linux-kernel, mingo, tglx, hpa,
	akpm, stable, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

 
> Youquan, As far as I know both the
> sched_smt_power_savings/sched_mc_power_savings are broken for at least a
> year.
> 
> Have you checked the state of 'sched_mc_power_savings' to see if it is
> working or gets addressed with your proposed patches?
Thanks Suresh!

We have verified that sched_mc_power_savings works well in the mainline and
-tip trees; only sched_smt_power_savings is broken.

Sorry, my proposed patch from last night still needs to be refreshed. After
looking closely at the SD_PREFER_SIBLING logic, it may well fight that
logic, as Peter mentioned.

I will post an updated patch soon.

-Youquan 


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10 16:51             ` Vaidyanathan Srinivasan
@ 2012-01-10 19:01               ` Suresh Siddha
  2012-01-11  3:52                 ` Vaidyanathan Srinivasan
  0 siblings, 1 reply; 39+ messages in thread
From: Suresh Siddha @ 2012-01-10 19:01 UTC (permalink / raw)
  To: svaidy
  Cc: Youquan Song, Peter Zijlstra, linux-kernel, mingo, tglx, hpa,
	akpm, stable, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

On Tue, 2012-01-10 at 22:21 +0530, Vaidyanathan Srinivasan wrote:
> * Youquan Song <youquan.song@intel.com> [2012-01-10 11:54:26]:
> 
> > 
> > > Youquan, As far as I know both the
> > > sched_smt_power_savings/sched_mc_power_savings are broken for at least a
> > > year.
> > > 
> > > Have you checked the state of 'sched_mc_power_savings' to see if it is
> > > working or gets addressed with your proposed patches?
> > Thanks Suresh!
> > 
> > We have verified that sched_mc_power_savings works well in the mainline and
> > -tip trees; only sched_smt_power_savings is broken.
> 
> Hi Suresh,
> 
> My testing also shows that sched_mc works fine and only sched_smt
> suffers from the group capacity problem at the core level.
> 

No. I just checked and it doesn't work as intended.

On a 2-socket 8-core SMT system, I ran 8 CPU hoggers and just toggled
'sched_mc_power_savings' between 0 and 1 multiple times. Irrespective of
the value, those 8 hoggers are distributed evenly between the two sockets -
4 each - and they don't change their positions.

So I can bet it doesn't work.
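
(For reference, a minimal CPU hogger along these lines -- editor's
sketch, not the actual test program; build with "gcc -pthread" and pass
the thread count as the first argument:)

#include <pthread.h>
#include <stdlib.h>

static void *spin(void *arg)
{
	(void)arg;
	for (;;)
		;		/* burn CPU forever */
	return NULL;
}

int main(int argc, char **argv)
{
	int i, n = argc > 1 ? atoi(argv[1]) : 8;
	pthread_t t;

	for (i = 0; i < n - 1; i++)
		pthread_create(&t, NULL, spin, NULL);
	spin(NULL);		/* the main thread hogs the last CPU */
	return 0;
}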

thanks,
suresh


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10 14:32             ` Arjan van de Ven
                                 ` (2 preceding siblings ...)
  2012-01-10 16:49               ` Vaidyanathan Srinivasan
@ 2012-01-10 19:41               ` Ingo Molnar
  2012-01-10 19:44                 ` Ingo Molnar
  3 siblings, 1 reply; 39+ messages in thread
From: Ingo Molnar @ 2012-01-10 19:41 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Suresh Siddha, Youquan Song, Peter Zijlstra, linux-kernel, tglx,
	hpa, akpm, stable, len.brown, anhua.xu, chaohong.guo,
	Youquan Song


* Arjan van de Ven <arjan@linux.intel.com> wrote:

> On 1/10/2012 1:18 AM, Ingo Molnar wrote:
> > 
> > * Suresh Siddha <suresh.b.siddha@intel.com> wrote:
> > 
> >> On Tue, 2012-01-10 at 00:58 -0500, Youquan Song wrote:
> >>> Thanks Peter! Here is the patch. 
> >>
> >> Youquan, As far as I know both the
> >> sched_smt_power_savings/sched_mc_power_savings are broken for at least a
> >> year.
> > 
> > We want a single knob, sched_power_savings - with the mc_ and 
> > smt_ ones still kept and aliased to sched_power_savings, for 
> > compatibility reasons.
> > 
> > As Peter said, the other reasonable option is to have no knob at 
> > all and restart this code from scratch.
> > 
> > The other thing we should do is to add sane defaults: to turn on 
> > sched_power_savings *AUTOMATICALLY* when a system is obviously 
> > battery driven and turn it off when the system is obviously AC 
> > driven. User-space can still implement policy and override the 
> > kernel's default, but there's absolutely no excuse to not offer 
> > this default ourselves.
> 
> a very good default would be to keep all tasks on one package 
> until half the cores in the package are busy, and then start 
> spreading out.
> 
> I suspect that'll be the 90% case coverage.

Maybe - but there's no reason to connect all the dots within the 
kernel and actually *discover* and use the very, very likely
performance preference of the hardware in question.

Like a good resource management system (==kernel) should do.

We can do that with a 99% confidence factor or so - maybe better 
- and leave all the weird cases that the kernel cannot (or 
should not) know about to 'user space policy' knobs.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10 19:41               ` Ingo Molnar
@ 2012-01-10 19:44                 ` Ingo Molnar
  0 siblings, 0 replies; 39+ messages in thread
From: Ingo Molnar @ 2012-01-10 19:44 UTC (permalink / raw)
  To: Arjan van de Ven
  Cc: Suresh Siddha, Youquan Song, Peter Zijlstra, linux-kernel, tglx,
	hpa, akpm, stable, len.brown, anhua.xu, chaohong.guo,
	Youquan Song


* Ingo Molnar <mingo@elte.hu> wrote:

> > a very good default would be to keep all tasks on one 
> > package until half the cores in the package are busy, and 
> > then start spreading out.
> > 
> > I suspect that'll be the 90% case coverage.
> 
> Maybe - but there's no reason to connect all the dots within 
                                  ^not
> the kernel and actually *discover* and use the very, very 
> likely performance preference of the hardware in question.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-10 19:01               ` Suresh Siddha
@ 2012-01-11  3:52                 ` Vaidyanathan Srinivasan
  2012-01-11 17:37                   ` Youquan Song
  0 siblings, 1 reply; 39+ messages in thread
From: Vaidyanathan Srinivasan @ 2012-01-11  3:52 UTC (permalink / raw)
  To: Suresh Siddha
  Cc: Youquan Song, Peter Zijlstra, linux-kernel, mingo, tglx, hpa,
	akpm, stable, arjan, len.brown, anhua.xu, chaohong.guo,
	Youquan Song

* Suresh Siddha <suresh.b.siddha@intel.com> [2012-01-10 11:01:13]:

> On Tue, 2012-01-10 at 22:21 +0530, Vaidyanathan Srinivasan wrote:
> > * Youquan Song <youquan.song@intel.com> [2012-01-10 11:54:26]:
> > 
> > > 
> > > > Youquan, As far as I know both the
> > > > sched_smt_power_savings/sched_mc_power_savings are broken for at least a
> > > > year.
> > > > 
> > > > Have you checked the state of 'sched_mc_power_savings' to see if it is
> > > > working or gets addressed with your proposed patches?
> > > Thanks Suresh!
> > > 
> > > We have verified that sched_mc_power_savings works well in the mainline and
> > > -tip trees; only sched_smt_power_savings is broken.
> > 
> > Hi Suresh,
> > 
> > My testing also shows that sched_mc works fine and only sched_smt
> > suffers from the group capacity problem at the core level.
> > 
> 
> No. I just checked and it doesn't work as intended.
> 
> On a 2-socket 8-core SMT system, I ran 8 CPU hoggers and just toggled
> 'sched_mc_power_savings' between 0 and 1 multiple times. Irrespective of
> the value, those 8 hoggers are distributed evenly between the two sockets -
> 4 each - and they don't change their positions.
> 
> So I can bet it doesn't work.

Hi Suresh,

I had tested sched_mc_powersavings=2 on a dual-socket quad-core HT
Nehalem.  It worked as expected.  Let me check the
sched_mc_powersavings=1 case.  I will not be surprised if it is
broken.

--Vaidy


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
  2012-01-11  3:52                 ` Vaidyanathan Srinivasan
@ 2012-01-11 17:37                   ` Youquan Song
  0 siblings, 0 replies; 39+ messages in thread
From: Youquan Song @ 2012-01-11 17:37 UTC (permalink / raw)
  To: Vaidyanathan Srinivasan
  Cc: Suresh Siddha, Youquan Song, Peter Zijlstra, linux-kernel, mingo,
	tglx, hpa, akpm, stable, arjan, len.brown, anhua.xu,
	chaohong.guo, Youquan Song

> I had tested sched_mc_powersavings=2 on dual socket quad core HT
> nehalem.  It worked as expected.  Let me check the
> sched_mc_powersavings=1 case.  I will not be surprised if it is
> broken.
> 
 
In my opinion, when a CPU has more than 8 cores, sched_mc will of course
have issues. What's more, as the core count increases the problem becomes
more obvious if we follow the current CPU power capability calculation
logic.

If a CPU has 8 cores and HT is enabled, its computed power capability is
more than 9 cores' worth. In the current logic, an HT-enabled core has
cpu_power = 1178, and 1178 * 8 / 1024 > 9.2. So with sched_mc the
scheduler will try to schedule more than 8 threads to fill that
capability before using the other socket.
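
(A standalone illustration of that arithmetic -- editor's sketch, with
simplified versions of the kernel's DIV_ROUND_CLOSEST and
SCHED_POWER_SCALE inlined and the 8-core HT values hard-coded:)

#include <stdio.h>

#define SCHED_POWER_SCALE	1024UL
#define DIV_ROUND_CLOSEST(x, d)	(((x) + ((d) / 2)) / (d))

int main(void)
{
	unsigned long core_power = 1178;		/* one HT-enabled core */
	unsigned long group_power = 8 * core_power;	/* 9424 for the socket */

	/* 9424 / 1024 rounds to 9: the socket "fits" 9 tasks */
	printf("group_capacity = %lu\n",
	       DIV_ROUND_CLOSEST(group_power, SCHED_POWER_SCALE));
	return 0;
}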

So something like the following would be better. Of course, it is only
sample code and I have not tried it yet.

For MC:
+       if (sched_mc_power_savings)
+               sgs->group_capacity = group->group_weight/2;

For SMT:
+       if (sched_smt_power_savings && !(sd->flags & SD_SHARE_CPUPOWER))
+               sgs->group_capacity = group->group_weight;


Thanks
-Youquan

^ permalink raw reply	[flat|nested] 39+ messages in thread

* [tip:sched/core] sched: Remove stale power aware scheduling remnants and dysfunctional knobs
  2012-01-09 10:28   ` Peter Zijlstra
                       ` (2 preceding siblings ...)
  2012-01-09 14:13     ` Arjan van de Ven
@ 2012-05-18 10:19     ` tip-bot for Peter Zijlstra
  3 siblings, 0 replies; 39+ messages in thread
From: tip-bot for Peter Zijlstra @ 2012-05-18 10:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, torvalds, arjan, a.p.zijlstra, peterz,
	akpm, suresh.b.siddha, vincent.guittot, tglx, svaidy

Commit-ID:  8e7fbcbc22c12414bcc9dfdd683637f58fb32759
Gitweb:     http://git.kernel.org/tip/8e7fbcbc22c12414bcc9dfdd683637f58fb32759
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Mon, 9 Jan 2012 11:28:35 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 17 May 2012 13:48:56 +0200

sched: Remove stale power aware scheduling remnants and dysfunctional knobs

It's been broken forever (i.e. it's not scheduling in a power
aware fashion), as reported by Suresh and others sending
patches, and nobody cares enough to fix it properly ...
so remove it to make space free for something better.

There's various problems with the code as it stands today, first
and foremost the user interface which is bound to topology
levels and has multiple values per level. This results in a
state explosion which the administrator or distro needs to
master and almost nobody does.

Furthermore large configuration state spaces aren't good, it
means the thing doesn't just work right because it's either
under so many impossible-to-meet constraints, or even if
there's an achievable state workloads have to be aware of
it precisely and can never meet it for dynamic workloads.

So pushing this kind of decision to user-space was a bad idea
even with a single knob - it's exponentially worse with knobs
on every node of the topology.

There is a proposal to replace the user interface with a single
3 state knob:

 sched_balance_policy := { performance, power, auto }

where 'auto' would be the preferred default which looks at things
like Battery/AC mode and possible cpufreq state or whatever the hw
exposes to show us power use expectations - but there's been no
progress on it in the past many months.

Aside from that, the actual implementation of the various knobs
is known to be broken. There have been sporadic attempts at
fixing things but these always stop short of reaching a mergable
state.

Therefore this wholesale removal with the hopes of spurring
people who care to come forward once again and work on a
coherent replacement.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1326104915.2442.53.camel@twins
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 Documentation/ABI/testing/sysfs-devices-system-cpu |   25 --
 Documentation/scheduler/sched-domains.txt          |    4 -
 arch/x86/kernel/smpboot.c                          |    3 +-
 drivers/base/cpu.c                                 |    4 -
 include/linux/cpu.h                                |    2 -
 include/linux/sched.h                              |   47 ----
 include/linux/topology.h                           |    5 -
 kernel/sched/core.c                                |   94 -------
 kernel/sched/fair.c                                |  275 +-------------------
 tools/power/cpupower/man/cpupower-set.1            |    9 -
 tools/power/cpupower/utils/helpers/sysfs.c         |   35 +---
 11 files changed, 5 insertions(+), 498 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index e7be75b..5dab364 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -9,31 +9,6 @@ Description:
 
 		/sys/devices/system/cpu/cpu#/
 
-What:		/sys/devices/system/cpu/sched_mc_power_savings
-		/sys/devices/system/cpu/sched_smt_power_savings
-Date:		June 2006
-Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
-Description:	Discover and adjust the kernel's multi-core scheduler support.
-
-		Possible values are:
-
-		0 - No power saving load balance (default value)
-		1 - Fill one thread/core/package first for long running threads
-		2 - Also bias task wakeups to semi-idle cpu package for power
-		    savings
-
-		sched_mc_power_savings is dependent upon SCHED_MC, which is
-		itself architecture dependent.
-
-		sched_smt_power_savings is dependent upon SCHED_SMT, which
-		is itself architecture dependent.
-
-		The two files are independent of each other. It is possible
-		that one file may be present without the other.
-
-		Introduced by git commit 5c45bf27.
-
-
 What:		/sys/devices/system/cpu/kernel_max
 		/sys/devices/system/cpu/offline
 		/sys/devices/system/cpu/online
diff --git a/Documentation/scheduler/sched-domains.txt b/Documentation/scheduler/sched-domains.txt
index b7ee379..443f0c7 100644
--- a/Documentation/scheduler/sched-domains.txt
+++ b/Documentation/scheduler/sched-domains.txt
@@ -61,10 +61,6 @@ The implementor should read comments in include/linux/sched.h:
 struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
 the specifics and what to tune.
 
-For SMT, the architecture must define CONFIG_SCHED_SMT and provide a
-cpumask_t cpu_sibling_map[NR_CPUS], where cpu_sibling_map[i] is the mask of
-all "i"'s siblings as well as "i" itself.
-
 Architectures may retain the regular override the default SD_*_INIT flags
 while using the generic domain builder in kernel/sched.c if they wish to
 retain the traditional SMT->SMP->NUMA topology (or some subset of that). This
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index e84c1bb..256c20c 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -429,8 +429,7 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	 * For perf, we return last level cache shared map.
 	 * And for power savings, we return cpu_core_map
 	 */
-	if ((sched_mc_power_savings || sched_smt_power_savings) &&
-	    !(cpu_has(c, X86_FEATURE_AMD_DCM)))
+	if (!(cpu_has(c, X86_FEATURE_AMD_DCM)))
 		return cpu_core_mask(cpu);
 	else
 		return cpu_llc_shared_mask(cpu);
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index adf937b..6345294 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -330,8 +330,4 @@ void __init cpu_dev_init(void)
 		panic("Failed to register CPU subsystem");
 
 	cpu_dev_register_generic();
-
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-	sched_create_sysfs_power_savings_entries(cpu_subsys.dev_root);
-#endif
 }
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index ee28844..7230bb5 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -36,8 +36,6 @@ extern void cpu_remove_dev_attr(struct device_attribute *attr);
 extern int cpu_add_dev_attr_group(struct attribute_group *attrs);
 extern void cpu_remove_dev_attr_group(struct attribute_group *attrs);
 
-extern int sched_create_sysfs_power_savings_entries(struct device *dev);
-
 #ifdef CONFIG_HOTPLUG_CPU
 extern void unregister_cpu(struct cpu *cpu);
 extern ssize_t arch_cpu_probe(const char *, size_t);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4a559bf..3d64480 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -855,61 +855,14 @@ enum cpu_idle_type {
 #define SD_WAKE_AFFINE		0x0020	/* Wake task to waking CPU */
 #define SD_PREFER_LOCAL		0x0040  /* Prefer to keep tasks local to this domain */
 #define SD_SHARE_CPUPOWER	0x0080	/* Domain members share cpu power */
-#define SD_POWERSAVINGS_BALANCE	0x0100	/* Balance for power savings */
 #define SD_SHARE_PKG_RESOURCES	0x0200	/* Domain members share cpu pkg resources */
 #define SD_SERIALIZE		0x0400	/* Only a single load balancing instance */
 #define SD_ASYM_PACKING		0x0800  /* Place busy groups earlier in the domain */
 #define SD_PREFER_SIBLING	0x1000	/* Prefer to place tasks in a sibling domain */
 #define SD_OVERLAP		0x2000	/* sched_domains of this level overlap */
 
-enum powersavings_balance_level {
-	POWERSAVINGS_BALANCE_NONE = 0,  /* No power saving load balance */
-	POWERSAVINGS_BALANCE_BASIC,	/* Fill one thread/core/package
-					 * first for long running threads
-					 */
-	POWERSAVINGS_BALANCE_WAKEUP,	/* Also bias task wakeups to semi-idle
-					 * cpu package for power savings
-					 */
-	MAX_POWERSAVINGS_BALANCE_LEVELS
-};
-
-extern int sched_mc_power_savings, sched_smt_power_savings;
-
-static inline int sd_balance_for_mc_power(void)
-{
-	if (sched_smt_power_savings)
-		return SD_POWERSAVINGS_BALANCE;
-
-	if (!sched_mc_power_savings)
-		return SD_PREFER_SIBLING;
-
-	return 0;
-}
-
-static inline int sd_balance_for_package_power(void)
-{
-	if (sched_mc_power_savings | sched_smt_power_savings)
-		return SD_POWERSAVINGS_BALANCE;
-
-	return SD_PREFER_SIBLING;
-}
-
 extern int __weak arch_sd_sibiling_asym_packing(void);
 
-/*
- * Optimise SD flags for power savings:
- * SD_BALANCE_NEWIDLE helps aggressive task consolidation and power savings.
- * Keep default SD flags if sched_{smt,mc}_power_saving=0
- */
-
-static inline int sd_power_saving_flags(void)
-{
-	if (sched_mc_power_savings | sched_smt_power_savings)
-		return SD_BALANCE_NEWIDLE;
-
-	return 0;
-}
-
 struct sched_group_power {
 	atomic_t ref;
 	/*
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 4f59bf3..09558d1 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -98,7 +98,6 @@ int arch_update_cpu_topology(void);
 				| 0*SD_BALANCE_WAKE			\
 				| 1*SD_WAKE_AFFINE			\
 				| 1*SD_SHARE_CPUPOWER			\
-				| 0*SD_POWERSAVINGS_BALANCE		\
 				| 1*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
 				| 0*SD_PREFER_SIBLING			\
@@ -134,8 +133,6 @@ int arch_update_cpu_topology(void);
 				| 0*SD_SHARE_CPUPOWER			\
 				| 1*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
-				| sd_balance_for_mc_power()		\
-				| sd_power_saving_flags()		\
 				,					\
 	.last_balance		= jiffies,				\
 	.balance_interval	= 1,					\
@@ -167,8 +164,6 @@ int arch_update_cpu_topology(void);
 				| 0*SD_SHARE_CPUPOWER			\
 				| 0*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
-				| sd_balance_for_package_power()	\
-				| sd_power_saving_flags()		\
 				,					\
 	.last_balance		= jiffies,				\
 	.balance_interval	= 1,					\
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bd314d7..24ca677 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5929,8 +5929,6 @@ static const struct cpumask *cpu_cpu_mask(int cpu)
 	return cpumask_of_node(cpu_to_node(cpu));
 }
 
-int sched_smt_power_savings = 0, sched_mc_power_savings = 0;
-
 struct sd_data {
 	struct sched_domain **__percpu sd;
 	struct sched_group **__percpu sg;
@@ -6322,7 +6320,6 @@ sd_numa_init(struct sched_domain_topology_level *tl, int cpu)
 					| 0*SD_WAKE_AFFINE
 					| 0*SD_PREFER_LOCAL
 					| 0*SD_SHARE_CPUPOWER
-					| 0*SD_POWERSAVINGS_BALANCE
 					| 0*SD_SHARE_PKG_RESOURCES
 					| 1*SD_SERIALIZE
 					| 0*SD_PREFER_SIBLING
@@ -6819,97 +6816,6 @@ match2:
 	mutex_unlock(&sched_domains_mutex);
 }
 
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-static void reinit_sched_domains(void)
-{
-	get_online_cpus();
-
-	/* Destroy domains first to force the rebuild */
-	partition_sched_domains(0, NULL, NULL);
-
-	rebuild_sched_domains();
-	put_online_cpus();
-}
-
-static ssize_t sched_power_savings_store(const char *buf, size_t count, int smt)
-{
-	unsigned int level = 0;
-
-	if (sscanf(buf, "%u", &level) != 1)
-		return -EINVAL;
-
-	/*
-	 * level is always be positive so don't check for
-	 * level < POWERSAVINGS_BALANCE_NONE which is 0
-	 * What happens on 0 or 1 byte write,
-	 * need to check for count as well?
-	 */
-
-	if (level >= MAX_POWERSAVINGS_BALANCE_LEVELS)
-		return -EINVAL;
-
-	if (smt)
-		sched_smt_power_savings = level;
-	else
-		sched_mc_power_savings = level;
-
-	reinit_sched_domains();
-
-	return count;
-}
-
-#ifdef CONFIG_SCHED_MC
-static ssize_t sched_mc_power_savings_show(struct device *dev,
-					   struct device_attribute *attr,
-					   char *buf)
-{
-	return sprintf(buf, "%u\n", sched_mc_power_savings);
-}
-static ssize_t sched_mc_power_savings_store(struct device *dev,
-					    struct device_attribute *attr,
-					    const char *buf, size_t count)
-{
-	return sched_power_savings_store(buf, count, 0);
-}
-static DEVICE_ATTR(sched_mc_power_savings, 0644,
-		   sched_mc_power_savings_show,
-		   sched_mc_power_savings_store);
-#endif
-
-#ifdef CONFIG_SCHED_SMT
-static ssize_t sched_smt_power_savings_show(struct device *dev,
-					    struct device_attribute *attr,
-					    char *buf)
-{
-	return sprintf(buf, "%u\n", sched_smt_power_savings);
-}
-static ssize_t sched_smt_power_savings_store(struct device *dev,
-					    struct device_attribute *attr,
-					     const char *buf, size_t count)
-{
-	return sched_power_savings_store(buf, count, 1);
-}
-static DEVICE_ATTR(sched_smt_power_savings, 0644,
-		   sched_smt_power_savings_show,
-		   sched_smt_power_savings_store);
-#endif
-
-int __init sched_create_sysfs_power_savings_entries(struct device *dev)
-{
-	int err = 0;
-
-#ifdef CONFIG_SCHED_SMT
-	if (smt_capable())
-		err = device_create_file(dev, &dev_attr_sched_smt_power_savings);
-#endif
-#ifdef CONFIG_SCHED_MC
-	if (!err && mc_capable())
-		err = device_create_file(dev, &dev_attr_sched_mc_power_savings);
-#endif
-	return err;
-}
-#endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
-
 /*
  * Update cpusets according to cpu_active mask.  If cpusets are
  * disabled, cpuset_update_active_cpus() becomes a simple wrapper
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0b42f44..940e6d1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2721,7 +2721,7 @@ select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
 		 * If power savings logic is enabled for a domain, see if we
 		 * are not overloaded, if so, don't balance wider.
 		 */
-		if (tmp->flags & (SD_POWERSAVINGS_BALANCE|SD_PREFER_LOCAL)) {
+		if (tmp->flags & (SD_PREFER_LOCAL)) {
 			unsigned long power = 0;
 			unsigned long nr_running = 0;
 			unsigned long capacity;
@@ -2734,9 +2734,6 @@ select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
 
 			capacity = DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE);
 
-			if (tmp->flags & SD_POWERSAVINGS_BALANCE)
-				nr_running /= 2;
-
 			if (nr_running < capacity)
 				want_sd = 0;
 		}
@@ -3435,14 +3432,6 @@ struct sd_lb_stats {
 	unsigned int  busiest_group_weight;
 
 	int group_imb; /* Is there imbalance in this sd */
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-	int power_savings_balance; /* Is powersave balance needed for this sd */
-	struct sched_group *group_min; /* Least loaded group in sd */
-	struct sched_group *group_leader; /* Group which relieves group_min */
-	unsigned long min_load_per_task; /* load_per_task in group_min */
-	unsigned long leader_nr_running; /* Nr running of group_leader */
-	unsigned long min_nr_running; /* Nr running of group_min */
-#endif
 };
 
 /*
@@ -3486,147 +3475,6 @@ static inline int get_sd_load_idx(struct sched_domain *sd,
 	return load_idx;
 }
 
-
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-/**
- * init_sd_power_savings_stats - Initialize power savings statistics for
- * the given sched_domain, during load balancing.
- *
- * @sd: Sched domain whose power-savings statistics are to be initialized.
- * @sds: Variable containing the statistics for sd.
- * @idle: Idle status of the CPU at which we're performing load-balancing.
- */
-static inline void init_sd_power_savings_stats(struct sched_domain *sd,
-	struct sd_lb_stats *sds, enum cpu_idle_type idle)
-{
-	/*
-	 * Busy processors will not participate in power savings
-	 * balance.
-	 */
-	if (idle == CPU_NOT_IDLE || !(sd->flags & SD_POWERSAVINGS_BALANCE))
-		sds->power_savings_balance = 0;
-	else {
-		sds->power_savings_balance = 1;
-		sds->min_nr_running = ULONG_MAX;
-		sds->leader_nr_running = 0;
-	}
-}
-
-/**
- * update_sd_power_savings_stats - Update the power saving stats for a
- * sched_domain while performing load balancing.
- *
- * @group: sched_group belonging to the sched_domain under consideration.
- * @sds: Variable containing the statistics of the sched_domain
- * @local_group: Does group contain the CPU for which we're performing
- * 		load balancing ?
- * @sgs: Variable containing the statistics of the group.
- */
-static inline void update_sd_power_savings_stats(struct sched_group *group,
-	struct sd_lb_stats *sds, int local_group, struct sg_lb_stats *sgs)
-{
-
-	if (!sds->power_savings_balance)
-		return;
-
-	/*
-	 * If the local group is idle or completely loaded
-	 * no need to do power savings balance at this domain
-	 */
-	if (local_group && (sds->this_nr_running >= sgs->group_capacity ||
-				!sds->this_nr_running))
-		sds->power_savings_balance = 0;
-
-	/*
-	 * If a group is already running at full capacity or idle,
-	 * don't include that group in power savings calculations
-	 */
-	if (!sds->power_savings_balance ||
-		sgs->sum_nr_running >= sgs->group_capacity ||
-		!sgs->sum_nr_running)
-		return;
-
-	/*
-	 * Calculate the group which has the least non-idle load.
-	 * This is the group from where we need to pick up the load
-	 * for saving power
-	 */
-	if ((sgs->sum_nr_running < sds->min_nr_running) ||
-	    (sgs->sum_nr_running == sds->min_nr_running &&
-	     group_first_cpu(group) > group_first_cpu(sds->group_min))) {
-		sds->group_min = group;
-		sds->min_nr_running = sgs->sum_nr_running;
-		sds->min_load_per_task = sgs->sum_weighted_load /
-						sgs->sum_nr_running;
-	}
-
-	/*
-	 * Calculate the group which is almost near its
-	 * capacity but still has some space to pick up some load
-	 * from other group and save more power
-	 */
-	if (sgs->sum_nr_running + 1 > sgs->group_capacity)
-		return;
-
-	if (sgs->sum_nr_running > sds->leader_nr_running ||
-	    (sgs->sum_nr_running == sds->leader_nr_running &&
-	     group_first_cpu(group) < group_first_cpu(sds->group_leader))) {
-		sds->group_leader = group;
-		sds->leader_nr_running = sgs->sum_nr_running;
-	}
-}
-
-/**
- * check_power_save_busiest_group - see if there is potential for some power-savings balance
- * @env: load balance environment
- * @sds: Variable containing the statistics of the sched_domain
- *	under consideration.
- *
- * Description:
- * Check if we have potential to perform some power-savings balance.
- * If yes, set the busiest group to be the least loaded group in the
- * sched_domain, so that it's CPUs can be put to idle.
- *
- * Returns 1 if there is potential to perform power-savings balance.
- * Else returns 0.
- */
-static inline
-int check_power_save_busiest_group(struct lb_env *env, struct sd_lb_stats *sds)
-{
-	if (!sds->power_savings_balance)
-		return 0;
-
-	if (sds->this != sds->group_leader ||
-			sds->group_leader == sds->group_min)
-		return 0;
-
-	env->imbalance = sds->min_load_per_task;
-	sds->busiest = sds->group_min;
-
-	return 1;
-
-}
-#else /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
-static inline void init_sd_power_savings_stats(struct sched_domain *sd,
-	struct sd_lb_stats *sds, enum cpu_idle_type idle)
-{
-	return;
-}
-
-static inline void update_sd_power_savings_stats(struct sched_group *group,
-	struct sd_lb_stats *sds, int local_group, struct sg_lb_stats *sgs)
-{
-	return;
-}
-
-static inline
-int check_power_save_busiest_group(struct lb_env *env, struct sd_lb_stats *sds)
-{
-	return 0;
-}
-#endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
-
-
 unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu)
 {
 	return SCHED_POWER_SCALE;
@@ -3932,7 +3780,6 @@ static inline void update_sd_lb_stats(struct lb_env *env,
 	if (child && child->flags & SD_PREFER_SIBLING)
 		prefer_sibling = 1;
 
-	init_sd_power_savings_stats(env->sd, sds, env->idle);
 	load_idx = get_sd_load_idx(env->sd, env->idle);
 
 	do {
@@ -3981,7 +3828,6 @@ static inline void update_sd_lb_stats(struct lb_env *env,
 			sds->group_imb = sgs.group_imb;
 		}
 
-		update_sd_power_savings_stats(sg, sds, local_group, &sgs);
 		sg = sg->next;
 	} while (sg != env->sd->groups);
 }
@@ -4276,12 +4122,6 @@ force_balance:
 	return sds.busiest;
 
 out_balanced:
-	/*
-	 * There is no obvious imbalance. But check if we can do some balancing
-	 * to save power.
-	 */
-	if (check_power_save_busiest_group(env, &sds))
-		return sds.busiest;
 ret:
 	env->imbalance = 0;
 	return NULL;
@@ -4359,28 +4199,6 @@ static int need_active_balance(struct lb_env *env)
 		 */
 		if ((sd->flags & SD_ASYM_PACKING) && env->src_cpu > env->dst_cpu)
 			return 1;
-
-		/*
-		 * The only task running in a non-idle cpu can be moved to this
-		 * cpu in an attempt to completely freeup the other CPU
-		 * package.
-		 *
-		 * The package power saving logic comes from
-		 * find_busiest_group(). If there are no imbalance, then
-		 * f_b_g() will return NULL. However when sched_mc={1,2} then
-		 * f_b_g() will select a group from which a running task may be
-		 * pulled to this cpu in order to make the other package idle.
-		 * If there is no opportunity to make a package idle and if
-		 * there are no imbalance, then f_b_g() will return NULL and no
-		 * action will be taken in load_balance_newidle().
-		 *
-		 * Under normal task pull operation due to imbalance, there
-		 * will be more than one task in the source run queue and
-		 * move_tasks() will succeed.  ld_moved will be true and this
-		 * active balance code will not be triggered.
-		 */
-		if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
-			return 0;
 	}
 
 	return unlikely(sd->nr_balance_failed > sd->cache_nice_tries+2);
@@ -4700,104 +4518,15 @@ static struct {
 	unsigned long next_balance;     /* in jiffy units */
 } nohz ____cacheline_aligned;
 
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-/**
- * lowest_flag_domain - Return lowest sched_domain containing flag.
- * @cpu:	The cpu whose lowest level of sched domain is to
- *		be returned.
- * @flag:	The flag to check for the lowest sched_domain
- *		for the given cpu.
- *
- * Returns the lowest sched_domain of a cpu which contains the given flag.
- */
-static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
-{
-	struct sched_domain *sd;
-
-	for_each_domain(cpu, sd)
-		if (sd->flags & flag)
-			break;
-
-	return sd;
-}
-
-/**
- * for_each_flag_domain - Iterates over sched_domains containing the flag.
- * @cpu:	The cpu whose domains we're iterating over.
- * @sd:		variable holding the value of the power_savings_sd
- *		for cpu.
- * @flag:	The flag to filter the sched_domains to be iterated.
- *
- * Iterates over all the scheduler domains for a given cpu that has the 'flag'
- * set, starting from the lowest sched_domain to the highest.
- */
-#define for_each_flag_domain(cpu, sd, flag) \
-	for (sd = lowest_flag_domain(cpu, flag); \
-		(sd && (sd->flags & flag)); sd = sd->parent)
-
-/**
- * find_new_ilb - Finds the optimum idle load balancer for nomination.
- * @cpu:	The cpu which is nominating a new idle_load_balancer.
- *
- * Returns:	Returns the id of the idle load balancer if it exists,
- *		Else, returns >= nr_cpu_ids.
- *
- * This algorithm picks the idle load balancer such that it belongs to a
- * semi-idle powersavings sched_domain. The idea is to try and avoid
- * completely idle packages/cores just for the purpose of idle load balancing
- * when there are other idle cpu's which are better suited for that job.
- */
-static int find_new_ilb(int cpu)
+static inline int find_new_ilb(int call_cpu)
 {
 	int ilb = cpumask_first(nohz.idle_cpus_mask);
-	struct sched_group *ilbg;
-	struct sched_domain *sd;
 
-	/*
-	 * Have idle load balancer selection from semi-idle packages only
-	 * when power-aware load balancing is enabled
-	 */
-	if (!(sched_smt_power_savings || sched_mc_power_savings))
-		goto out_done;
-
-	/*
-	 * Optimize for the case when we have no idle CPUs or only one
-	 * idle CPU. Don't walk the sched_domain hierarchy in such cases
-	 */
-	if (cpumask_weight(nohz.idle_cpus_mask) < 2)
-		goto out_done;
-
-	rcu_read_lock();
-	for_each_flag_domain(cpu, sd, SD_POWERSAVINGS_BALANCE) {
-		ilbg = sd->groups;
-
-		do {
-			if (ilbg->group_weight !=
-				atomic_read(&ilbg->sgp->nr_busy_cpus)) {
-				ilb = cpumask_first_and(nohz.idle_cpus_mask,
-							sched_group_cpus(ilbg));
-				goto unlock;
-			}
-
-			ilbg = ilbg->next;
-
-		} while (ilbg != sd->groups);
-	}
-unlock:
-	rcu_read_unlock();
-
-out_done:
 	if (ilb < nr_cpu_ids && idle_cpu(ilb))
 		return ilb;
 
 	return nr_cpu_ids;
 }
-#else /*  (CONFIG_SCHED_MC || CONFIG_SCHED_SMT) */
-static inline int find_new_ilb(int call_cpu)
-{
-	return nr_cpu_ids;
-}
-#endif
 
 /*
  * Kick a CPU to do the nohz balancing, if it is time for it. We pick the
diff --git a/tools/power/cpupower/man/cpupower-set.1 b/tools/power/cpupower/man/cpupower-set.1
index c4954a9..9dbd536 100644
--- a/tools/power/cpupower/man/cpupower-set.1
+++ b/tools/power/cpupower/man/cpupower-set.1
@@ -85,15 +85,6 @@ Possible values are:
 savings
 .RE
 
-sched_mc_power_savings is dependent upon SCHED_MC, which is
-itself architecture dependent.
-
-sched_smt_power_savings is dependent upon SCHED_SMT, which
-is itself architecture dependent.
-
-The two files are independent of each other. It is possible
-that one file may be present without the other.
-
 .SH "SEE ALSO"
 cpupower-info(1), cpupower-monitor(1), powertop(1)
 .PP
diff --git a/tools/power/cpupower/utils/helpers/sysfs.c b/tools/power/cpupower/utils/helpers/sysfs.c
index c634302..96e28c1 100644
--- a/tools/power/cpupower/utils/helpers/sysfs.c
+++ b/tools/power/cpupower/utils/helpers/sysfs.c
@@ -362,22 +362,7 @@ char *sysfs_get_cpuidle_driver(void)
  */
 int sysfs_get_sched(const char *smt_mc)
 {
-	unsigned long value;
-	char linebuf[MAX_LINE_LEN];
-	char *endp;
-	char path[SYSFS_PATH_MAX];
-
-	if (strcmp("mc", smt_mc) && strcmp("smt", smt_mc))
-		return -EINVAL;
-
-	snprintf(path, sizeof(path),
-		PATH_TO_CPU "sched_%s_power_savings", smt_mc);
-	if (sysfs_read_file(path, linebuf, MAX_LINE_LEN) == 0)
-		return -1;
-	value = strtoul(linebuf, &endp, 0);
-	if (endp == linebuf || errno == ERANGE)
-		return -1;
-	return value;
+	return -ENODEV;
 }
 
 /*
@@ -388,21 +373,5 @@ int sysfs_get_sched(const char *smt_mc)
  */
 int sysfs_set_sched(const char *smt_mc, int val)
 {
-	char linebuf[MAX_LINE_LEN];
-	char path[SYSFS_PATH_MAX];
-	struct stat statbuf;
-
-	if (strcmp("mc", smt_mc) && strcmp("smt", smt_mc))
-		return -EINVAL;
-
-	snprintf(path, sizeof(path),
-		PATH_TO_CPU "sched_%s_power_savings", smt_mc);
-	sprintf(linebuf, "%d", val);
-
-	if (stat(path, &statbuf) != 0)
-		return -ENODEV;
-
-	if (sysfs_write_file(path, linebuf, MAX_LINE_LEN) == 0)
-		return -1;
-	return 0;
+	return -ENODEV;
 }
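
For readers skimming the kernel/sched/fair.c hunks above, here is a condensed,
self-contained model of the pass being deleted: how the removed code chose
group_min (the least-loaded non-idle group, to be drained) and group_leader
(the fullest group that still has a free slot, to be filled). Only the
selection rules come from the removed code; the struct, the sample groups and
main() are hypothetical scaffolding for illustration.

/*
 * Condensed model of the removed power-savings balance pass.
 * Field names mirror the sg_lb_stats/sd_lb_stats members used in
 * the hunks above; everything else is illustrative scaffolding.
 */
#include <limits.h>
#include <stdio.h>

struct group {
	int first_cpu;              /* stand-in for group_first_cpu() */
	unsigned long nr_running;   /* sgs->sum_nr_running */
	unsigned long capacity;     /* sgs->group_capacity */
};

/*
 * group_min is the least-loaded non-idle group (the one to drain),
 * group_leader the most-loaded group that still has a free slot
 * (the one to fill). Ties break toward lower-numbered packages so
 * load consolidates at one end of the machine.
 */
static void pick_min_and_leader(struct group *g, int n,
				struct group **min, struct group **leader)
{
	unsigned long min_nr = ULONG_MAX, leader_nr = 0;
	int i;

	*min = *leader = NULL;
	for (i = 0; i < n; i++) {
		/* idle or fully loaded groups take no part */
		if (!g[i].nr_running || g[i].nr_running >= g[i].capacity)
			continue;

		if (g[i].nr_running < min_nr ||
		    (*min && g[i].nr_running == min_nr &&
		     g[i].first_cpu > (*min)->first_cpu)) {
			*min = &g[i];
			min_nr = g[i].nr_running;
		}

		/* the leader must keep room for the task it pulls */
		if (g[i].nr_running + 1 > g[i].capacity)
			continue;

		if (g[i].nr_running > leader_nr ||
		    (*leader && g[i].nr_running == leader_nr &&
		     g[i].first_cpu < (*leader)->first_cpu)) {
			*leader = &g[i];
			leader_nr = g[i].nr_running;
		}
	}
}

int main(void)
{
	/* two half-busy packages on a hypothetical 2-socket box */
	struct group g[] = {
		{ .first_cpu = 0, .nr_running = 6, .capacity = 16 },
		{ .first_cpu = 8, .nr_running = 5, .capacity = 16 },
	};
	struct group *min, *leader;

	pick_min_and_leader(g, 2, &min, &leader);
	if (min && leader && min != leader)
		printf("pull from package at cpu%d into package at cpu%d\n",
		       min->first_cpu, leader->first_cpu);
	return 0;
}

With the sample groups above it picks the package at cpu8 to drain and the
package at cpu0 to fill, which is exactly the consolidation behaviour the
knob was meant to produce before it broke.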

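On the userspace side, sysfs_get_sched() and sysfs_set_sched() in cpupower
are now permanent stubs. A minimal, hypothetical caller (only the two
declarations match the file above; the rest is illustrative and would need
to be linked against cpupower's helpers) sees the removal as a plain
-ENODEV:

#include <errno.h>
#include <stdio.h>

/* signatures as in tools/power/cpupower/utils/helpers/sysfs.c above */
extern int sysfs_get_sched(const char *smt_mc);
extern int sysfs_set_sched(const char *smt_mc, int val);

int main(void)
{
	/* both calls now fail unconditionally, whatever the argument */
	if (sysfs_get_sched("smt") == -ENODEV)
		fprintf(stderr, "sched_smt_power_savings: not supported\n");
	if (sysfs_set_sched("mc", 1) == -ENODEV)
		fprintf(stderr, "sched_mc_power_savings: not supported\n");
	return 0;
}
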
Thread overview: 39+ messages
2012-01-09  8:56 [PATCH] x86,sched: Fix sched_smt_power_savings totally broken Youquan Song
2012-01-09 10:06 ` Peter Zijlstra
2012-01-09 10:28   ` Peter Zijlstra
2012-01-09 10:30     ` Peter Zijlstra
2012-01-09 11:00     ` Vaidyanathan Srinivasan
2012-01-09 14:35       ` Peter Zijlstra
2012-01-09 16:03         ` Vaidyanathan Srinivasan
2012-01-09 16:13           ` Peter Zijlstra
2012-01-09 17:05             ` Vaidyanathan Srinivasan
2012-01-09 14:13     ` Arjan van de Ven
2012-05-18 10:19     ` [tip:sched/core] sched: Remove stale power aware scheduling remnants and dysfunctional knobs tip-bot for Peter Zijlstra
2012-01-10  0:14   ` [PATCH] x86,sched: Fix sched_smt_power_savings totally broken Youquan Song
2012-01-09 11:05     ` Peter Zijlstra
2012-01-10  5:58       ` Youquan Song
2012-01-09 23:52         ` Suresh Siddha
2012-01-10  9:18           ` Ingo Molnar
2012-01-10 14:32             ` Arjan van de Ven
2012-01-10 14:41               ` Peter Zijlstra
2012-01-10 14:54                 ` Arjan van de Ven
2012-01-10 15:32               ` Vincent Guittot
2012-01-10 15:32                 ` Vincent Guittot
2012-01-10 16:49               ` Vaidyanathan Srinivasan
2012-01-10 19:41               ` Ingo Molnar
2012-01-10 19:44                 ` Ingo Molnar
2012-01-10 16:54           ` Youquan Song
2012-01-10 16:51             ` Vaidyanathan Srinivasan
2012-01-10 19:01               ` Suresh Siddha
2012-01-11  3:52                 ` Vaidyanathan Srinivasan
2012-01-11 17:37                   ` Youquan Song
2012-01-10 16:44       ` Vaidyanathan Srinivasan
2012-01-09 11:12     ` Peter Zijlstra
2012-01-09 14:29       ` Vincent Guittot
2012-01-09 14:29         ` Vincent Guittot
2012-01-09 14:46         ` Peter Zijlstra
2012-01-10  2:12           ` Indan Zupancic
2012-01-10  9:26             ` Peter Zijlstra
2012-01-10  1:54         ` Suresh Siddha
2012-01-10  8:08           ` Vincent Guittot
2012-01-09 15:37 ` Greg KH
