* [PATCHv3 0/5] coupled cpuidle state support
@ 2012-04-30 20:09 ` Colin Cross
  0 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-04-30 20:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, linux-pm, Kevin Hilman, Len Brown,
	Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Colin Cross

On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
cpus cannot be independently powered down, either due to
sequencing restrictions (on Tegra 2, cpu 0 must be the last to
power down), or due to HW bugs (on OMAP4460, a cpu powering up
will corrupt the gic state unless the other cpu runs a
workaround).  Each cpu has a power state that it can enter without
coordinating with the other cpu (usually Wait For Interrupt, or
WFI), and one or more "coupled" power states that affect blocks
shared between the cpus (L2 cache, interrupt controller, and
sometimes the whole SoC).  Entering a coupled power state must
be tightly controlled on both cpus.

The easiest solution to implementing coupled cpu power states is
to hotplug all but one cpu whenever possible, usually using a
cpufreq governor that looks at cpu load to determine when to
enable the secondary cpus.  This causes problems, as hotplug is an
expensive operation, so the number of hotplug transitions must be
minimized, leading to very slow response to loads, often on the
order of seconds.

This patch series implements an alternative solution, where each
cpu will wait in the WFI state until all cpus are ready to enter
a coupled state, at which point the coupled state function will
be called on all cpus at approximately the same time.

Once all cpus are ready to enter idle, they are woken by an smp
cross call.  At this point, there is a chance that one of the
cpus will find work to do, and choose not to enter idle.  A
final pass is needed to guarantee that all cpus will call the
power state enter function at the same time.  During this pass,
each cpu will increment the ready counter, and continue once the
ready counter matches the number of online coupled cpus.  If any
cpu exits idle, the other cpus will decrement their counter and
retry.
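
In rough pseudo-C, the per-cpu flow described above looks
something like the sketch below.  This is only illustrative:
the real code in patch 3/5 also handles pokes, hotplug and
memory ordering, and all of the names here are made up.

	atomic_inc(&waiting_count);
retry:
	/* stage 1: wait in the safe state until all cpus are idle */
	while (!need_resched() &&
	       atomic_read(&waiting_count) != online_coupled_cpus)
		enter_safe_state();		/* e.g. WFI */

	if (need_resched()) {
		/* found work to do, abandon the coupled state */
		atomic_dec(&waiting_count);
		return;
	}

	/* stage 2: commit to the coupled state */
	atomic_inc(&ready_count);
	while (atomic_read(&ready_count) != online_coupled_cpus) {
		if (atomic_read(&waiting_count) != online_coupled_cpus) {
			/* another cpu bailed out, back off and retry */
			atomic_dec(&ready_count);
			goto retry;
		}
		cpu_relax();
	}

	/* every cpu reaches this point at approximately the same time */
	enter_coupled_state();

	atomic_dec(&waiting_count);
	atomic_dec(&ready_count);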

To use coupled cpuidle states, a cpuidle driver must:

   Set struct cpuidle_device.coupled_cpus to the mask of all
   coupled cpus, usually the same as cpu_possible_mask if all cpus
   are part of the same cluster.  The coupled_cpus mask must be
   set in the struct cpuidle_device for each cpu.

   Set struct cpuidle_device.safe_state_index to the index of a
   state that is not a coupled state.  This is usually WFI.

   Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
   state that affects multiple cpus.

   Provide a struct cpuidle_state.enter function for each state
   that affects multiple cpus.  This function is guaranteed to be
   called on all cpus at approximately the same time.  The driver
   should ensure that the cpus all abort together if any cpu tries
   to abort once the function is called.
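
For illustration only (not part of this series), a driver with
one WFI state and one coupled cluster-off state, following the
rules above, might look roughly like this.  All foo_* names are
invented, and error handling and latency/residency values are
omitted:

static int foo_enter_wfi(struct cpuidle_device *dev,
			 struct cpuidle_driver *drv, int index)
{
	cpu_do_idle();		/* plain WFI, no coordination needed */
	return index;
}

static int foo_enter_coupled(struct cpuidle_device *dev,
			     struct cpuidle_driver *drv, int index)
{
	/*
	 * Called on all coupled cpus at approximately the same time.
	 * foo_powerdown_cluster() must make the cpus abort together
	 * if any one of them cannot complete the transition.
	 */
	foo_powerdown_cluster(dev->cpu);
	return index;
}

static struct cpuidle_driver foo_idle_driver = {
	.name		= "foo_idle",
	.owner		= THIS_MODULE,
	.state_count	= 2,
	.states = {
		[0] = {
			.name	= "WFI",
			.flags	= CPUIDLE_FLAG_TIME_VALID,
			.enter	= foo_enter_wfi,
		},
		[1] = {
			.name	= "C-CLUSTER",
			.flags	= CPUIDLE_FLAG_TIME_VALID |
				  CPUIDLE_FLAG_COUPLED,
			.enter	= foo_enter_coupled,
		},
	},
};

static DEFINE_PER_CPU(struct cpuidle_device, foo_idle_dev);

static int __init foo_cpuidle_init(void)
{
	int cpu;

	cpuidle_register_driver(&foo_idle_driver);

	for_each_possible_cpu(cpu) {
		struct cpuidle_device *dev = &per_cpu(foo_idle_dev, cpu);

		dev->cpu = cpu;
		dev->state_count = 2;
		/* all cpus are in one cluster, so couple all of them */
		cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
		dev->safe_state_index = 0;	/* WFI, not a coupled state */
		cpuidle_register_device(dev);
	}

	return 0;
}
device_initcall(foo_cpuidle_init);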

This series has been tested by implementing a test cpuidle state
that uses the parallel barrier helper function to verify that
all cpus call the function at the same time.
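
Roughly, such a test state can be as simple as the sketch below.
The helper's signature is assumed here to be
cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev,
atomic_t *a); the foo_* name and the trace output are only
illustrative:

static int foo_enter_coupled_test(struct cpuidle_device *dev,
				  struct cpuidle_driver *drv, int index)
{
	static atomic_t sync;

	/* no cpu continues past this point until all coupled cpus arrive */
	cpuidle_coupled_parallel_barrier(dev, &sync);

	trace_printk("cpu %d in coupled test state at %lld ns\n",
		     dev->cpu, ktime_to_ns(ktime_get()));

	return index;
}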

This patch set has a few disadvantages over the hotplug governor,
but I think they are all fairly minor:
   * Worst-case interrupt latency can be increased.  If one cpu
     receives an interrupt while the other is spinning in the
     ready_count loop, the second cpu will be stuck with
     interrupts off until the first cpu finishes processing
     its interrupt and exits idle.  This will increase the
     worst-case interrupt latency by the worst-case interrupt
     processing time, but should be very rare.
   * Interrupts are processed while still inside pm_idle.
     Normally, interrupts are only processed at the very end of
     pm_idle, just before it returns to the idle loop.  Coupled
     states require processing interrupts inside
     cpuidle_enter_state_coupled in order to distinguish between
     the smp_cross_call from another cpu that is now idle and an
     interrupt that should cause idle to exit.
     I don't see a way to fix this without either being able to
     read the next pending irq from the interrupt chip, or
     querying the irq core for which interrupts were processed.
   * Since interrupts are processed inside cpuidle, the next
     timer event could change.  The new timer event will be
     handled correctly, but the idle state decision made by
     the governor will be out of date, and will not be revisited.
     The governor select function could be called again every time,
     but this could lead to a lot of work being done by an idle
     cpu if the other cpu was mostly busy.

v2:
   * removed the coupled lock, replacing it with atomic counters
   * added a check for outstanding pokes before beginning the
     final transition to avoid extra wakeups
   * made the cpuidle_coupled struct completely private
   * fixed kerneldoc comment formatting
   * added a patch with a helper function for resynchronizing
     cpus after aborting idle
   * added a patch (not for merging) to add trace events for
     verification and performance testing

v3:
   * rebased on v3.4-rc4 by Santosh
   * fixed decrement in cpuidle_coupled_cpu_set_alive
   * updated tracing patch to remove unnecessary debugging so
     it can be merged
   * made tracing _rcuidle

This series has been tested and reviewed by Santosh and Kevin
for OMAP4, which has a cpuidle series ready for 3.5; Tegra
and Exynos5 patches are in progress.  I think this is ready to
go in.  Len, are you maintaining a cpuidle tree for linux-next?
If not, I can publish a tree for linux-next, or this could go in
through Arnd's tree.

* [PATCHv3 1/5] cpuidle: refactor out cpuidle_enter_state
  2012-04-30 20:09 ` Colin Cross
@ 2012-04-30 20:09   ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-04-30 20:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, linux-pm, Kevin Hilman, Len Brown,
	Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Colin Cross

Split the code to enter a state and update the stats into a helper
function, cpuidle_enter_state, and export it.  This function will
be called by the coupled state code to handle entering the safe
state and the final coupled state.

Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Colin Cross <ccross@android.com>
---
 drivers/cpuidle/cpuidle.c |   42 +++++++++++++++++++++++++++++-------------
 drivers/cpuidle/cpuidle.h |    2 ++
 2 files changed, 31 insertions(+), 13 deletions(-)

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 2f0083a..3e3e3e4 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -103,6 +103,34 @@ int cpuidle_play_dead(void)
 }
 
 /**
+ * cpuidle_enter_state - enter the state and update stats
+ * @dev: cpuidle device for this cpu
+ * @drv: cpuidle driver for this cpu
+ * @next_state: index into drv->states of the state to enter
+ */
+int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
+		int next_state)
+{
+	int entered_state;
+
+	entered_state = cpuidle_enter_ops(dev, drv, next_state);
+
+	if (entered_state >= 0) {
+		/* Update cpuidle counters */
+		/* This can be moved to within driver enter routine
+		 * but that results in multiple copies of same code.
+		 */
+		dev->states_usage[entered_state].time +=
+				(unsigned long long)dev->last_residency;
+		dev->states_usage[entered_state].usage++;
+	} else {
+		dev->last_residency = 0;
+	}
+
+	return entered_state;
+}
+
+/**
  * cpuidle_idle_call - the main idle loop
  *
  * NOTE: no locks or semaphores should be used here
@@ -143,23 +171,11 @@ int cpuidle_idle_call(void)
 	trace_power_start_rcuidle(POWER_CSTATE, next_state, dev->cpu);
 	trace_cpu_idle_rcuidle(next_state, dev->cpu);
 
-	entered_state = cpuidle_enter_ops(dev, drv, next_state);
+	entered_state = cpuidle_enter_state(dev, drv, next_state);
 
 	trace_power_end_rcuidle(dev->cpu);
 	trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
 
-	if (entered_state >= 0) {
-		/* Update cpuidle counters */
-		/* This can be moved to within driver enter routine
-		 * but that results in multiple copies of same code.
-		 */
-		dev->states_usage[entered_state].time +=
-				(unsigned long long)dev->last_residency;
-		dev->states_usage[entered_state].usage++;
-	} else {
-		dev->last_residency = 0;
-	}
-
 	/* give the governor an opportunity to reflect on the outcome */
 	if (cpuidle_curr_governor->reflect)
 		cpuidle_curr_governor->reflect(dev, entered_state);
diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
index 7db1866..d8a3ccc 100644
--- a/drivers/cpuidle/cpuidle.h
+++ b/drivers/cpuidle/cpuidle.h
@@ -14,6 +14,8 @@
 extern struct mutex cpuidle_lock;
 extern spinlock_t cpuidle_driver_lock;
 extern int cpuidle_disabled(void);
+extern int cpuidle_enter_state(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int next_state);
 
 /* idle loop */
 extern void cpuidle_install_idle_handler(void);
-- 
1.7.7.3


* [PATCHv3 2/5] cpuidle: fix error handling in __cpuidle_register_device
  2012-04-30 20:09 ` Colin Cross
  (?)
@ 2012-04-30 20:09   ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-04-30 20:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, linux-pm, Kevin Hilman, Len Brown,
	Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Colin Cross

Fix the error handling in __cpuidle_register_device to include
the missing list_del.  Move it to a label, which will simplify
the error handling when coupled states are added.

Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Colin Cross <ccross@android.com>
---
 drivers/cpuidle/cpuidle.c |   13 +++++++++----
 1 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 3e3e3e4..4540672 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -403,13 +403,18 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
 
 	per_cpu(cpuidle_devices, dev->cpu) = dev;
 	list_add(&dev->device_list, &cpuidle_detected_devices);
-	if ((ret = cpuidle_add_sysfs(cpu_dev))) {
-		module_put(cpuidle_driver->owner);
-		return ret;
-	}
+	ret = cpuidle_add_sysfs(cpu_dev);
+	if (ret)
+		goto err_sysfs;
 
 	dev->registered = 1;
 	return 0;
+
+err_sysfs:
+	list_del(&dev->device_list);
+	per_cpu(cpuidle_devices, dev->cpu) = NULL;
+	module_put(cpuidle_driver->owner);
+	return ret;
 }
 
 /**
-- 
1.7.7.3


* [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus
  2012-04-30 20:09 ` Colin Cross
  (?)
@ 2012-04-30 20:09   ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-04-30 20:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, linux-pm, Kevin Hilman, Len Brown,
	Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Colin Cross

On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
cpus cannot be independently powered down, either due to
sequencing restrictions (on Tegra 2, cpu 0 must be the last to
power down), or due to HW bugs (on OMAP4460, a cpu powering up
will corrupt the gic state unless the other cpu runs a
workaround).  Each cpu has a power state that it can enter without
coordinating with the other cpu (usually Wait For Interrupt, or
WFI), and one or more "coupled" power states that affect blocks
shared between the cpus (L2 cache, interrupt controller, and
sometimes the whole SoC).  Entering a coupled power state must
be tightly controlled on both cpus.

The easiest solution to implementing coupled cpu power states is
to hotplug all but one cpu whenever possible, usually using a
cpufreq governor that looks at cpu load to determine when to
enable the secondary cpus.  This causes problems, as hotplug is an
expensive operation, so the number of hotplug transitions must be
minimized, leading to very slow response to loads, often on the
order of seconds.

This file implements an alternative solution, where each cpu will
wait in the WFI state until all cpus are ready to enter a coupled
state, at which point the coupled state function will be called
on all cpus at approximately the same time.

Once all cpus are ready to enter idle, they are woken by an smp
cross call.  At this point, there is a chance that one of the
cpus will find work to do, and choose not to enter idle.  A
final pass is needed to guarantee that all cpus will call the
power state enter function at the same time.  During this pass,
each cpu will increment the ready counter, and continue once the
ready counter matches the number of online coupled cpus.  If any
cpu exits idle, the other cpus will decrement their counter and
retry.

To use coupled cpuidle states, a cpuidle driver must:

   Set struct cpuidle_device.coupled_cpus to the mask of all
   coupled cpus, usually the same as cpu_possible_mask if all cpus
   are part of the same cluster.  The coupled_cpus mask must be
   set in the struct cpuidle_device for each cpu.

   Set struct cpuidle_device.safe_state_index to the index of a
   state that is not a coupled state.  This is usually WFI.

   Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
   state that affects multiple cpus.

   Provide a struct cpuidle_state.enter function for each state
   that affects multiple cpus.  This function is guaranteed to be
   called on all cpus at approximately the same time.  The driver
   should ensure that the cpus all abort together if any cpu tries
   to abort once the function is called.

Cc: Len Brown <len.brown@intel.com>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Trinabh Gupta <g.trinabh@gmail.com>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Colin Cross <ccross@android.com>
---
 drivers/cpuidle/Kconfig   |    3 +
 drivers/cpuidle/Makefile  |    1 +
 drivers/cpuidle/coupled.c |  571 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/cpuidle/cpuidle.c |   15 ++-
 drivers/cpuidle/cpuidle.h |   30 +++
 include/linux/cpuidle.h   |    7 +
 6 files changed, 626 insertions(+), 1 deletions(-)
 create mode 100644 drivers/cpuidle/coupled.c

v2:
   * removed the coupled lock, replacing it with atomic counters
   * added a check for outstanding pokes before beginning the
     final transition to avoid extra wakeups
   * made the cpuidle_coupled struct completely private
   * fixed kerneldoc comment formatting

v3:
   * fixed decrement in cpuidle_coupled_cpu_set_alive
   * added kerneldoc annotation to the description

diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
index 78a666d..a76b689 100644
--- a/drivers/cpuidle/Kconfig
+++ b/drivers/cpuidle/Kconfig
@@ -18,3 +18,6 @@ config CPU_IDLE_GOV_MENU
 	bool
 	depends on CPU_IDLE && NO_HZ
 	default y
+
+config ARCH_NEEDS_CPU_IDLE_COUPLED
+	def_bool n
diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
index 5634f88..38c8f69 100644
--- a/drivers/cpuidle/Makefile
+++ b/drivers/cpuidle/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
+obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
new file mode 100644
index 0000000..d097826
--- /dev/null
+++ b/drivers/cpuidle/coupled.c
@@ -0,0 +1,571 @@
+/*
+ * coupled.c - helper functions to enter the same idle state on multiple cpus
+ *
+ * Copyright (c) 2011 Google, Inc.
+ *
+ * Author: Colin Cross <ccross@android.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/cpu.h>
+#include <linux/cpuidle.h>
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "cpuidle.h"
+
+/**
+ * DOC: Coupled cpuidle states
+ *
+ * On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
+ * cpus cannot be independently powered down, either due to
+ * sequencing restrictions (on Tegra 2, cpu 0 must be the last to
+ * power down), or due to HW bugs (on OMAP4460, a cpu powering up
+ * will corrupt the gic state unless the other cpu runs a
+ * workaround).  Each cpu has a power state that it can enter without
+ * coordinating with the other cpu (usually Wait For Interrupt, or
+ * WFI), and one or more "coupled" power states that affect blocks
+ * shared between the cpus (L2 cache, interrupt controller, and
+ * sometimes the whole SoC).  Entering a coupled power state must
+ * be tightly controlled on both cpus.
+ *
+ * The easiest solution to implementing coupled cpu power states is
+ * to hotplug all but one cpu whenever possible, usually using a
+ * cpufreq governor that looks at cpu load to determine when to
+ * enable the secondary cpus.  This causes problems, as hotplug is an
+ * expensive operation, so the number of hotplug transitions must be
+ * minimized, leading to very slow response to loads, often on the
+ * order of seconds.
+ *
+ * This file implements an alternative solution, where each cpu will
+ * wait in the WFI state until all cpus are ready to enter a coupled
+ * state, at which point the coupled state function will be called
+ * on all cpus at approximately the same time.
+ *
+ * Once all cpus are ready to enter idle, they are woken by an smp
+ * cross call.  At this point, there is a chance that one of the
+ * cpus will find work to do, and choose not to enter idle.  A
+ * final pass is needed to guarantee that all cpus will call the
+ * power state enter function at the same time.  During this pass,
+ * each cpu will increment the ready counter, and continue once the
+ * ready counter matches the number of online coupled cpus.  If any
+ * cpu exits idle, the other cpus will decrement their counter and
+ * retry.
+ *
+ * requested_state stores the deepest coupled idle state each cpu
+ * is ready for.  It is assumed that the states are indexed from
+ * shallowest (highest power, lowest exit latency) to deepest
+ * (lowest power, highest exit latency).  The requested_state
+ * variable is not locked.  It is only written from the cpu that
+ * it stores (or by the on/offlining cpu if that cpu is offline),
+ * and only read after all the cpus are ready for the coupled idle
+ * state and are no longer updating it.
+ *
+ * Three atomic counters are used.  alive_count tracks the number
+ * of cpus in the coupled set that are currently or soon will be
+ * online.  waiting_count tracks the number of cpus that are in
+ * the waiting loop, in the ready loop, or in the coupled idle state.
+ * ready_count tracks the number of cpus that are in the ready loop
+ * or in the coupled idle state.
+ *
+ * To use coupled cpuidle states, a cpuidle driver must:
+ *
+ *    Set struct cpuidle_device.coupled_cpus to the mask of all
+ *    coupled cpus, usually the same as cpu_possible_mask if all cpus
+ *    are part of the same cluster.  The coupled_cpus mask must be
+ *    set in the struct cpuidle_device for each cpu.
+ *
+ *    Set struct cpuidle_device.safe_state_index to the index of a
+ *    state that is not a coupled state.  This is usually WFI.
+ *
+ *    Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
+ *    state that affects multiple cpus.
+ *
+ *    Provide a struct cpuidle_state.enter function for each state
+ *    that affects multiple cpus.  This function is guaranteed to be
+ *    called on all cpus at approximately the same time.  The driver
+ *    should ensure that the cpus all abort together if any cpu tries
+ *    to abort once the function is called.  The function should return
+ *    with interrupts still disabled.
+ */
+
+/**
+ * struct cpuidle_coupled - data for set of cpus that share a coupled idle state
+ * @coupled_cpus: mask of cpus that are part of the coupled set
+ * @requested_state: array of requested states for cpus in the coupled set
+ * @ready_count: count of cpus that are ready for the final idle transition
+ * @waiting_count: count of cpus that are waiting for all other cpus to be idle
+ * @alive_count: count of cpus that are online or soon will be
+ * @refcnt: reference count of cpuidle devices that are using this struct
+ */
+struct cpuidle_coupled {
+	cpumask_t coupled_cpus;
+	int requested_state[NR_CPUS];
+	atomic_t ready_count;
+	atomic_t waiting_count;
+	atomic_t alive_count;
+	int refcnt;
+};
+
+#define CPUIDLE_COUPLED_NOT_IDLE	(-1)
+#define CPUIDLE_COUPLED_DEAD		(-2)
+
+static DEFINE_MUTEX(cpuidle_coupled_lock);
+static DEFINE_PER_CPU(struct call_single_data, cpuidle_coupled_poke_cb);
+
+/*
+ * The cpuidle_coupled_poked_mask is used to avoid calling
+ * __smp_call_function_single with the per cpu call_single_data struct already
+ * in use.  This prevents a deadlock where two cpus are waiting for each
+ * other's call_single_data struct to be available.
+ */
+static cpumask_t cpuidle_coupled_poked_mask;
+
+/**
+ * cpuidle_state_is_coupled - check if a state is part of a coupled set
+ * @dev: struct cpuidle_device for the current cpu
+ * @drv: struct cpuidle_driver for the platform
+ * @state: index of the target state in drv->states
+ *
+ * Returns true if the target state is coupled with cpus besides this one
+ */
+bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
+	struct cpuidle_driver *drv, int state)
+{
+	return drv->states[state].flags & CPUIDLE_FLAG_COUPLED;
+}
+
+/**
+ * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Returns true if all cpus coupled to this target state are in the wait loop
+ */
+static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
+{
+	int alive;
+	int waiting;
+
+	/*
+	 * Read alive before reading waiting so a booting cpu is not treated as
+	 * idle
+	 */
+	alive = atomic_read(&coupled->alive_count);
+	smp_rmb();
+	waiting = atomic_read(&coupled->waiting_count);
+
+	return (waiting == alive);
+}
+
+/**
+ * cpuidle_coupled_get_state - determine the deepest idle state
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Returns the deepest idle state that all coupled cpus can enter
+ */
+static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
+		struct cpuidle_coupled *coupled)
+{
+	int i;
+	int state = INT_MAX;
+
+	for_each_cpu_mask(i, coupled->coupled_cpus)
+		if (coupled->requested_state[i] != CPUIDLE_COUPLED_DEAD &&
+		    coupled->requested_state[i] < state)
+			state = coupled->requested_state[i];
+
+	BUG_ON(state >= dev->state_count || state < 0);
+
+	return state;
+}
+
+static void cpuidle_coupled_poked(void *info)
+{
+	int cpu = (unsigned long)info;
+	cpumask_clear_cpu(cpu, &cpuidle_coupled_poked_mask);
+}
+
+/**
+ * cpuidle_coupled_poke - wake up a cpu that may be waiting
+ * @cpu: target cpu
+ *
+ * Ensures that the target cpu exits its waiting idle state (if it is in it)
+ * and will see updates to waiting_count before it re-enters its waiting idle
+ * state.
+ *
+ * If cpuidle_coupled_poked_mask is already set for the target cpu, that cpu
+ * either has or will soon have a pending IPI that will wake it out of idle,
+ * or it is currently processing the IPI and is not in idle.
+ */
+static void cpuidle_coupled_poke(int cpu)
+{
+	struct call_single_data *csd = &per_cpu(cpuidle_coupled_poke_cb, cpu);
+
+	if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask))
+		__smp_call_function_single(cpu, csd, 0);
+}
+
+/**
+ * cpuidle_coupled_poke_others - wake up all other cpus that may be waiting
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Calls cpuidle_coupled_poke on all other online cpus.
+ */
+static void cpuidle_coupled_poke_others(struct cpuidle_device *dev,
+		struct cpuidle_coupled *coupled)
+{
+	int cpu;
+
+	for_each_cpu_mask(cpu, coupled->coupled_cpus)
+		if (cpu != dev->cpu && cpu_online(cpu))
+			cpuidle_coupled_poke(cpu);
+}
+
+/**
+ * cpuidle_coupled_set_waiting - mark this cpu as in the wait loop
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ * @next_state: the index in drv->states of the requested state for this cpu
+ *
+ * Updates the requested idle state for the specified cpuidle device,
+ * poking all coupled cpus out of idle if necessary to let them see the new
+ * state.
+ *
+ * Provides memory ordering around waiting_count.
+ */
+static void cpuidle_coupled_set_waiting(struct cpuidle_device *dev,
+		struct cpuidle_coupled *coupled, int next_state)
+{
+	int alive;
+
+	BUG_ON(coupled->requested_state[dev->cpu] >= 0);
+
+	coupled->requested_state[dev->cpu] = next_state;
+
+	/*
+	 * If this is the last cpu to enter the waiting state, poke
+	 * all the other cpus out of their waiting state so they can
+	 * enter a deeper state.  This can race with one of the cpus
+	 * exiting the waiting state due to an interrupt and
+	 * decrementing waiting_count, see comment below.
+	 */
+	alive = atomic_read(&coupled->alive_count);
+	if (atomic_inc_return(&coupled->waiting_count) == alive)
+		cpuidle_coupled_poke_others(dev, coupled);
+}
+
+/**
+ * cpuidle_coupled_set_not_waiting - mark this cpu as leaving the wait loop
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Removes the requested idle state for the specified cpuidle device.
+ *
+ * Provides memory ordering around waiting_count.
+ */
+static void cpuidle_coupled_set_not_waiting(struct cpuidle_device *dev,
+		struct cpuidle_coupled *coupled)
+{
+	BUG_ON(coupled->requested_state[dev->cpu] < 0);
+
+	/*
+	 * Decrementing waiting_count can race with incrementing it in
+	 * cpuidle_coupled_set_waiting, but that's OK.  Worst case, some
+	 * cpus will increment ready_count and then spin until they
+	 * notice that this cpu has cleared its requested_state.
+	 */
+
+	smp_mb__before_atomic_dec();
+	atomic_dec(&coupled->waiting_count);
+	smp_mb__after_atomic_dec();
+
+	coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
+}
+
+/**
+ * cpuidle_enter_state_coupled - attempt to enter a state with coupled cpus
+ * @dev: struct cpuidle_device for the current cpu
+ * @drv: struct cpuidle_driver for the platform
+ * @next_state: index of the requested state in drv->states
+ *
+ * Coordinate with coupled cpus to enter the target state.  This is a two
+ * stage process.  In the first stage, the cpus are operating independently,
+ * and may call into cpuidle_enter_state_coupled at completely different times.
+ * To save as much power as possible, the first cpus to call this function will
+ * go to an intermediate state (the cpuidle_device's safe state), and wait for
+ * all the other cpus to call this function.  Once all coupled cpus are idle,
+ * the second stage will start.  Each coupled cpu will spin until all cpus have
+ * guaranteed that they will enter the target_state.
+ */
+int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int next_state)
+{
+	int entered_state = -1;
+	struct cpuidle_coupled *coupled = dev->coupled;
+	int alive;
+
+	if (!coupled)
+		return -EINVAL;
+
+	BUG_ON(atomic_read(&coupled->ready_count));
+	cpuidle_coupled_set_waiting(dev, coupled, next_state);
+
+retry:
+	/*
+	 * Wait for all coupled cpus to be idle, using the deepest state
+	 * allowed for a single cpu.
+	 */
+	while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
+		entered_state = cpuidle_enter_state(dev, drv,
+			dev->safe_state_index);
+
+		local_irq_enable();
+		while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
+			cpu_relax();
+		local_irq_disable();
+	}
+
+	/* give a chance to process any remaining pokes */
+	local_irq_enable();
+	while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
+		cpu_relax();
+	local_irq_disable();
+
+	if (need_resched()) {
+		cpuidle_coupled_set_not_waiting(dev, coupled);
+		goto out;
+	}
+
+	/*
+	 * All coupled cpus are probably idle.  There is a small chance that
+	 * one of the other cpus just became active.  Increment a counter when
+	 * ready, and spin until all coupled cpus have incremented the counter.
+	 * Once a cpu has incremented the counter, it cannot abort idle and must
+	 * spin until either the count has hit alive_count, or another cpu
+	 * leaves idle.
+	 */
+
+	smp_mb__before_atomic_inc();
+	atomic_inc(&coupled->ready_count);
+	smp_mb__after_atomic_inc();
+	/* alive_count can't change while ready_count > 0 */
+	alive = atomic_read(&coupled->alive_count);
+	while (atomic_read(&coupled->ready_count) != alive) {
+		/* Check if any other cpus bailed out of idle. */
+		if (!cpuidle_coupled_cpus_waiting(coupled)) {
+			atomic_dec(&coupled->ready_count);
+			smp_mb__after_atomic_dec();
+			goto retry;
+		}
+
+		cpu_relax();
+	}
+
+	/* all cpus have acked the coupled state */
+	smp_rmb();
+
+	next_state = cpuidle_coupled_get_state(dev, coupled);
+
+	entered_state = cpuidle_enter_state(dev, drv, next_state);
+
+	cpuidle_coupled_set_not_waiting(dev, coupled);
+	atomic_dec(&coupled->ready_count);
+	smp_mb__after_atomic_dec();
+
+out:
+	/*
+	 * Normal cpuidle states are expected to return with irqs enabled.
+	 * That leads to an inefficiency where a cpu receiving an interrupt
+	 * that brings it out of idle will process that interrupt before
+	 * exiting the idle enter function and decrementing ready_count.  All
+	 * other cpus will need to spin waiting for the cpu that is processing
+	 * the interrupt.  If the driver returns with interrupts disabled,
+	 * all other cpus will loop back into the safe idle state instead of
+	 * spinning, saving power.
+	 *
+	 * Calling local_irq_enable here allows coupled states to return with
+	 * interrupts disabled, but won't cause problems for drivers that
+	 * exit with interrupts enabled.
+	 */
+	local_irq_enable();
+
+	/*
+	 * Wait until all coupled cpus have exited idle.  There is no risk that
+	 * a cpu exits and re-enters the ready state because this cpu has
+	 * already decremented its waiting_count.
+	 */
+	while (atomic_read(&coupled->ready_count) != 0)
+		cpu_relax();
+
+	smp_rmb();
+
+	return entered_state;
+}
+
+/**
+ * cpuidle_coupled_register_device - register a coupled cpuidle device
+ * @dev: struct cpuidle_device for the current cpu
+ *
+ * Called from cpuidle_register_device to handle coupled idle init.  Finds the
+ * cpuidle_coupled struct for this set of coupled cpus, or creates one if none
+ * exists yet.
+ */
+int cpuidle_coupled_register_device(struct cpuidle_device *dev)
+{
+	int cpu;
+	struct cpuidle_device *other_dev;
+	struct call_single_data *csd;
+	struct cpuidle_coupled *coupled;
+
+	if (cpumask_empty(&dev->coupled_cpus))
+		return 0;
+
+	for_each_cpu_mask(cpu, dev->coupled_cpus) {
+		other_dev = per_cpu(cpuidle_devices, cpu);
+		if (other_dev && other_dev->coupled) {
+			coupled = other_dev->coupled;
+			goto have_coupled;
+		}
+	}
+
+	/* No existing coupled info found, create a new one */
+	coupled = kzalloc(sizeof(struct cpuidle_coupled), GFP_KERNEL);
+	if (!coupled)
+		return -ENOMEM;
+
+	coupled->coupled_cpus = dev->coupled_cpus;
+	for_each_cpu_mask(cpu, coupled->coupled_cpus)
+		coupled->requested_state[cpu] = CPUIDLE_COUPLED_DEAD;
+
+have_coupled:
+	dev->coupled = coupled;
+	BUG_ON(!cpumask_equal(&dev->coupled_cpus, &coupled->coupled_cpus));
+
+	if (cpu_online(dev->cpu)) {
+		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
+		atomic_inc(&coupled->alive_count);
+	}
+
+	coupled->refcnt++;
+
+	csd = &per_cpu(cpuidle_coupled_poke_cb, dev->cpu);
+	csd->func = cpuidle_coupled_poked;
+	csd->info = (void *)(unsigned long)dev->cpu;
+
+	return 0;
+}
+
+/**
+ * cpuidle_coupled_unregister_device - unregister a coupled cpuidle device
+ * @dev: struct cpuidle_device for the current cpu
+ *
+ * Called from cpuidle_unregister_device to tear down coupled idle.  Removes the
+ * cpu from the coupled idle set, and frees the cpuidle_coupled struct if
+ * this was the last cpu in the set.
+ */
+void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
+{
+	struct cpuidle_coupled *coupled = dev->coupled;
+
+	if (cpumask_empty(&dev->coupled_cpus))
+		return;
+
+	if (!--coupled->refcnt)
+		kfree(coupled);
+	dev->coupled = NULL;
+}
+
+/**
+ * cpuidle_coupled_cpu_set_alive - adjust alive_count during hotplug transitions
+ * @cpu: target cpu number
+ * @alive: whether the target cpu is going up or down
+ *
+ * Run on the cpu that is bringing up the target cpu, before the target cpu
+ * has been booted, or after the target cpu is completely dead.
+ */
+static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
+{
+	struct cpuidle_device *dev;
+	struct cpuidle_coupled *coupled;
+
+	mutex_lock(&cpuidle_lock);
+
+	dev = per_cpu(cpuidle_devices, cpu);
+	if (!dev->coupled)
+		goto out;
+
+	coupled = dev->coupled;
+
+	/*
+	 * waiting_count must be at least 1 less than alive_count, because
+	 * this cpu is not waiting.  Spin until all cpus have noticed this cpu
+	 * is not idle and exited the ready loop before changing alive_count.
+	 */
+	while (atomic_read(&coupled->ready_count))
+		cpu_relax();
+
+	if (alive) {
+		smp_mb__before_atomic_inc();
+		atomic_inc(&coupled->alive_count);
+		smp_mb__after_atomic_inc();
+		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
+	} else {
+		smp_mb__before_atomic_dec();
+		atomic_dec(&coupled->alive_count);
+		smp_mb__after_atomic_dec();
+		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
+	}
+
+out:
+	mutex_unlock(&cpuidle_lock);
+}
+
+/**
+ * cpuidle_coupled_cpu_notify - notifier called during hotplug transitions
+ * @nb: notifier block
+ * @action: hotplug transition
+ * @hcpu: target cpu number
+ *
+ * Called when a cpu is brought on or offline using hotplug.  Updates the
+ * coupled cpu set appropriately
+ */
+static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
+		unsigned long action, void *hcpu)
+{
+	int cpu = (unsigned long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_DEAD:
+	case CPU_UP_CANCELED:
+		cpuidle_coupled_cpu_set_alive(cpu, false);
+		break;
+	case CPU_UP_PREPARE:
+		cpuidle_coupled_cpu_set_alive(cpu, true);
+		break;
+	}
+	return NOTIFY_OK;
+}
+
+static struct notifier_block cpuidle_coupled_cpu_notifier = {
+	.notifier_call = cpuidle_coupled_cpu_notify,
+};
+
+static int __init cpuidle_coupled_init(void)
+{
+	return register_cpu_notifier(&cpuidle_coupled_cpu_notifier);
+}
+core_initcall(cpuidle_coupled_init);
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 4540672..e81cfda 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -171,7 +171,11 @@ int cpuidle_idle_call(void)
 	trace_power_start_rcuidle(POWER_CSTATE, next_state, dev->cpu);
 	trace_cpu_idle_rcuidle(next_state, dev->cpu);
 
-	entered_state = cpuidle_enter_state(dev, drv, next_state);
+	if (cpuidle_state_is_coupled(dev, drv, next_state))
+		entered_state = cpuidle_enter_state_coupled(dev, drv,
+							    next_state);
+	else
+		entered_state = cpuidle_enter_state(dev, drv, next_state);
 
 	trace_power_end_rcuidle(dev->cpu);
 	trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
@@ -407,9 +411,16 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
 	if (ret)
 		goto err_sysfs;
 
+	ret = cpuidle_coupled_register_device(dev);
+	if (ret)
+		goto err_coupled;
+
 	dev->registered = 1;
 	return 0;
 
+err_coupled:
+	cpuidle_remove_sysfs(cpu_dev);
+	wait_for_completion(&dev->kobj_unregister);
 err_sysfs:
 	list_del(&dev->device_list);
 	per_cpu(cpuidle_devices, dev->cpu) = NULL;
@@ -464,6 +475,8 @@ void cpuidle_unregister_device(struct cpuidle_device *dev)
 	wait_for_completion(&dev->kobj_unregister);
 	per_cpu(cpuidle_devices, dev->cpu) = NULL;
 
+	cpuidle_coupled_unregister_device(dev);
+
 	cpuidle_resume_and_unlock();
 
 	module_put(cpuidle_driver->owner);
diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
index d8a3ccc..76e7f69 100644
--- a/drivers/cpuidle/cpuidle.h
+++ b/drivers/cpuidle/cpuidle.h
@@ -32,4 +32,34 @@ extern int cpuidle_enter_state(struct cpuidle_device *dev,
 extern int cpuidle_add_sysfs(struct device *dev);
 extern void cpuidle_remove_sysfs(struct device *dev);
 
+#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
+bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int state);
+int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int next_state);
+int cpuidle_coupled_register_device(struct cpuidle_device *dev);
+void cpuidle_coupled_unregister_device(struct cpuidle_device *dev);
+#else
+static inline bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int state)
+{
+	return false;
+}
+
+static inline int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int next_state)
+{
+	return -1;
+}
+
+static inline int cpuidle_coupled_register_device(struct cpuidle_device *dev)
+{
+	return 0;
+}
+
+static inline void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
+{
+}
+#endif
+
 #endif /* __DRIVER_CPUIDLE_H */
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index 6c26a3d..6038448 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -57,6 +57,7 @@ struct cpuidle_state {
 
 /* Idle State Flags */
 #define CPUIDLE_FLAG_TIME_VALID	(0x01) /* is residency time measurable? */
+#define CPUIDLE_FLAG_COUPLED	(0x02) /* state applies to multiple cpus */
 
 #define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000)
 
@@ -100,6 +101,12 @@ struct cpuidle_device {
 	struct list_head 	device_list;
 	struct kobject		kobj;
 	struct completion	kobj_unregister;
+
+#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
+	int			safe_state_index;
+	cpumask_t		coupled_cpus;
+	struct cpuidle_coupled	*coupled;
+#endif
 };
 
 DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices);
-- 
1.7.7.3


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus
@ 2012-04-30 20:09   ` Colin Cross
  0 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-04-30 20:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kevin Hilman, Len Brown, Russell King, Greg Kroah-Hartman,
	Kay Sievers, Amit Kucheria, Colin Cross, linux-pm,
	Arjan van de Ven, Arnd Bergmann, linux-arm-kernel

On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
cpus cannot be independently powered down, either due to
sequencing restrictions (on Tegra 2, cpu 0 must be the last to
power down), or due to HW bugs (on OMAP4460, a cpu powering up
will corrupt the gic state unless the other cpu runs a work
around).  Each cpu has a power state that it can enter without
coordinating with the other cpu (usually Wait For Interrupt, or
WFI), and one or more "coupled" power states that affect blocks
shared between the cpus (L2 cache, interrupt controller, and
sometimes the whole SoC).  Entering a coupled power state must
be tightly controlled on both cpus.

The easiest solution to implementing coupled cpu power states is
to hotplug all but one cpu whenever possible, usually using a
cpufreq governor that looks at cpu load to determine when to
enable the secondary cpus.  This causes problems, as hotplug is an
expensive operation, so the number of hotplug transitions must be
minimized, leading to very slow response to loads, often on the
order of seconds.

This file implements an alternative solution, where each cpu will
wait in the WFI state until all cpus are ready to enter a coupled
state, at which point the coupled state function will be called
on all cpus at approximately the same time.

Once all cpus are ready to enter idle, they are woken by an smp
cross call.  At this point, there is a chance that one of the
cpus will find work to do, and choose not to enter idle.  A
final pass is needed to guarantee that all cpus will call the
power state enter function at the same time.  During this pass,
each cpu will increment the ready counter, and continue once the
ready counter matches the number of online coupled cpus.  If any
cpu exits idle, the other cpus will decrement their counter and
retry.

To use coupled cpuidle states, a cpuidle driver must:

   Set struct cpuidle_device.coupled_cpus to the mask of all
   coupled cpus, usually the same as cpu_possible_mask if all cpus
   are part of the same cluster.  The coupled_cpus mask must be
   set in the struct cpuidle_device for each cpu.

   Set struct cpuidle_device.safe_state to a state that is not a
   coupled state.  This is usually WFI.

   Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
   state that affects multiple cpus.

   Provide a struct cpuidle_state.enter function for each state
   that affects multiple cpus.  This function is guaranteed to be
   called on all cpus at approximately the same time.  The driver
   should ensure that the cpus all abort together if any cpu tries
   to abort once the function is called.

Cc: Len Brown <len.brown@intel.com>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Trinabh Gupta <g.trinabh@gmail.com>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Colin Cross <ccross@android.com>
---
 drivers/cpuidle/Kconfig   |    3 +
 drivers/cpuidle/Makefile  |    1 +
 drivers/cpuidle/coupled.c |  571 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/cpuidle/cpuidle.c |   15 ++-
 drivers/cpuidle/cpuidle.h |   30 +++
 include/linux/cpuidle.h   |    7 +
 6 files changed, 626 insertions(+), 1 deletions(-)
 create mode 100644 drivers/cpuidle/coupled.c

v2:
   * removed the coupled lock, replacing it with atomic counters
   * added a check for outstanding pokes before beginning the
     final transition to avoid extra wakeups
   * made the cpuidle_coupled struct completely private
   * fixed kerneldoc comment formatting

v3:
   * fixed decrement in cpuidle_coupled_cpu_set_alive
   * added kerneldoc annotation to the description

diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
index 78a666d..a76b689 100644
--- a/drivers/cpuidle/Kconfig
+++ b/drivers/cpuidle/Kconfig
@@ -18,3 +18,6 @@ config CPU_IDLE_GOV_MENU
 	bool
 	depends on CPU_IDLE && NO_HZ
 	default y
+
+config ARCH_NEEDS_CPU_IDLE_COUPLED
+	def_bool n
diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
index 5634f88..38c8f69 100644
--- a/drivers/cpuidle/Makefile
+++ b/drivers/cpuidle/Makefile
@@ -3,3 +3,4 @@
 #
 
 obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
+obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
new file mode 100644
index 0000000..d097826
--- /dev/null
+++ b/drivers/cpuidle/coupled.c
@@ -0,0 +1,571 @@
+/*
+ * coupled.c - helper functions to enter the same idle state on multiple cpus
+ *
+ * Copyright (c) 2011 Google, Inc.
+ *
+ * Author: Colin Cross <ccross@android.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/cpu.h>
+#include <linux/cpuidle.h>
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "cpuidle.h"
+
+/**
+ * DOC: Coupled cpuidle states
+ *
+ * On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
+ * cpus cannot be independently powered down, either due to
+ * sequencing restrictions (on Tegra 2, cpu 0 must be the last to
+ * power down), or due to HW bugs (on OMAP4460, a cpu powering up
+ * will corrupt the gic state unless the other cpu runs a
+ * workaround).  Each cpu has a power state that it can enter without
+ * coordinating with the other cpu (usually Wait For Interrupt, or
+ * WFI), and one or more "coupled" power states that affect blocks
+ * shared between the cpus (L2 cache, interrupt controller, and
+ * sometimes the whole SoC).  Entering a coupled power state must
+ * be tightly controlled on both cpus.
+ *
+ * The easiest solution to implementing coupled cpu power states is
+ * to hotplug all but one cpu whenever possible, usually using a
+ * cpufreq governor that looks at cpu load to determine when to
+ * enable the secondary cpus.  This causes problems, as hotplug is an
+ * expensive operation, so the number of hotplug transitions must be
+ * minimized, leading to very slow response to loads, often on the
+ * order of seconds.
+ *
+ * This file implements an alternative solution, where each cpu will
+ * wait in the WFI state until all cpus are ready to enter a coupled
+ * state, at which point the coupled state function will be called
+ * on all cpus at approximately the same time.
+ *
+ * Once all cpus are ready to enter idle, they are woken by an smp
+ * cross call.  At this point, there is a chance that one of the
+ * cpus will find work to do, and choose not to enter idle.  A
+ * final pass is needed to guarantee that all cpus will call the
+ * power state enter function at the same time.  During this pass,
+ * each cpu will increment the ready counter, and continue once the
+ * ready counter matches the number of online coupled cpus.  If any
+ * cpu exits idle, the other cpus will decrement their counter and
+ * retry.
+ *
+ * requested_state stores the deepest coupled idle state each cpu
+ * is ready for.  It is assumed that the states are indexed from
+ * shallowest (highest power, lowest exit latency) to deepest
+ * (lowest power, highest exit latency).  The requested_state
+ * variable is not locked.  It is only written from the cpu that
+ * it stores (or by the on/offlining cpu if that cpu is offline),
+ * and only read after all the cpus are ready for the coupled idle
+ * state and are no longer updating it.
+ *
+ * Three atomic counters are used.  alive_count tracks the number
+ * of cpus in the coupled set that are currently or soon will be
+ * online.  waiting_count tracks the number of cpus that are in
+ * the waiting loop, in the ready loop, or in the coupled idle state.
+ * ready_count tracks the number of cpus that are in the ready loop
+ * or in the coupled idle state.
+ *
+ * To use coupled cpuidle states, a cpuidle driver must:
+ *
+ *    Set struct cpuidle_device.coupled_cpus to the mask of all
+ *    coupled cpus, usually the same as cpu_possible_mask if all cpus
+ *    are part of the same cluster.  The coupled_cpus mask must be
+ *    set in the struct cpuidle_device for each cpu.
+ *
+ *    Set struct cpuidle_device.safe_state_index to the index of a state
+ *    that is not a coupled state.  This is usually WFI.
+ *
+ *    Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
+ *    state that affects multiple cpus.
+ *
+ *    Provide a struct cpuidle_state.enter function for each state
+ *    that affects multiple cpus.  This function is guaranteed to be
+ *    called on all cpus at approximately the same time.  The driver
+ *    should ensure that the cpus all abort together if any cpu tries
+ *    to abort once the function is called.  The function should return
+ *    with interrupts still disabled.
+ */
+
+/**
+ * struct cpuidle_coupled - data for set of cpus that share a coupled idle state
+ * @coupled_cpus: mask of cpus that are part of the coupled set
+ * @requested_state: array of requested states for cpus in the coupled set
+ * @ready_count: count of cpus that are ready for the final idle transition
+ * @waiting_count: count of cpus that are waiting for all other cpus to be idle
+ * @alive_count: count of cpus that are online or soon will be
+ * @refcnt: reference count of cpuidle devices that are using this struct
+ */
+struct cpuidle_coupled {
+	cpumask_t coupled_cpus;
+	int requested_state[NR_CPUS];
+	atomic_t ready_count;
+	atomic_t waiting_count;
+	atomic_t alive_count;
+	int refcnt;
+};
+
+#define CPUIDLE_COUPLED_NOT_IDLE	(-1)
+#define CPUIDLE_COUPLED_DEAD		(-2)
+
+static DEFINE_MUTEX(cpuidle_coupled_lock);
+static DEFINE_PER_CPU(struct call_single_data, cpuidle_coupled_poke_cb);
+
+/*
+ * The cpuidle_coupled_poked_mask cpumask is used to avoid calling
+ * __smp_call_function_single with the per cpu call_single_data struct already
+ * in use.  This prevents a deadlock where two cpus are waiting for each
+ * other's call_single_data struct to be available.
+ */
+static cpumask_t cpuidle_coupled_poked_mask;
+
+/**
+ * cpuidle_state_is_coupled - check if a state is part of a coupled set
+ * @dev: struct cpuidle_device for the current cpu
+ * @drv: struct cpuidle_driver for the platform
+ * @state: index of the target state in drv->states
+ *
+ * Returns true if the target state is coupled with cpus besides this one
+ */
+bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
+	struct cpuidle_driver *drv, int state)
+{
+	return drv->states[state].flags & CPUIDLE_FLAG_COUPLED;
+}
+
+/**
+ * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Returns true if all cpus coupled to this target state are in the wait loop
+ */
+static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
+{
+	int alive;
+	int waiting;
+
+	/*
+	 * Read alive before reading waiting so a booting cpu is not treated as
+	 * idle
+	 */
+	alive = atomic_read(&coupled->alive_count);
+	smp_rmb();
+	waiting = atomic_read(&coupled->waiting_count);
+
+	return (waiting == alive);
+}
+
+/**
+ * cpuidle_coupled_get_state - determine the deepest idle state
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Returns the deepest idle state that all coupled cpus can enter
+ */
+static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
+		struct cpuidle_coupled *coupled)
+{
+	int i;
+	int state = INT_MAX;
+
+	for_each_cpu_mask(i, coupled->coupled_cpus)
+		if (coupled->requested_state[i] != CPUIDLE_COUPLED_DEAD &&
+		    coupled->requested_state[i] < state)
+			state = coupled->requested_state[i];
+
+	BUG_ON(state >= dev->state_count || state < 0);
+
+	return state;
+}
+
+static void cpuidle_coupled_poked(void *info)
+{
+	int cpu = (unsigned long)info;
+	cpumask_clear_cpu(cpu, &cpuidle_coupled_poked_mask);
+}
+
+/**
+ * cpuidle_coupled_poke - wake up a cpu that may be waiting
+ * @cpu: target cpu
+ *
+ * Ensures that the target cpu exits its waiting idle state (if it is in it)
+ * and will see updates to waiting_count before it re-enters its waiting idle
+ * state.
+ *
+ * If cpuidle_coupled_poked_mask is already set for the target cpu, that cpu
+ * either has or will soon have a pending IPI that will wake it out of idle,
+ * or it is currently processing the IPI and is not in idle.
+ */
+static void cpuidle_coupled_poke(int cpu)
+{
+	struct call_single_data *csd = &per_cpu(cpuidle_coupled_poke_cb, cpu);
+
+	if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask))
+		__smp_call_function_single(cpu, csd, 0);
+}
+
+/**
+ * cpuidle_coupled_poke_others - wake up all other cpus that may be waiting
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Calls cpuidle_coupled_poke on all other online cpus.
+ */
+static void cpuidle_coupled_poke_others(struct cpuidle_device *dev,
+		struct cpuidle_coupled *coupled)
+{
+	int cpu;
+
+	for_each_cpu_mask(cpu, coupled->coupled_cpus)
+		if (cpu != dev->cpu && cpu_online(cpu))
+			cpuidle_coupled_poke(cpu);
+}
+
+/**
+ * cpuidle_coupled_set_waiting - mark this cpu as in the wait loop
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ * @next_state: the index in drv->states of the requested state for this cpu
+ *
+ * Updates the requested idle state for the specified cpuidle device,
+ * poking all coupled cpus out of idle if necessary to let them see the new
+ * state.
+ *
+ * Provides memory ordering around waiting_count.
+ */
+static void cpuidle_coupled_set_waiting(struct cpuidle_device *dev,
+		struct cpuidle_coupled *coupled, int next_state)
+{
+	int alive;
+
+	BUG_ON(coupled->requested_state[dev->cpu] >= 0);
+
+	coupled->requested_state[dev->cpu] = next_state;
+
+	/*
+	 * If this is the last cpu to enter the waiting state, poke
+	 * all the other cpus out of their waiting state so they can
+	 * enter a deeper state.  This can race with one of the cpus
+	 * exiting the waiting state due to an interrupt and
+	 * decrementing waiting_count, see comment below.
+	 */
+	alive = atomic_read(&coupled->alive_count);
+	if (atomic_inc_return(&coupled->waiting_count) == alive)
+		cpuidle_coupled_poke_others(dev, coupled);
+}
+
+/**
+ * cpuidle_coupled_set_not_waiting - mark this cpu as leaving the wait loop
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Removes the requested idle state for the specified cpuidle device.
+ *
+ * Provides memory ordering around waiting_count.
+ */
+static void cpuidle_coupled_set_not_waiting(struct cpuidle_device *dev,
+		struct cpuidle_coupled *coupled)
+{
+	BUG_ON(coupled->requested_state[dev->cpu] < 0);
+
+	/*
+	 * Decrementing waiting_count can race with incrementing it in
+	 * cpuidle_coupled_set_waiting, but that's OK.  Worst case, some
+	 * cpus will increment ready_count and then spin until they
+	 * notice that this cpu has cleared its requested_state.
+	 */
+
+	smp_mb__before_atomic_dec();
+	atomic_dec(&coupled->waiting_count);
+	smp_mb__after_atomic_dec();
+
+	coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
+}
+
+/**
+ * cpuidle_enter_state_coupled - attempt to enter a state with coupled cpus
+ * @dev: struct cpuidle_device for the current cpu
+ * @drv: struct cpuidle_driver for the platform
+ * @next_state: index of the requested state in drv->states
+ *
+ * Coordinate with coupled cpus to enter the target state.  This is a two
+ * stage process.  In the first stage, the cpus are operating independently,
+ * and may call into cpuidle_enter_state_coupled at completely different times.
+ * To save as much power as possible, the first cpus to call this function will
+ * go to an intermediate state (the cpuidle_device's safe state), and wait for
+ * all the other cpus to call this function.  Once all coupled cpus are idle,
+ * the second stage will start.  Each coupled cpu will spin until all cpus have
+ * guaranteed that they will enter the target state.
+ */
+int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int next_state)
+{
+	int entered_state = -1;
+	struct cpuidle_coupled *coupled = dev->coupled;
+	int alive;
+
+	if (!coupled)
+		return -EINVAL;
+
+	BUG_ON(atomic_read(&coupled->ready_count));
+	cpuidle_coupled_set_waiting(dev, coupled, next_state);
+
+retry:
+	/*
+	 * Wait for all coupled cpus to be idle, using the deepest state
+	 * allowed for a single cpu.
+	 */
+	while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
+		entered_state = cpuidle_enter_state(dev, drv,
+			dev->safe_state_index);
+
+		local_irq_enable();
+		while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
+			cpu_relax();
+		local_irq_disable();
+	}
+
+	/* give a chance to process any remaining pokes */
+	local_irq_enable();
+	while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
+		cpu_relax();
+	local_irq_disable();
+
+	if (need_resched()) {
+		cpuidle_coupled_set_not_waiting(dev, coupled);
+		goto out;
+	}
+
+	/*
+	 * All coupled cpus are probably idle.  There is a small chance that
+	 * one of the other cpus just became active.  Increment a counter when
+	 * ready, and spin until all coupled cpus have incremented the counter.
+	 * Once a cpu has incremented the counter, it cannot abort idle and must
+	 * spin until either the count has hit alive_count, or another cpu
+	 * leaves idle.
+	 */
+
+	smp_mb__before_atomic_inc();
+	atomic_inc(&coupled->ready_count);
+	smp_mb__after_atomic_inc();
+	/* alive_count can't change while ready_count > 0 */
+	alive = atomic_read(&coupled->alive_count);
+	while (atomic_read(&coupled->ready_count) != alive) {
+		/* Check if any other cpus bailed out of idle. */
+		if (!cpuidle_coupled_cpus_waiting(coupled)) {
+			atomic_dec(&coupled->ready_count);
+			smp_mb__after_atomic_dec();
+			goto retry;
+		}
+
+		cpu_relax();
+	}
+
+	/* all cpus have acked the coupled state */
+	smp_rmb();
+
+	next_state = cpuidle_coupled_get_state(dev, coupled);
+
+	entered_state = cpuidle_enter_state(dev, drv, next_state);
+
+	cpuidle_coupled_set_not_waiting(dev, coupled);
+	atomic_dec(&coupled->ready_count);
+	smp_mb__after_atomic_dec();
+
+out:
+	/*
+	 * Normal cpuidle states are expected to return with irqs enabled.
+	 * That leads to an inefficiency where a cpu receiving an interrupt
+	 * that brings it out of idle will process that interrupt before
+	 * exiting the idle enter function and decrementing ready_count.  All
+	 * other cpus will need to spin waiting for the cpu that is processing
+	 * the interrupt.  If the driver returns with interrupts disabled,
+	 * all other cpus will loop back into the safe idle state instead of
+	 * spinning, saving power.
+	 *
+	 * Calling local_irq_enable here allows coupled states to return with
+	 * interrupts disabled, but won't cause problems for drivers that
+	 * exit with interrupts enabled.
+	 */
+	local_irq_enable();
+
+	/*
+	 * Wait until all coupled cpus have exited idle.  There is no risk that
+	 * a cpu exits and re-enters the ready state because this cpu has
+	 * already decremented its waiting_count.
+	 */
+	while (atomic_read(&coupled->ready_count) != 0)
+		cpu_relax();
+
+	smp_rmb();
+
+	return entered_state;
+}
+
+/**
+ * cpuidle_coupled_register_device - register a coupled cpuidle device
+ * @dev: struct cpuidle_device for the current cpu
+ *
+ * Called from cpuidle_register_device to handle coupled idle init.  Finds the
+ * cpuidle_coupled struct for this set of coupled cpus, or creates one if none
+ * exists yet.
+ */
+int cpuidle_coupled_register_device(struct cpuidle_device *dev)
+{
+	int cpu;
+	struct cpuidle_device *other_dev;
+	struct call_single_data *csd;
+	struct cpuidle_coupled *coupled;
+
+	if (cpumask_empty(&dev->coupled_cpus))
+		return 0;
+
+	for_each_cpu_mask(cpu, dev->coupled_cpus) {
+		other_dev = per_cpu(cpuidle_devices, cpu);
+		if (other_dev && other_dev->coupled) {
+			coupled = other_dev->coupled;
+			goto have_coupled;
+		}
+	}
+
+	/* No existing coupled info found, create a new one */
+	coupled = kzalloc(sizeof(struct cpuidle_coupled), GFP_KERNEL);
+	if (!coupled)
+		return -ENOMEM;
+
+	coupled->coupled_cpus = dev->coupled_cpus;
+	for_each_cpu_mask(cpu, coupled->coupled_cpus)
+		coupled->requested_state[cpu] = CPUIDLE_COUPLED_DEAD;
+
+have_coupled:
+	dev->coupled = coupled;
+	BUG_ON(!cpumask_equal(&dev->coupled_cpus, &coupled->coupled_cpus));
+
+	if (cpu_online(dev->cpu)) {
+		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
+		atomic_inc(&coupled->alive_count);
+	}
+
+	coupled->refcnt++;
+
+	csd = &per_cpu(cpuidle_coupled_poke_cb, dev->cpu);
+	csd->func = cpuidle_coupled_poked;
+	csd->info = (void *)(unsigned long)dev->cpu;
+
+	return 0;
+}
+
+/**
+ * cpuidle_coupled_unregister_device - unregister a coupled cpuidle device
+ * @dev: struct cpuidle_device for the current cpu
+ *
+ * Called from cpuidle_unregister_device to tear down coupled idle.  Removes the
+ * cpu from the coupled idle set, and frees the cpuidle_coupled struct if
+ * this was the last cpu in the set.
+ */
+void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
+{
+	struct cpuidle_coupled *coupled = dev->coupled;
+
+	if (cpumask_empty(&dev->coupled_cpus))
+		return;
+
+	if (!--coupled->refcnt)
+		kfree(coupled);
+	dev->coupled = NULL;
+}
+
+/**
+ * cpuidle_coupled_cpu_set_alive - adjust alive_count during hotplug transitions
+ * @cpu: target cpu number
+ * @alive: whether the target cpu is going up or down
+ *
+ * Run on the cpu that is bringing up the target cpu, before the target cpu
+ * has been booted, or after the target cpu is completely dead.
+ */
+static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
+{
+	struct cpuidle_device *dev;
+	struct cpuidle_coupled *coupled;
+
+	mutex_lock(&cpuidle_lock);
+
+	dev = per_cpu(cpuidle_devices, cpu);
+	if (!dev || !dev->coupled)
+		goto out;
+
+	coupled = dev->coupled;
+
+	/*
+	 * waiting_count must be at least 1 less than alive_count, because
+	 * this cpu is not waiting.  Spin until all cpus have noticed this cpu
+	 * is not idle and exited the ready loop before changing alive_count.
+	 */
+	while (atomic_read(&coupled->ready_count))
+		cpu_relax();
+
+	if (alive) {
+		smp_mb__before_atomic_inc();
+		atomic_inc(&coupled->alive_count);
+		smp_mb__after_atomic_inc();
+		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
+	} else {
+		smp_mb__before_atomic_dec();
+		atomic_dec(&coupled->alive_count);
+		smp_mb__after_atomic_dec();
+		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
+	}
+
+out:
+	mutex_unlock(&cpuidle_lock);
+}
+
+/**
+ * cpuidle_coupled_cpu_notify - notifier called during hotplug transitions
+ * @nb: notifier block
+ * @action: hotplug transition
+ * @hcpu: target cpu number
+ *
+ * Called when a cpu is brought online or taken offline using hotplug.
+ * Updates the coupled cpu set appropriately.
+ */
+static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
+		unsigned long action, void *hcpu)
+{
+	int cpu = (unsigned long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_DEAD:
+	case CPU_UP_CANCELED:
+		cpuidle_coupled_cpu_set_alive(cpu, false);
+		break;
+	case CPU_UP_PREPARE:
+		cpuidle_coupled_cpu_set_alive(cpu, true);
+		break;
+	}
+	return NOTIFY_OK;
+}
+
+static struct notifier_block cpuidle_coupled_cpu_notifier = {
+	.notifier_call = cpuidle_coupled_cpu_notify,
+};
+
+static int __init cpuidle_coupled_init(void)
+{
+	return register_cpu_notifier(&cpuidle_coupled_cpu_notifier);
+}
+core_initcall(cpuidle_coupled_init);
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 4540672..e81cfda 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -171,7 +171,11 @@ int cpuidle_idle_call(void)
 	trace_power_start_rcuidle(POWER_CSTATE, next_state, dev->cpu);
 	trace_cpu_idle_rcuidle(next_state, dev->cpu);
 
-	entered_state = cpuidle_enter_state(dev, drv, next_state);
+	if (cpuidle_state_is_coupled(dev, drv, next_state))
+		entered_state = cpuidle_enter_state_coupled(dev, drv,
+							    next_state);
+	else
+		entered_state = cpuidle_enter_state(dev, drv, next_state);
 
 	trace_power_end_rcuidle(dev->cpu);
 	trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
@@ -407,9 +411,16 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
 	if (ret)
 		goto err_sysfs;
 
+	ret = cpuidle_coupled_register_device(dev);
+	if (ret)
+		goto err_coupled;
+
 	dev->registered = 1;
 	return 0;
 
+err_coupled:
+	cpuidle_remove_sysfs(cpu_dev);
+	wait_for_completion(&dev->kobj_unregister);
 err_sysfs:
 	list_del(&dev->device_list);
 	per_cpu(cpuidle_devices, dev->cpu) = NULL;
@@ -464,6 +475,8 @@ void cpuidle_unregister_device(struct cpuidle_device *dev)
 	wait_for_completion(&dev->kobj_unregister);
 	per_cpu(cpuidle_devices, dev->cpu) = NULL;
 
+	cpuidle_coupled_unregister_device(dev);
+
 	cpuidle_resume_and_unlock();
 
 	module_put(cpuidle_driver->owner);
diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
index d8a3ccc..76e7f69 100644
--- a/drivers/cpuidle/cpuidle.h
+++ b/drivers/cpuidle/cpuidle.h
@@ -32,4 +32,34 @@ extern int cpuidle_enter_state(struct cpuidle_device *dev,
 extern int cpuidle_add_sysfs(struct device *dev);
 extern void cpuidle_remove_sysfs(struct device *dev);
 
+#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
+bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int state);
+int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int next_state);
+int cpuidle_coupled_register_device(struct cpuidle_device *dev);
+void cpuidle_coupled_unregister_device(struct cpuidle_device *dev);
+#else
+static inline bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int state)
+{
+	return false;
+}
+
+static inline int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
+		struct cpuidle_driver *drv, int next_state)
+{
+	return -1;
+}
+
+static inline int cpuidle_coupled_register_device(struct cpuidle_device *dev)
+{
+	return 0;
+}
+
+static inline void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
+{
+}
+#endif
+
 #endif /* __DRIVER_CPUIDLE_H */
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index 6c26a3d..6038448 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -57,6 +57,7 @@ struct cpuidle_state {
 
 /* Idle State Flags */
 #define CPUIDLE_FLAG_TIME_VALID	(0x01) /* is residency time measurable? */
+#define CPUIDLE_FLAG_COUPLED	(0x02) /* state applies to multiple cpus */
 
 #define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000)
 
@@ -100,6 +101,12 @@ struct cpuidle_device {
 	struct list_head 	device_list;
 	struct kobject		kobj;
 	struct completion	kobj_unregister;
+
+#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
+	int			safe_state_index;
+	cpumask_t		coupled_cpus;
+	struct cpuidle_coupled	*coupled;
+#endif
 };
 
 DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices);
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCHv3 4/5] cpuidle: coupled: add parallel barrier function
  2012-04-30 20:09 ` Colin Cross
@ 2012-04-30 20:09   ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-04-30 20:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, linux-pm, Kevin Hilman, Len Brown,
	Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Colin Cross

Adds cpuidle_coupled_parallel_barrier, which can be used by coupled
cpuidle state enter functions to handle resynchronization after
determining if any cpu needs to abort.  The normal use case will
be:

static bool abort_flag;
static atomic_t abort_barrier;

int arch_cpuidle_enter(struct cpuidle_device *dev, ...)
{
	if (arch_turn_off_irq_controller()) {
	   	/* returns an error if an irq is pending and would be lost
		   if idle continued and turned off power */
		abort_flag = true;
	}

	cpuidle_coupled_parallel_barrier(dev, &abort_barrier);

	if (abort_flag) {
		/* One of the cpus didn't turn off its irq controller */
		arch_turn_on_irq_controller();
		return -EINTR;
	}

	/* continue with idle */
	...
}

This will cause all cpus to abort idle together if one of them needs
to abort.
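
The reuse trick inside the barrier (each cpu counts the shared variable
up twice, and the last arrival of the second phase resets it to 0) can
be modelled by a user-space analogue with C11 atomics.  This is only an
illustrative sketch of the logic in the patch below, not kernel code:

/* Build with: cc -std=c11 -pthread barrier.c (hypothetical file name) */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS 4

static atomic_int barrier_val;		/* plays the role of the atomic_t argument */

static void parallel_barrier(atomic_int *a, int n)
{
	/* phase 1: every cpu checks in once */
	atomic_fetch_add(a, 1);
	while (atomic_load(a) < n)
		;			/* cpu_relax() in the kernel */

	/* phase 2: every cpu checks in again; the last arrival resets the
	 * variable so the barrier can be reused immediately */
	if (atomic_fetch_add(a, 1) + 1 == 2 * n) {
		atomic_store(a, 0);
		return;
	}
	while (atomic_load(a) > n)
		;			/* wait for the reset to 0 */
}

static void *worker(void *arg)
{
	long id = (long)arg;

	printf("cpu %ld: before barrier\n", id);
	parallel_barrier(&barrier_val, NCPUS);
	printf("cpu %ld: after barrier\n", id);
	return NULL;
}

int main(void)
{
	pthread_t t[NCPUS];
	long i;

	for (i = 0; i < NCPUS; i++)
		pthread_create(&t[i], NULL, worker, (void *)i);
	for (i = 0; i < NCPUS; i++)
		pthread_join(t[i], NULL);
	return 0;
}

Because the variable is back to 0 as soon as the last cpu passes the
second phase, the same atomic can be handed to the next barrier call
without any re-initialization.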

Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Colin Cross <ccross@android.com>
---
 drivers/cpuidle/coupled.c |   37 +++++++++++++++++++++++++++++++++++++
 include/linux/cpuidle.h   |    4 ++++
 2 files changed, 41 insertions(+), 0 deletions(-)

diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index d097826..242dc7c 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -134,6 +134,43 @@ struct cpuidle_coupled {
 static cpumask_t cpuidle_coupled_poked_mask;
 
 /**
+ * cpuidle_coupled_parallel_barrier - synchronize all online coupled cpus
+ * @dev: cpuidle_device of the calling cpu
+ * @a:   atomic variable to hold the barrier
+ *
+ * No caller to this function will return from this function until all online
+ * cpus in the same coupled group have called this function.  Once any caller
+ * has returned from this function, the barrier is immediately available for
+ * reuse.
+ *
+ * The atomic variable a must be initialized to 0 before any cpu calls
+ * this function, and will be reset to 0 before any cpu returns from this function.
+ *
+ * Must only be called from within a coupled idle state handler
+ * (state.enter when state.flags has CPUIDLE_FLAG_COUPLED set).
+ *
+ * Provides full smp barrier semantics before and after calling.
+ */
+void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a)
+{
+	int n = atomic_read(&dev->coupled->alive_count);
+
+	smp_mb__before_atomic_inc();
+	atomic_inc(a);
+
+	while (atomic_read(a) < n)
+		cpu_relax();
+
+	if (atomic_inc_return(a) == n * 2) {
+		atomic_set(a, 0);
+		return;
+	}
+
+	while (atomic_read(a) > n)
+		cpu_relax();
+}
+
+/**
  * cpuidle_state_is_coupled - check if a state is part of a coupled set
  * @dev: struct cpuidle_device for the current cpu
  * @drv: struct cpuidle_driver for the platform
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index 6038448..5ab7183 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -183,6 +183,10 @@ static inline int cpuidle_wrap_enter(struct cpuidle_device *dev,
 
 #endif
 
+#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
+void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a);
+#endif
+
 /******************************
  * CPUIDLE GOVERNOR INTERFACE *
  ******************************/
-- 
1.7.7.3


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCHv3 5/5] cpuidle: coupled: add trace events
  2012-04-30 20:09 ` Colin Cross
  (?)
@ 2012-04-30 20:09   ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-04-30 20:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, linux-pm, Kevin Hilman, Len Brown,
	Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Colin Cross

Adds trace events to allow debugging of coupled cpuidle.
Can be used to verify cpuidle performance, including time spent
spinning and time spent in safe states.

Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Colin Cross <ccross@android.com>
---
 drivers/cpuidle/coupled.c      |   29 +++++-
 include/trace/events/cpuidle.h |  243 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 270 insertions(+), 2 deletions(-)
 create mode 100644 include/trace/events/cpuidle.h

v3:
   * removed debugging code from cpuidle_coupled_parallel_barrier
     so this patch can be merged to help with debugging new
     coupled cpuidle drivers
   * made tracing _rcuidle

diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index 242dc7c..6b63d67 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -26,6 +26,11 @@
 
 #include "cpuidle.h"
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/cpuidle.h>
+
+atomic_t cpuidle_trace_seq;
+
 /**
  * DOC: Coupled cpuidle states
  *
@@ -232,6 +237,7 @@ static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
 static void cpuidle_coupled_poked(void *info)
 {
 	int cpu = (unsigned long)info;
+	trace_coupled_poked_rcuidle(cpu);
 	cpumask_clear_cpu(cpu, &cpuidle_coupled_poked_mask);
 }
 
@@ -251,8 +257,10 @@ static void cpuidle_coupled_poke(int cpu)
 {
 	struct call_single_data *csd = &per_cpu(cpuidle_coupled_poke_cb, cpu);
 
-	if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask))
+	if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask)) {
+		trace_coupled_poke_rcuidle(cpu);
 		__smp_call_function_single(cpu, csd, 0);
+	}
 }
 
 /**
@@ -361,28 +369,37 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
 	BUG_ON(atomic_read(&coupled->ready_count));
 	cpuidle_coupled_set_waiting(dev, coupled, next_state);
 
+	trace_coupled_enter_rcuidle(dev->cpu);
+
 retry:
 	/*
 	 * Wait for all coupled cpus to be idle, using the deepest state
 	 * allowed for a single cpu.
 	 */
 	while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
+		trace_coupled_safe_enter_rcuidle(dev->cpu);
 		entered_state = cpuidle_enter_state(dev, drv,
 			dev->safe_state_index);
+		trace_coupled_safe_exit_rcuidle(dev->cpu);
 
+		trace_coupled_spin_rcuidle(dev->cpu);
 		local_irq_enable();
 		while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
 			cpu_relax();
 		local_irq_disable();
+		trace_coupled_unspin_rcuidle(dev->cpu);
 	}
 
 	/* give a chance to process any remaining pokes */
+	trace_coupled_spin_rcuidle(dev->cpu);
 	local_irq_enable();
 	while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
 		cpu_relax();
 	local_irq_disable();
+	trace_coupled_unspin_rcuidle(dev->cpu);
 
 	if (need_resched()) {
+		trace_coupled_abort_rcuidle(dev->cpu);
 		cpuidle_coupled_set_not_waiting(dev, coupled);
 		goto out;
 	}
@@ -401,29 +418,35 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
 	smp_mb__after_atomic_inc();
 	/* alive_count can't change while ready_count > 0 */
 	alive = atomic_read(&coupled->alive_count);
+	trace_coupled_spin_rcuidle(dev->cpu);
 	while (atomic_read(&coupled->ready_count) != alive) {
 		/* Check if any other cpus bailed out of idle. */
 		if (!cpuidle_coupled_cpus_waiting(coupled)) {
 			atomic_dec(&coupled->ready_count);
 			smp_mb__after_atomic_dec();
+			trace_coupled_detected_abort_rcuidle(dev->cpu);
 			goto retry;
 		}
 
 		cpu_relax();
 	}
+	trace_coupled_unspin_rcuidle(dev->cpu);
 
 	/* all cpus have acked the coupled state */
 	smp_rmb();
 
 	next_state = cpuidle_coupled_get_state(dev, coupled);
-
+	trace_coupled_idle_enter_rcuidle(dev->cpu);
 	entered_state = cpuidle_enter_state(dev, drv, next_state);
+	trace_coupled_idle_exit_rcuidle(dev->cpu);
 
 	cpuidle_coupled_set_not_waiting(dev, coupled);
 	atomic_dec(&coupled->ready_count);
 	smp_mb__after_atomic_dec();
 
 out:
+	trace_coupled_exit_rcuidle(dev->cpu);
+
 	/*
 	 * Normal cpuidle states are expected to return with irqs enabled.
 	 * That leads to an inefficiency where a cpu receiving an interrupt
@@ -445,8 +468,10 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
 	 * a cpu exits and re-enters the ready state because this cpu has
 	 * already decremented its waiting_count.
 	 */
+	trace_coupled_spin_rcuidle(dev->cpu);
 	while (atomic_read(&coupled->ready_count) != 0)
 		cpu_relax();
+	trace_coupled_unspin_rcuidle(dev->cpu);
 
 	smp_rmb();
 
diff --git a/include/trace/events/cpuidle.h b/include/trace/events/cpuidle.h
new file mode 100644
index 0000000..9b2cbbb
--- /dev/null
+++ b/include/trace/events/cpuidle.h
@@ -0,0 +1,243 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM cpuidle
+
+#if !defined(_TRACE_CPUIDLE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_CPUIDLE_H
+
+#include <linux/atomic.h>
+#include <linux/tracepoint.h>
+
+extern atomic_t cpuidle_trace_seq;
+
+TRACE_EVENT(coupled_enter,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_exit,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_spin,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_unspin,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_safe_enter,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_safe_exit,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_idle_enter,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_idle_exit,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_abort,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_detected_abort,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_poke,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_poked,
+
+	TP_PROTO(unsigned int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, cpu)
+		__field(unsigned int, seq)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+	),
+
+	TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+#endif /* if !defined(_TRACE_CPUIDLE_H) || defined(TRACE_HEADER_MULTI_READ) */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
-- 
1.7.7.3
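
For readers adapting this debugging patch: the twelve TRACE_EVENT definitions
above share one identical layout, so the same events can also be expressed
with the kernel's event-class helpers.  A minimal sketch follows; the class
name coupled_template is illustrative and not part of the posted patch, and
the emitted trace points are unchanged.

DECLARE_EVENT_CLASS(coupled_template,

	TP_PROTO(unsigned int cpu),

	TP_ARGS(cpu),

	TP_STRUCT__entry(
		__field(unsigned int, cpu)
		__field(unsigned int, seq)
	),

	TP_fast_assign(
		__entry->cpu = cpu;
		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
	),

	TP_printk("%u %u", __entry->seq, __entry->cpu)
);

/* One DEFINE_EVENT per event keeps the trace point names unchanged. */
DEFINE_EVENT(coupled_template, coupled_enter,
	TP_PROTO(unsigned int cpu),
	TP_ARGS(cpu));

DEFINE_EVENT(coupled_template, coupled_exit,
	TP_PROTO(unsigned int cpu),
	TP_ARGS(cpu));

/* ...and likewise for coupled_spin, coupled_unspin, coupled_safe_enter,
 * coupled_safe_exit, coupled_idle_enter, coupled_idle_exit, coupled_abort,
 * coupled_detected_abort, coupled_poke and coupled_poked. */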


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-04-30 20:09 ` Colin Cross
  (?)
@ 2012-04-30 21:18   ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-04-30 21:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-arm-kernel, linux-pm, Kevin Hilman, Len Brown,
	Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Colin Cross

On Mon, Apr 30, 2012 at 1:09 PM, Colin Cross <ccross@android.com> wrote:
> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> cpus cannot be independently powered down, either due to
> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> power down), or due to HW bugs (on OMAP4460, a cpu powering up
> will corrupt the gic state unless the other cpu runs a work
> around).  Each cpu has a power state that it can enter without
> coordinating with the other cpu (usually Wait For Interrupt, or
> WFI), and one or more "coupled" power states that affect blocks
> shared between the cpus (L2 cache, interrupt controller, and
> sometimes the whole SoC).  Entering a coupled power state must
> be tightly controlled on both cpus.
>
> The easiest solution to implementing coupled cpu power states is
> to hotplug all but one cpu whenever possible, usually using a
> cpufreq governor that looks at cpu load to determine when to
> enable the secondary cpus.  This causes problems, as hotplug is an
> expensive operation, so the number of hotplug transitions must be
> minimized, leading to very slow response to loads, often on the
> order of seconds.
>
> This patch series implements an alternative solution, where each
> cpu will wait in the WFI state until all cpus are ready to enter
> a coupled state, at which point the coupled state function will
> be called on all cpus at approximately the same time.
>
> Once all cpus are ready to enter idle, they are woken by an smp
> cross call.  At this point, there is a chance that one of the
> cpus will find work to do, and choose not to enter suspend.  A
> final pass is needed to guarantee that all cpus will call the
> power state enter function at the same time.  During this pass,
> each cpu will increment the ready counter, and continue once the
> ready counter matches the number of online coupled cpus.  If any
> cpu exits idle, the other cpus will decrement their counter and
> retry.
>
> To use coupled cpuidle states, a cpuidle driver must:
>
>   Set struct cpuidle_device.coupled_cpus to the mask of all
>   coupled cpus, usually the same as cpu_possible_mask if all cpus
>   are part of the same cluster.  The coupled_cpus mask must be
>   set in the struct cpuidle_device for each cpu.
>
>   Set struct cpuidle_device.safe_state to a state that is not a
>   coupled state.  This is usually WFI.
>
>   Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
>   state that affects multiple cpus.
>
>   Provide a struct cpuidle_state.enter function for each state
>   that affects multiple cpus.  This function is guaranteed to be
>   called on all cpus at approximately the same time.  The driver
>   should ensure that the cpus all abort together if any cpu tries
>   to abort once the function is called.
>
> This series has been tested by implementing a test cpuidle state
> that uses the parallel barrier helper function to verify that
> all cpus call the function at the same time.
>
> This patch set has a few disadvantages over the hotplug governor,
> but I think they are all fairly minor:
>   * Worst-case interrupt latency can be increased.  If one cpu
>     receives an interrupt while the other is spinning in the
>     ready_count loop, the second cpu will be stuck with
>     interrupts off until the first cpu finished processing
>     its interrupt and exits idle.  This will increase the worst
>     case interrupt latency by the worst-case interrupt processing
>     time, but should be very rare.
>   * Interrupts are processed while still inside pm_idle.
>     Normally, interrupts are only processed at the very end of
>     pm_idle, just before it returns to the idle loop.  Coupled
>     states requires processing interrupts inside
>     cpuidle_enter_state_coupled in order to distinguish between
>     the smp_cross_call from another cpu that is now idle and an
>     interrupt that should cause idle to exit.
>     I don't see a way to fix this without either being able to
>     read the next pending irq from the interrupt chip, or
>     querying the irq core for which interrupts were processed.
>   * Since interrupts are processed inside cpuidle, the next
>     timer event could change.  The new timer event will be
>     handled correctly, but the idle state decision made by
>     the governor will be out of date, and will not be revisited.
>     The governor select function could be called again every time,
>     but this could lead to a lot of work being done by an idle
>     cpu if the other cpu was mostly busy.
>
> v2:
>   * removed the coupled lock, replacing it with atomic counters
>   * added a check for outstanding pokes before beginning the
>     final transition to avoid extra wakeups
>   * made the cpuidle_coupled struct completely private
>   * fixed kerneldoc comment formatting
>   * added a patch with a helper function for resynchronizing
>     cpus after aborting idle
>   * added a patch (not for merging) to add trace events for
>     verification and performance testing
>
> v3:
>   * rebased on v3.4-rc4 by Santosh
>   * fixed decrement in cpuidle_coupled_cpu_set_alive
>   * updated tracing patch to remove unnecessary debugging so
>     it can be merged
>   * made tracing _rcuidle
>
> This series has been tested and reviewed by Santosh and Kevin
> for OMAP4, which has a cpuidle series ready for 3.5, and Tegra
> and Exynos5 patches are in progress.  I think this is ready to
> go in.  Lean, are you maintaining a cpuidle tree for linux-next?
Sorry, *Len.

> If not, I can publish a tree for linux-next, or this could go in
> through Arnd's tree.
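
As a concrete reading of the driver requirements quoted above, a minimal
sketch of one possible setup is below.  The example_* names, the state table
and the latency numbers are illustrative only and not taken from any posted
driver; the fields used (coupled_cpus and safe_state_index in struct
cpuidle_device, CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags) follow
the cover letter and the v3 code.

#include <linux/cpuidle.h>
#include <linux/cpumask.h>

/* Illustrative enter callbacks; a real driver runs SoC-specific code here. */
static int example_wfi_enter(struct cpuidle_device *dev,
			     struct cpuidle_driver *drv, int index)
{
	/* per-cpu safe state: nothing shared between cpus is touched */
	return index;
}

static int example_coupled_enter(struct cpuidle_device *dev,
				 struct cpuidle_driver *drv, int index)
{
	/*
	 * Called on all coupled cpus at approximately the same time.  The
	 * driver must make all cpus abort together if any one of them has
	 * to bail out; cpuidle_coupled_parallel_barrier from patch 4/5 is
	 * the helper for that resynchronization.
	 */
	return index;
}

static struct cpuidle_driver example_idle_driver = {
	.name = "example_coupled_idle",
	.states = {
		[0] = {			/* safe state, typically WFI */
			.name = "WFI",
			.enter = example_wfi_enter,
			.exit_latency = 1,
			.target_residency = 1,
		},
		[1] = {			/* affects shared blocks, so coupled */
			.name = "C2",
			.enter = example_coupled_enter,
			.flags = CPUIDLE_FLAG_COUPLED,
			.exit_latency = 1000,
			.target_residency = 2000,
		},
	},
	.state_count = 2,
};

/* Per cpu, before cpuidle_register_device(dev): */
static void example_setup_device(struct cpuidle_device *dev)
{
	cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
	dev->safe_state_index = 0;	/* index of the WFI state above */
}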

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-04-30 20:09 ` Colin Cross
  (?)
@ 2012-04-30 21:25   ` Rafael J. Wysocki
  -1 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-04-30 21:25 UTC (permalink / raw)
  To: Colin Cross
  Cc: linux-kernel, linux-arm-kernel, linux-pm, Kevin Hilman,
	Len Brown, Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Len Brown

Hi,

I have a comment, which isn't about the series itself, but something
that may be worth thinking about.

On Monday, April 30, 2012, Colin Cross wrote:
> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> cpus cannot be independently powered down, either due to
> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> power down), or due to HW bugs (on OMAP4460, a cpu powering up
> will corrupt the gic state unless the other cpu runs a work
> around).  Each cpu has a power state that it can enter without
> coordinating with the other cpu (usually Wait For Interrupt, or
> WFI), and one or more "coupled" power states that affect blocks
> shared between the cpus (L2 cache, interrupt controller, and
> sometimes the whole SoC).  Entering a coupled power state must
> be tightly controlled on both cpus.

That seems to be a special case of a more general situation where
a number of CPU cores belong into a single power domain, possibly along
some I/O devices.

We'll need to handle the general case at one point anyway, so I wonder if
the approach shown here may get us in the way?

> The easiest solution to implementing coupled cpu power states is
> to hotplug all but one cpu whenever possible, usually using a
> cpufreq governor that looks at cpu load to determine when to
> enable the secondary cpus.  This causes problems, as hotplug is an
> expensive operation, so the number of hotplug transitions must be
> minimized, leading to very slow response to loads, often on the
> order of seconds.

This isn't a solution at all, rather a workaround and a poor one for that
matter.

> This patch series implements an alternative solution, where each
> cpu will wait in the WFI state until all cpus are ready to enter
> a coupled state, at which point the coupled state function will
> be called on all cpus at approximately the same time.
> 
> Once all cpus are ready to enter idle, they are woken by an smp
> cross call.

Is it really necessary to wake up all of the CPUs in WFI before
going to deeper idle?  We should be able to figure out when they
are going to be needed next time without waking them up and we should
know the latency to wake up from the deeper multi-CPU "C-state",
so it should be possible to decide whether or not to go to deeper
idle without the SMP cross call.  Is there anything I'm missing here?

Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-04-30 21:25   ` Rafael J. Wysocki
  (?)
@ 2012-04-30 21:37     ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-04-30 21:37 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: linux-kernel, linux-arm-kernel, linux-pm, Kevin Hilman,
	Len Brown, Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Len Brown

On Mon, Apr 30, 2012 at 2:25 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> Hi,
>
> I have a comment, which isn't about the series itself, but something
> that may be worth thinking about.
>
> On Monday, April 30, 2012, Colin Cross wrote:
>> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
>> cpus cannot be independently powered down, either due to
>> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
>> power down), or due to HW bugs (on OMAP4460, a cpu powering up
>> will corrupt the gic state unless the other cpu runs a work
>> around).  Each cpu has a power state that it can enter without
>> coordinating with the other cpu (usually Wait For Interrupt, or
>> WFI), and one or more "coupled" power states that affect blocks
>> shared between the cpus (L2 cache, interrupt controller, and
>> sometimes the whole SoC).  Entering a coupled power state must
>> be tightly controlled on both cpus.
>
> That seems to be a special case of a more general situation where
> a number of CPU cores belong into a single power domain, possibly along
> some I/O devices.
>
> We'll need to handle the general case at one point anyway, so I wonder if
> the approach shown here may get us in the way?

I can't parse what you're saying here.

>> The easiest solution to implementing coupled cpu power states is
>> to hotplug all but one cpu whenever possible, usually using a
>> cpufreq governor that looks at cpu load to determine when to
>> enable the secondary cpus.  This causes problems, as hotplug is an
>> expensive operation, so the number of hotplug transitions must be
>> minimized, leading to very slow response to loads, often on the
>> order of seconds.
>
> This isn't a solution at all, rather a workaround and a poor one for that
> matter.

Yes, which is what started me on this series.

>> This patch series implements an alternative solution, where each
>> cpu will wait in the WFI state until all cpus are ready to enter
>> a coupled state, at which point the coupled state function will
>> be called on all cpus at approximately the same time.
>>
>> Once all cpus are ready to enter idle, they are woken by an smp
>> cross call.
>
> Is it really necessary to wake up all of the CPUs in WFI before
> going to deeper idle?  We should be able to figure out when they
> are going to be needed next time without waking them up and we should
> know the latency to wake up from the deeper multi-CPU "C-state",
> so it should be possible to decide whether or not to go to deeper
> idle without the SMP cross call.  Is there anything I'm missing here?

The decision to go to the lower state has already been made when the
cross call occurs.  On the platforms I have worked directly with so
far (Tegra2 and OMAP4460), the secondary cpu needs to execute code
before the primary cpu turns off the power.  For example, on OMAP4460,
the secondary cpu needs to go from WFI (clock gated) to OFF (power
gated), because OFF is not supported as an individual cpu state due to
a ROM code bug.  To do that transition, it needs to come out of WFI,
set up its power domain registers, save a bunch of state, and
transition to OFF.
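
Purely as an illustration, that sequence might look roughly like the
following coupled-state enter hook; the hyp_* helpers below are invented
placeholders for the platform code, not real kernel functions:

#include <linux/cpuidle.h>

/* Illustrative sketch only -- not the real OMAP4460 driver code. */
static int hyp_omap4460_enter_coupled(struct cpuidle_device *dev,
                                      struct cpuidle_driver *drv, int index)
{
        if (dev->cpu != 0) {
                /* Secondary: leave WFI, program its power domain registers
                 * for OFF, save its state and power-gate itself. */
                hyp_secondary_prepare_off();    /* hypothetical helper */
                hyp_secondary_save_and_off();   /* hypothetical helper */
        } else {
                /* Primary: wait until the secondary has reached OFF, then
                 * take the shared blocks (L2, GIC) and itself down. */
                hyp_wait_for_secondary_off();   /* hypothetical helper */
                hyp_cluster_power_down();       /* hypothetical helper */
        }
        return index;
}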

On Tegra3, the deepest individual cpu state for cpus 1-3 is OFF, the
same state the cpu would go into as the first step of a transition to
a deeper power state (cpus 0-3 OFF).  It would be more optimal in that
case to bypass the SMP cross call, and leave the cpu in OFF, but that
would require some way of disabling all wakeups for the secondary cpus
and then verifying that they didn't start waking up just before the
wakeups were disabled.  I have just started considering this
optimization, but I don't see anything in the existing code that would
prevent adding it later.

A simple measurement using the tracing may show that it is
unnecessary.  If the wakeup time for CPU1 to go from OFF to active is
small there might be no need to optimize out the extra wakeup.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-04-30 21:37     ` Colin Cross
  (?)
@ 2012-04-30 21:54       ` Rafael J. Wysocki
  -1 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-04-30 21:54 UTC (permalink / raw)
  To: Colin Cross
  Cc: linux-kernel, linux-arm-kernel, linux-pm, Kevin Hilman,
	Len Brown, Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Len Brown

On Monday, April 30, 2012, Colin Cross wrote:
> On Mon, Apr 30, 2012 at 2:25 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> > Hi,
> >
> > I have a comment, which isn't about the series itself, but something
> > that may be worth thinking about.
> >
> > On Monday, April 30, 2012, Colin Cross wrote:
> >> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> >> cpus cannot be independently powered down, either due to
> >> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> >> power down), or due to HW bugs (on OMAP4460, a cpu powering up
> >> will corrupt the gic state unless the other cpu runs a work
> >> around).  Each cpu has a power state that it can enter without
> >> coordinating with the other cpu (usually Wait For Interrupt, or
> >> WFI), and one or more "coupled" power states that affect blocks
> >> shared between the cpus (L2 cache, interrupt controller, and
> >> sometimes the whole SoC).  Entering a coupled power state must
> >> be tightly controlled on both cpus.
> >
> > That seems to be a special case of a more general situation where
> > a number of CPU cores belong into a single power domain, possibly along
> > some I/O devices.
> >
> > We'll need to handle the general case at one point anyway, so I wonder if
> > the approach shown here may get us in the way?
> 
> I can't parse what you're saying here.

The general case is a CPU core in one PM domain with a number of I/O
devices and a number of other CPU cores.  If we forget about the I/O
devices, we get a situation your patchset is addressing, so the
question is how difficult it is going to be to extend it to cover the
I/O devices as well.

> >> The easiest solution to implementing coupled cpu power states is
> >> to hotplug all but one cpu whenever possible, usually using a
> >> cpufreq governor that looks at cpu load to determine when to
> >> enable the secondary cpus.  This causes problems, as hotplug is an
> >> expensive operation, so the number of hotplug transitions must be
> >> minimized, leading to very slow response to loads, often on the
> >> order of seconds.
> >
> > This isn't a solution at all, rather a workaround and a poor one for that
> > matter.
> 
> Yes, which is what started me on this series.
> 
> >> This patch series implements an alternative solution, where each
> >> cpu will wait in the WFI state until all cpus are ready to enter
> >> a coupled state, at which point the coupled state function will
> >> be called on all cpus at approximately the same time.
> >>
> >> Once all cpus are ready to enter idle, they are woken by an smp
> >> cross call.
> >
> > Is it really necessary to wake up all of the CPUs in WFI before
> > going to deeper idle?  We should be able to figure out when they
> > are going to be needed next time without waking them up and we should
> > know the latency to wake up from the deeper multi-CPU "C-state",
> > so it should be possible to decide whether or not to go to deeper
> > idle without the SMP cross call.  Is there anything I'm missing here?
> 
> The decision to go to the lower state has already been made when the
> cross call occurs.  On the platforms I have worked directly with so
> far (Tegra2 and OMAP4460), the secondary cpu needs to execute code
> before the primary cpu turns off the power.  For example, on OMAP4460,
> the secondary cpu needs to go from WFI (clock gated) to OFF (power
> gated), because OFF is not supported as an individual cpu state due to
> a ROM code bug.  To do that transition, it needs to come out of WFI,
> set up its power domain registers, save a bunch of state, and
> transition to OFF.
> 
> On Tegra3, the deepest individual cpu state for cpus 1-3 is OFF, the
> same state the cpu would go into as the first step of a transition to
> a deeper power state (cpus 0-3 OFF).  It would be more optimal in that
> case to bypass the SMP cross call, and leave the cpu in OFF, but that
> would require some way of disabling all wakeups for the secondary cpus
> and then verifying that they didn't start waking up just before the
> wakeups were disabled.  I have just started considering this
> optimization, but I don't see anything in the existing code that would
> prevent adding it later.

OK

> A simple measurement using the tracing may show that it is
> unnecessary.  If the wakeup time for CPU1 to go from OFF to active is
> small there might be no need to optimize out the extra wakeup.

I see.

So, in the end, it may always be more straightforward to put individual
CPU cores into single-core idle states until the "we can all go to
deeper idle" condition is satisfied and then wake them all up and let
each of them do the transition individually, right?

Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-04-30 21:54       ` Rafael J. Wysocki
  (?)
@ 2012-04-30 22:01         ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-04-30 22:01 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: linux-kernel, linux-arm-kernel, linux-pm, Kevin Hilman,
	Len Brown, Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Len Brown

On Mon, Apr 30, 2012 at 2:54 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> On Monday, April 30, 2012, Colin Cross wrote:
>> On Mon, Apr 30, 2012 at 2:25 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
>> > Hi,
>> >
>> > I have a comment, which isn't about the series itself, but something
>> > that may be worth thinking about.
>> >
>> > On Monday, April 30, 2012, Colin Cross wrote:
>> >> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
>> >> cpus cannot be independently powered down, either due to
>> >> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
>> >> power down), or due to HW bugs (on OMAP4460, a cpu powering up
>> >> will corrupt the gic state unless the other cpu runs a work
>> >> around).  Each cpu has a power state that it can enter without
>> >> coordinating with the other cpu (usually Wait For Interrupt, or
>> >> WFI), and one or more "coupled" power states that affect blocks
>> >> shared between the cpus (L2 cache, interrupt controller, and
>> >> sometimes the whole SoC).  Entering a coupled power state must
>> >> be tightly controlled on both cpus.
>> >
>> > That seems to be a special case of a more general situation where
>> > a number of CPU cores belong into a single power domain, possibly along
>> > some I/O devices.
>> >
>> > We'll need to handle the general case at one point anyway, so I wonder if
>> > the approach shown here may get us in the way?
>>
>> I can't parse what you're saying here.
>
> The general case is a CPU core in one PM domain with a number of I/O
> devices and a number of other CPU cores.  If we forget about the I/O
> devices, we get a situation your patchset is addressing, so the
> question is how difficult it is going to be to extend it to cover the
> I/O devices as well.

The logic in this patch set is always going to be required to get
multiple cpus to coordinate an idle transition, and it will need to
stay fairly tightly coupled with cpuidle to correctly track the idle
time statistics for the intermediate and final states.  I don't think
there would be an issue if it ends up getting hoisted out into a
future combined cpu/IO power domain, but it seems more likely that the
coupled cpu idle states would call into the power domain to say they
no longer need power.
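
As a rough sketch of that direction (the hyp_* hooks are hypothetical, there
is no such API today): the last CPU to reach the coupled state could simply
tell the domain code that the CPU cluster no longer needs power, and let the
domain decide what to switch off:

#include <linux/atomic.h>
#include <linux/cpuidle.h>
#include <linux/cpumask.h>

static atomic_t hyp_cpus_in_coupled_state = ATOMIC_INIT(0);

/* Illustrative sketch only: called from the coupled state's enter function. */
static void hyp_release_cpu_power_domain(struct cpuidle_device *dev)
{
        int ncpus = cpumask_weight(&dev->coupled_cpus);

        if (atomic_inc_return(&hyp_cpus_in_coupled_state) == ncpus)
                hyp_domain_power_off();         /* hypothetical domain hook */
        else
                hyp_cpu_power_off(dev->cpu);    /* hypothetical per-cpu off */

        /* Each CPU drops its vote again once it is woken back up. */
        atomic_dec(&hyp_cpus_in_coupled_state);
}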

>> >> The easiest solution to implementing coupled cpu power states is
>> >> to hotplug all but one cpu whenever possible, usually using a
>> >> cpufreq governor that looks at cpu load to determine when to
>> >> enable the secondary cpus.  This causes problems, as hotplug is an
>> >> expensive operation, so the number of hotplug transitions must be
>> >> minimized, leading to very slow response to loads, often on the
>> >> order of seconds.
>> >
>> > This isn't a solution at all, rather a workaround and a poor one for that
>> > matter.
>>
>> Yes, which is what started me on this series.
>>
>> >> This patch series implements an alternative solution, where each
>> >> cpu will wait in the WFI state until all cpus are ready to enter
>> >> a coupled state, at which point the coupled state function will
>> >> be called on all cpus at approximately the same time.
>> >>
>> >> Once all cpus are ready to enter idle, they are woken by an smp
>> >> cross call.
>> >
>> > Is it really necessary to wake up all of the CPUs in WFI before
>> > going to deeper idle?  We should be able to figure out when they
>> > are going to be needed next time without waking them up and we should
>> > know the latency to wake up from the deeper multi-CPU "C-state",
>> > so it should be possible to decide whether or not to go to deeper
>> > idle without the SMP cross call.  Is there anything I'm missing here?
>>
>> The decision to go to the lower state has already been made when the
>> cross call occurs.  On the platforms I have worked directly with so
>> far (Tegra2 and OMAP4460), the secondary cpu needs to execute code
>> before the primary cpu turns off the power.  For example, on OMAP4460,
>> the secondary cpu needs to go from WFI (clock gated) to OFF (power
>> gated), because OFF is not supported as an individual cpu state due to
>> a ROM code bug.  To do that transition, it needs to come out of WFI,
>> set up its power domain registers, save a bunch of state, and
>> transition to OFF.
>>
>> On Tegra3, the deepest individual cpu state for cpus 1-3 is OFF, the
>> same state the cpu would go into as the first step of a transition to
>> a deeper power state (cpus 0-3 OFF).  It would be more optimal in that
>> case to bypass the SMP cross call, and leave the cpu in OFF, but that
>> would require some way of disabling all wakeups for the secondary cpus
>> and then verifying that they didn't start waking up just before the
>> wakeups were disabled.  I have just started considering this
>> optimization, but I don't see anything in the existing code that would
>> prevent adding it later.
>
> OK
>
>> A simple measurement using the tracing may show that it is
>> unnecessary.  If the wakeup time for CPU1 to go from OFF to active is
>> small there might be no need to optimize out the extra wakeup.
>
> I see.
>
> So, in the end, it may always be more straightforward to put individual
> CPU cores into single-core idle states until the "we can all go to
> deeper idle" condition is satisfied and then wake them all up and let
> each of them do the transition individually, right?

Yes, the tradeoff will be the complexity of code to handle a generic
way of holding another cpu in idle while this cpu does the transition
vs. the time and power required to bring a cpu back online just to put
it into a deeper state.  Right now, since all the users of this code
are using WFI for their intermediate state, it takes microseconds to
bring a cpu back up.  On Tegra3, the answer might be "sometimes" -
only cpu0 can perform the final idle state transition, so if cpu1 is
the last to go to idle, it will always have to SMP cross call to cpu0,
but if cpu0 is the last to go idle it may be able to avoid waking up
cpu1.
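
For illustration, the decision described above might boil down to something
like this, with the wakeup-mask test left as a hypothetical hyp_* placeholder:

#include <linux/types.h>

/* Illustrative sketch only: does the last CPU to go idle need to kick cpu0? */
static bool hyp_need_cross_call_to_cpu0(int last_cpu_to_idle)
{
        if (last_cpu_to_idle != 0)
                return true;    /* only cpu0 can do the final transition */

        /* cpu0 went idle last: waking cpu1 is only needed if cpu1 could
         * still take a wakeup of its own while sitting in OFF. */
        return !hyp_secondary_wakeups_masked(); /* hypothetical */
}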

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-04-30 21:37     ` Colin Cross
  (?)
@ 2012-05-01 10:43       ` Lorenzo Pieralisi
  -1 siblings, 0 replies; 78+ messages in thread
From: Lorenzo Pieralisi @ 2012-05-01 10:43 UTC (permalink / raw)
  To: Colin Cross
  Cc: Rafael J. Wysocki, linux-kernel, linux-arm-kernel, linux-pm,
	Kevin Hilman, Len Brown, Trinabh Gupta, Arjan van de Ven,
	Deepthi Dharwar, Greg Kroah-Hartman, Kay Sievers,
	Santosh Shilimkar, Daniel Lezcano, Amit Kucheria, Arnd Bergmann,
	Russell King, Len Brown

Hi Colin,

On Mon, Apr 30, 2012 at 10:37:30PM +0100, Colin Cross wrote:
> On Mon, Apr 30, 2012 at 2:25 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> > Hi,
> >
> > I have a comment, which isn't about the series itself, but something
> > that may be worth thinking about.
> >
> > On Monday, April 30, 2012, Colin Cross wrote:
> >> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> >> cpus cannot be independently powered down, either due to
> >> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> >> power down), or due to HW bugs (on OMAP4460, a cpu powering up
> >> will corrupt the gic state unless the other cpu runs a work
> >> around).  Each cpu has a power state that it can enter without
> >> coordinating with the other cpu (usually Wait For Interrupt, or
> >> WFI), and one or more "coupled" power states that affect blocks
> >> shared between the cpus (L2 cache, interrupt controller, and
> >> sometimes the whole SoC).  Entering a coupled power state must
> >> be tightly controlled on both cpus.
> >
> > That seems to be a special case of a more general situation where
> > a number of CPU cores belong into a single power domain, possibly along
> > some I/O devices.
> >
> > We'll need to handle the general case at one point anyway, so I wonder if
> > the approach shown here may get us in the way?
> 
> I can't parse what you're saying here.
> 
> >> The easiest solution to implementing coupled cpu power states is
> >> to hotplug all but one cpu whenever possible, usually using a
> >> cpufreq governor that looks at cpu load to determine when to
> >> enable the secondary cpus.  This causes problems, as hotplug is an
> >> expensive operation, so the number of hotplug transitions must be
> >> minimized, leading to very slow response to loads, often on the
> >> order of seconds.
> >
> > This isn't a solution at all, rather a workaround and a poor one for that
> > matter.
> 
> Yes, which is what started me on this series.
> 
> >> This patch series implements an alternative solution, where each
> >> cpu will wait in the WFI state until all cpus are ready to enter
> >> a coupled state, at which point the coupled state function will
> >> be called on all cpus at approximately the same time.
> >>
> >> Once all cpus are ready to enter idle, they are woken by an smp
> >> cross call.
> >
> > Is it really necessary to wake up all of the CPUs in WFI before
> > going to deeper idle?  We should be able to figure out when they
> > are going to be needed next time without waking them up and we should
> > know the latency to wake up from the deeper multi-CPU "C-state",
> > so it should be possible to decide whether or not to go to deeper
> > idle without the SMP cross call.  Is there anything I'm missing here?
> 
> The decision to go to the lower state has already been made when the
> cross call occurs.  On the platforms I have worked directly with so
> far (Tegra2 and OMAP4460), the secondary cpu needs to execute code
> before the primary cpu turns off the power.  For example, on OMAP4460,
> the secondary cpu needs to go from WFI (clock gated) to OFF (power
> gated), because OFF is not supported as an individual cpu state due to
> a ROM code bug.  To do that transition, it needs to come out of WFI,
> set up its power domain registers, save a bunch of state, and
> transition to OFF.
> 
> On Tegra3, the deepest individual cpu state for cpus 1-3 is OFF, the
> same state the cpu would go into as the first step of a transition to
> a deeper power state (cpus 0-3 OFF).  It would be more optimal in that
> case to bypass the SMP cross call, and leave the cpu in OFF, but that
> would require some way of disabling all wakeups for the secondary cpus
> and then verifying that they didn't start waking up just before the
> wakeups were disabled.  I have just started considering this
> optimization, but I don't see anything in the existing code that would
> prevent adding it later.

I agree it is certainly an optimization that can be added later if benchmarks
show it is needed (but again it is heavily platform dependent, i.e. technology
dependent).
On a side note, disabling (or moving to the primary) wake-ups for "secondaries"
on platforms where every core is in a different power domain is still needed,
to avoid a situation where a CPU can independently get out of idle, i.e.
abort idle, after hitting the coupled barrier.
I still do not know whether coupled C-states should be used on those platforms,
but it is much better to have the choice there IMHO.
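
A tiny sketch of the "move the wake-ups to the primary" idea; the IRQ numbers
are made-up examples and entirely platform specific, while irq_set_affinity()
is the standard kernel interface for steering an interrupt:

#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>

static const unsigned int hyp_wakeup_irqs[] = { 39, 40 };       /* examples */

/* Route wakeup-capable interrupts to cpu0 before the coupled transition, so
 * a secondary cannot be woken independently behind the coupled barrier. */
static void hyp_route_wakeups_to_primary(void)
{
        int i;

        for (i = 0; i < ARRAY_SIZE(hyp_wakeup_irqs); i++)
                irq_set_affinity(hyp_wakeup_irqs[i], cpumask_of(0));
}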

I have also started thinking about a cluster or multi-CPU "next-event" check
that could avoid triggering heavy operations like L2 cleaning (i.e. cluster
shutdown) if a timer is about to expire on a given CPU; as you know, CPUs get
in and out of idle independently, so the governor decision made at the point
the coupled state barrier is hit might be stale.
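
Something along these lines, as a sketch only; hyp_next_event_us() is a
hypothetical "microseconds until this CPU's next expected wakeup" helper,
while target_residency is the existing per-state cpuidle field:

#include <linux/cpuidle.h>
#include <linux/cpumask.h>
#include <linux/types.h>

/* Illustrative sketch only: re-check at the coupled barrier whether cluster
 * shutdown is still worth the L2 clean, given each coupled CPU's next event. */
static bool hyp_cluster_shutdown_worthwhile(struct cpuidle_driver *drv,
                                            int index,
                                            const struct cpumask *coupled)
{
        unsigned int cpu;

        for_each_cpu(cpu, coupled)
                if (hyp_next_event_us(cpu) <
                    drv->states[index].target_residency)
                        return false;   /* a timer fires too soon */

        return true;
}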

I reckon the coupled C-state concept can prove to be an effective one for
some platforms; I am currently benchmarking it.

> A simple measurement using the tracing may show that it is
> unnecessary.  If the wakeup time for CPU1 to go from OFF to active is
> small there might be no need to optimize out the extra wakeup.

Indeed, it is all about resetting the CPU and getting it started; with an
inclusive L2, the power cost of shutting down a CPU and resuming it should be
low (and the timing very fast) for most platforms.

Lorenzo


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [PATCHv3 0/5] coupled cpuidle state support
@ 2012-05-01 10:43       ` Lorenzo Pieralisi
  0 siblings, 0 replies; 78+ messages in thread
From: Lorenzo Pieralisi @ 2012-05-01 10:43 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Colin,

On Mon, Apr 30, 2012 at 10:37:30PM +0100, Colin Cross wrote:
> On Mon, Apr 30, 2012 at 2:25 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> > Hi,
> >
> > I have a comment, which isn't about the series itself, but something
> > thay may be worth thinking about.
> >
> > On Monday, April 30, 2012, Colin Cross wrote:
> >> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> >> cpus cannot be independently powered down, either due to
> >> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> >> power down), or due to HW bugs (on OMAP4460, a cpu powering up
> >> will corrupt the gic state unless the other cpu runs a work
> >> around).  Each cpu has a power state that it can enter without
> >> coordinating with the other cpu (usually Wait For Interrupt, or
> >> WFI), and one or more "coupled" power states that affect blocks
> >> shared between the cpus (L2 cache, interrupt controller, and
> >> sometimes the whole SoC).  Entering a coupled power state must
> >> be tightly controlled on both cpus.
> >
> > That seems to be a special case of a more general situation where
> > a number of CPU cores belong into a single power domain, possibly along
> > some I/O devices.
> >
> > We'll need to handle the general case at one point anyway, so I wonder if
> > the approach shown here may get us in the way?
> 
> I can't parse what you're saying here.
> 
> >> The easiest solution to implementing coupled cpu power states is
> >> to hotplug all but one cpu whenever possible, usually using a
> >> cpufreq governor that looks at cpu load to determine when to
> >> enable the secondary cpus.  This causes problems, as hotplug is an
> >> expensive operation, so the number of hotplug transitions must be
> >> minimized, leading to very slow response to loads, often on the
> >> order of seconds.
> >
> > This isn't a solution at all, rather a workaround and a poor one for that
> > matter.
> 
> Yes, which is what started me on this series.
> 
> >> This patch series implements an alternative solution, where each
> >> cpu will wait in the WFI state until all cpus are ready to enter
> >> a coupled state, at which point the coupled state function will
> >> be called on all cpus at approximately the same time.
> >>
> >> Once all cpus are ready to enter idle, they are woken by an smp
> >> cross call.
> >
> > Is it really necessary to wake up all of the CPUs in WFI before
> > going to deeper idle?  We should be able to figure out when they
> > are going to be needed next time without waking them up and we should
> > know the latency to wake up from the deeper multi-CPU "C-state",
> > so it should be possible to decide whether or not to go to deeper
> > idle without the SMP cross call.  Is there anything I'm missing here?
> 
> The decision to go to the lower state has already been made when the
> cross call occurs.  On the platforms I have worked directly with so
> far (Tegra2 and OMAP4460), the secondary cpu needs to execute code
> before the primary cpu turns off the power.  For example, on OMAP4460,
> the secondary cpu needs to go from WFI (clock gated) to OFF (power
> gated), because OFF is not supported as an individual cpu state due to
> a ROM code bug.  To do that transition, it needs to come out of WFI,
> set up it's power domain registers, save a bunch of state, and
> transition to OFF.
> 
> On Tegra3, the deepest individual cpu state for cpus 1-3 is OFF, the
> same state the cpu would go into as the first step of a transition to
> a deeper power state (cpus 0-3 OFF).  It would be more optimal in that
> case to bypass the SMP cross call, and leave the cpu in OFF, but that
> would require some way of disabling all wakeups for the secondary cpus
> and then verifying that they didn't start waking up just before the
> wakeups were disabled.  I have just started considering this
> optimization, but I don't see anything in the existing code that would
> prevent adding it later.

I agree it is certainly an optimization that can be added later if benchmarks
show it is needed (but again it is heavily platform dependent, ie technology
dependent).
On a side note, disabling (or move to the primary) wake-ups for "secondaries"
on platforms where every core is in a different power domain is still needed
to avoid having a situation where a CPU can independently get out of idle, ie
abort idle, after hitting the coupled barrier.
Still do not know if for those platforms coupled C-states should be used, but
it is much better to have a choice there IMHO.

I have also started thinking about a cluster or multi-CPU "next-event" that
could avoid triggering heavy operations like L2 cleaning (ie cluster shutdown)
if a timer is about to expire on a given CPU (as you know CPUs get in and out
of idle independently so the governor decision at the point the coupled state
barrier is hit might be stale).

I reckon the coupled C-state concept can prove to be an effective one for
some platforms, currently benchmarking it.

> A simple measurement using the tracing may show that it is
> unnecessary.  If the wakeup time for CPU1 to go from OFF to active is
> small there might be no need to optimize out the extra wakeup.

Indeed, it is all about resetting the CPU and getting it started, with
inclusive L2 the power cost of shutting down a CPU and resuming it should be
low (and timing very fast) for most platforms.

Lorenzo

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-05-01 10:43       ` Lorenzo Pieralisi
  (?)
@ 2012-05-02  0:11         ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-05-02  0:11 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Rafael J. Wysocki, linux-kernel, linux-arm-kernel, linux-pm,
	Kevin Hilman, Len Brown, Trinabh Gupta, Arjan van de Ven,
	Deepthi Dharwar, Greg Kroah-Hartman, Kay Sievers,
	Santosh Shilimkar, Daniel Lezcano, Amit Kucheria, Arnd Bergmann,
	Russell King, Len Brown

On Tue, May 1, 2012 at 3:43 AM, Lorenzo Pieralisi
<lorenzo.pieralisi@arm.com> wrote:
> Hi Colin,
>
> On Mon, Apr 30, 2012 at 10:37:30PM +0100, Colin Cross wrote:
<snip>

>> On Tegra3, the deepest individual cpu state for cpus 1-3 is OFF, the
>> same state the cpu would go into as the first step of a transition to
>> a deeper power state (cpus 0-3 OFF).  It would be more optimal in that
>> case to bypass the SMP cross call, and leave the cpu in OFF, but that
>> would require some way of disabling all wakeups for the secondary cpus
>> and then verifying that they didn't start waking up just before the
>> wakeups were disabled.  I have just started considering this
>> optimization, but I don't see anything in the existing code that would
>> prevent adding it later.
>
> I agree it is certainly an optimization that can be added later if benchmarks
> show it is needed (but again it is heavily platform dependent, ie technology
> dependent).
> On a side note, disabling (or move to the primary) wake-ups for "secondaries"
> on platforms where every core is in a different power domain is still needed
> to avoid having a situation where a CPU can independently get out of idle, ie
> abort idle, after hitting the coupled barrier.
> Still do not know if for those platforms coupled C-states should be used, but
> it is much better to have a choice there IMHO.

Yes, that is the primary need for the coupled_cpuidle_parallel_barrier
function - secondary cpus need to disable their wakeup sources, then
check that a wakeup was not already pending and abort if necessary.
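
Roughly like this, as an untested sketch - the mach_* hooks are made-up
placeholders for whatever the platform provides, and the exact name and
signature of the barrier helper here may not match the final series:

#include <linux/cpuidle.h>
#include <linux/atomic.h>
#include <asm/proc-fns.h>       /* cpu_do_idle() */

static atomic_t abort_barrier;
static bool abort_flag;         /* reset by the driver between idle entries */

static int mach_coupled_enter(struct cpuidle_device *dev,
                              struct cpuidle_driver *drv, int index)
{
        if (dev->cpu != 0) {
                mach_disable_cpu_wakeups(dev->cpu);     /* placeholder */
                if (mach_wakeup_pending(dev->cpu))      /* placeholder */
                        abort_flag = true;
        }

        /* everyone flags an abort (or not) before anyone acts on it */
        coupled_cpuidle_parallel_barrier(dev, &abort_barrier);

        if (abort_flag)
                cpu_do_idle();                  /* stay in the safe WFI state */
        else
                mach_enter_coupled_off(dev);    /* placeholder */

        return index;
}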

> I have also started thinking about a cluster or multi-CPU "next-event" that
> could avoid triggering heavy operations like L2 cleaning (ie cluster shutdown)
> if a timer is about to expire on a given CPU (as you know CPUs get in and out
> of idle independently so the governor decision at the point the coupled state
> barrier is hit might be stale).

It would be possible to re-check the governor to decide the next state
(maybe only if the previous decision is out of date by more than the
target_residency?), but I left that as an additional optimization.
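
Something like the following, just as a sketch - decision_time would be
a new per-cpu timestamp recorded when the governor made its choice:

#include <linux/ktime.h>
#include <linux/cpuidle.h>

/* true if the governor's choice is older than the state's target residency */
static bool coupled_decision_stale(struct cpuidle_driver *drv, int index,
                                   ktime_t decision_time)
{
        s64 age_us = ktime_to_us(ktime_sub(ktime_get(), decision_time));

        return age_us > drv->states[index].target_residency;
}

If it returns true, the cpu could fall back to the safe state (or
re-run the governor) instead of committing to the coupled state.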

> I reckon the coupled C-state concept can prove to be an effective one for
> some platforms, currently benchmarking it.
>
>> A simple measurement using the tracing may show that it is
>> unnecessary.  If the wakeup time for CPU1 to go from OFF to active is
>> small there might be no need to optimize out the extra wakeup.
>
> Indeed, it is all about resetting the CPU and getting it started, with
> inclusive L2 the power cost of shutting down a CPU and resuming it should be
> low (and timing very fast) for most platforms.

The limiting factor may be the amount of time spent in ROM/Trustzone
code when bringing a cpu back online.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-05-02  0:11         ` Colin Cross
  (?)
@ 2012-05-02  7:22           ` Santosh Shilimkar
  -1 siblings, 0 replies; 78+ messages in thread
From: Santosh Shilimkar @ 2012-05-02  7:22 UTC (permalink / raw)
  To: Colin Cross
  Cc: Lorenzo Pieralisi, Rafael J. Wysocki, linux-kernel,
	linux-arm-kernel, linux-pm, Kevin Hilman, Len Brown,
	Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Daniel Lezcano, Amit Kucheria,
	Arnd Bergmann, Russell King, Len Brown

On Wednesday 02 May 2012 05:41 AM, Colin Cross wrote:
> On Tue, May 1, 2012 at 3:43 AM, Lorenzo Pieralisi
> <lorenzo.pieralisi@arm.com> wrote:
>> Hi Colin,
>>
>> On Mon, Apr 30, 2012 at 10:37:30PM +0100, Colin Cross wrote:
> <snip>
> 
>>> On Tegra3, the deepest individual cpu state for cpus 1-3 is OFF, the
>>> same state the cpu would go into as the first step of a transition to
>>> a deeper power state (cpus 0-3 OFF).  It would be more optimal in that
>>> case to bypass the SMP cross call, and leave the cpu in OFF, but that
>>> would require some way of disabling all wakeups for the secondary cpus
>>> and then verifying that they didn't start waking up just before the
>>> wakeups were disabled.  I have just started considering this
>>> optimization, but I don't see anything in the existing code that would
>>> prevent adding it later.
>>
I was also looking at how we can avoid the unnecessary wakeup on
secondary CPUs if the timer event is not for those CPUs. As you
rightly said, we can add all the optimisations once we have the
base patches merged.

>> I agree it is certainly an optimisation that can be added later if benchmarks
>> show it is needed (but again it is heavily platform dependent, ie technology
>> dependent).
>> On a side note, disabling (or move to the primary) wake-ups for "secondaries"
>> on platforms where every core is in a different power domain is still needed
>> to avoid having a situation where a CPU can independently get out of idle, ie
>> abort idle, after hitting the coupled barrier.
>> Still do not know if for those platforms coupled C-states should be used, but
>> it is much better to have a choice there IMHO.
> 
> Yes, that is the primary need for the coupled_cpuidle_parallel_barrier
> function - secondary cpus need to disable their wakeup sources, then
> check that a wakeup was not already pending and abort if necessary.
> 
>> I have also started thinking about a cluster or multi-CPU "next-event" that
>> could avoid triggering heavy operations like L2 cleaning (ie cluster shutdown)
>> if a timer is about to expire on a given CPU (as you know CPUs get in and out
>> of idle independently so the governor decision at the point the coupled state
>> barrier is hit might be stale).
> 
> It would be possible to re-check the governor to decide the next state
> (maybe only if the previous decision is out of date by more than the
> target_residency?), but I left that as an additional optimization.
>
Yep. If the remaining time for idle is not enough, we should abort
that C-state, since the CPU won't stay in it long enough to save
power.
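
i.e., something like this at the point the last CPU reaches the coupled
barrier (next_event_us() is a made-up helper for whatever per-CPU
next-event bookkeeping gets added):

#include <linux/cpumask.h>
#include <linux/cpuidle.h>

/* false if any coupled CPU wakes up before the state can pay for itself */
static bool coupled_state_worthwhile(struct cpuidle_driver *drv, int index,
                                     const struct cpumask *coupled_cpus)
{
        unsigned int cpu;

        for_each_cpu(cpu, coupled_cpus)
                if (next_event_us(cpu) < drv->states[index].target_residency)
                        return false;

        return true;
}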

>> I reckon the coupled C-state concept can prove to be an effective one for
>> some platforms, currently benchmarking it.
>>
>>> A simple measurement using the tracing may show that it is
>>> unnecessary.  If the wakeup time for CPU1 to go from OFF to active is
>>> small there might be no need to optimize out the extra wakeup.
>>
>> Indeed, it is all about resetting the CPU and getting it started, with
>> inclusive L2 the power cost of shutting down a CPU and resuming it should be
>> low (and timing very fast) for most platforms.
> 
> The limiting factor may be the amount of time spent in ROM/Trustzone
> code when bringing a cpu back online.

It is fast, but it is not a negligible amount of time and it will vary
with CPU speed too. As Colin mentioned, it all depends on the secure
code, the CPU restore code and the power domain transition time. Of
course the power domain transition time will differ between platforms.

Regards
santosh


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-04-30 22:01         ` Colin Cross
  (?)
@ 2012-05-03 20:00           ` Rafael J. Wysocki
  -1 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-05-03 20:00 UTC (permalink / raw)
  To: Colin Cross
  Cc: linux-kernel, linux-arm-kernel, linux-pm, Kevin Hilman,
	Len Brown, Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Len Brown

On Tuesday, May 01, 2012, Colin Cross wrote:
> On Mon, Apr 30, 2012 at 2:54 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> > On Monday, April 30, 2012, Colin Cross wrote:
> >> On Mon, Apr 30, 2012 at 2:25 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> >> > Hi,
> >> >
> >> > I have a comment, which isn't about the series itself, but something
> >> > that may be worth thinking about.
> >> >
> >> > On Monday, April 30, 2012, Colin Cross wrote:
> >> >> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> >> >> cpus cannot be independently powered down, either due to
> >> >> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> >> >> power down), or due to HW bugs (on OMAP4460, a cpu powering up
> >> >> will corrupt the gic state unless the other cpu runs a work
> >> >> around).  Each cpu has a power state that it can enter without
> >> >> coordinating with the other cpu (usually Wait For Interrupt, or
> >> >> WFI), and one or more "coupled" power states that affect blocks
> >> >> shared between the cpus (L2 cache, interrupt controller, and
> >> >> sometimes the whole SoC).  Entering a coupled power state must
> >> >> be tightly controlled on both cpus.
> >> >
> >> > That seems to be a special case of a more general situation where
> >> > a number of CPU cores belong into a single power domain, possibly along
> >> > some I/O devices.
> >> >
> >> > We'll need to handle the general case at one point anyway, so I wonder if
> >> > the approach shown here may get us in the way?
> >>
> >> I can't parse what you're saying here.
> >
> > The general case is a CPU core in one PM domain with a number of I/O
> > devices and a number of other CPU cores.  If we forget about the I/O
> > devices, we get a situation your patchset is addressing, so the
> > question is how difficult it is going to be to extend it to cover the
> > I/O devices as well.
> 
> The logic in this patch set is always going to be required to get
> multiple cpus to coordinate an idle transition, and it will need to
> stay fairly tightly coupled with cpuidle to correctly track the idle
> time statistics for the intermediate and final states.  I don't think
> there would be an issue if it ends up getting hoisted out into a
> future combined cpu/IO power domain, but it seems more likely that the
> coupled cpu idle states would call into the power domain to say they
> no longer need power.

There are two distinct cases to consider here, (1) when the last I/O
device in the domain becomes idle and the question is whether or not to
power off the entire domain and (2) when a CPU core in a power domain
becomes idle while all of the devices in the domain are idle already.

Case (2) is quite straightforward: the .enter() routine for the
"domain" C-state has to check whether the domain can be turned off
and, if so, turn it off.
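
Schematically (domain_can_power_off() and domain_power_off() are
placeholders for whatever the PM domain code would export, and the
usual cpuidle .enter() prototype is assumed; this is only to
illustrate the idea):

static int domain_cstate_enter(struct cpuidle_device *dev,
                               struct cpuidle_driver *drv, int index)
{
        if (domain_can_power_off(dev->cpu))     /* devices and other cores idle? */
                domain_power_off(dev->cpu);     /* turn the whole domain off */
        else
                cpu_do_idle();                  /* fall back to per-core idle */

        return index;
}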

Case (1) is more difficult and (assuming that all CPU cores in the domain
are already idle at this point) I see two possible ways to handle it:
(a) Wake up all of the (idle) CPU cores in the domain and let the
  "domain" C-state's .enter() do the job (ie. turn it into case (2)),
  similarly to your patchset.
(b) If cpuidle has prepared the cores for going into deeper idle,
  turn the domain off directly without waking up the cores.

> >> >> The easiest solution to implementing coupled cpu power states is
> >> >> to hotplug all but one cpu whenever possible, usually using a
> >> >> cpufreq governor that looks at cpu load to determine when to
> >> >> enable the secondary cpus.  This causes problems, as hotplug is an
> >> >> expensive operation, so the number of hotplug transitions must be
> >> >> minimized, leading to very slow response to loads, often on the
> >> >> order of seconds.
> >> >
> >> > This isn't a solution at all, rather a workaround and a poor one for that
> >> > matter.
> >>
> >> Yes, which is what started me on this series.
> >>
> >> >> This patch series implements an alternative solution, where each
> >> >> cpu will wait in the WFI state until all cpus are ready to enter
> >> >> a coupled state, at which point the coupled state function will
> >> >> be called on all cpus at approximately the same time.
> >> >>
> >> >> Once all cpus are ready to enter idle, they are woken by an smp
> >> >> cross call.
> >> >
> >> > Is it really necessary to wake up all of the CPUs in WFI before
> >> > going to deeper idle?  We should be able to figure out when they
> >> > are going to be needed next time without waking them up and we should
> >> > know the latency to wake up from the deeper multi-CPU "C-state",
> >> > so it should be possible to decide whether or not to go to deeper
> >> > idle without the SMP cross call.  Is there anything I'm missing here?
> >>
> >> The decision to go to the lower state has already been made when the
> >> cross call occurs.  On the platforms I have worked directly with so
> >> far (Tegra2 and OMAP4460), the secondary cpu needs to execute code
> >> before the primary cpu turns off the power.  For example, on OMAP4460,
> >> the secondary cpu needs to go from WFI (clock gated) to OFF (power
> >> gated), because OFF is not supported as an individual cpu state due to
> >> a ROM code bug.  To do that transition, it needs to come out of WFI,
> >> set up its power domain registers, save a bunch of state, and
> >> transition to OFF.
> >>
> >> On Tegra3, the deepest individual cpu state for cpus 1-3 is OFF, the
> >> same state the cpu would go into as the first step of a transition to
> >> a deeper power state (cpus 0-3 OFF).  It would be more optimal in that
> >> case to bypass the SMP cross call, and leave the cpu in OFF, but that
> >> would require some way of disabling all wakeups for the secondary cpus
> >> and then verifying that they didn't start waking up just before the
> >> wakeups were disabled.  I have just started considering this
> >> optimization, but I don't see anything in the existing code that would
> >> prevent adding it later.
> >
> > OK
> >
> >> A simple measurement using the tracing may show that it is
> >> unnecessary.  If the wakeup time for CPU1 to go from OFF to active is
> >> small there might be no need to optimize out the extra wakeup.
> >
> > I see.
> >
> > So, in the end, it may always be more straightforward to put individual
> > CPU cores into single-core idle states until the "we can all go to
> > deeper idle" condition is satisfied and then wake them all up and let
> > each of them do the transition individually, right?
> 
> Yes, the tradeoff will be the complexity of code to handle a generic
> way of holding another cpu in idle while this cpu does the transition
> vs. the time and power required to bring a cpu back online just to put
> it into a deeper state.  Right now, since all the users of this code
> are using WFI for their intermediate state, it takes microseconds to
> bring a cpu back up.  On Tegra3, the answer might be "sometimes" -
> only cpu0 can perform the final idle state transition, so if cpu1 is
> the last to go to idle, it will always have to SMP cross call to cpu0,
> but if cpu0 is the last to go idle it may be able to avoid waking up
> cpu1.

Having considered this for a while I think that it may be more straightforward
to avoid waking up the already idled cores.

For instance, say we have 4 CPU cores in a cluster (package) such that each
core has its own idle state (call it C1) and there is a multicore idle state
entered by turning off the entire cluster (call this state C-multi).  One of
the possible ways to handle this seems to be to use an identical table of
C-states for each core containing the C1 entry and a kind of fake entry called
(for example) C4 with the time characteristics of C-multi and a special
.enter() callback.  That callback will prepare the core it is called for to
enter C-multi, but instead of simply turning off the whole package it will
decrement a counter.  If the counter happens to be 0 at this point, the
package will be turned off.  Otherwise, the core will be put into the idle
state corresponding to C1, but it will be ready for entering C-multi at
any time. The counter will be incremented on exiting the C4 "state".
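
As a rough sketch of that "C4" .enter() callback (the mach_* calls are
placeholders, cores_left would be initialized to the number of online
cores in the cluster, and the ordering/races are glossed over):

#include <linux/cpuidle.h>
#include <linux/atomic.h>

static atomic_t cores_left;     /* set to num_online_cpus() at init time */

static int c4_enter(struct cpuidle_device *dev,
                    struct cpuidle_driver *drv, int index)
{
        mach_prepare_core_for_cluster_off(dev->cpu);    /* placeholder */

        if (atomic_dec_and_test(&cores_left))
                mach_cluster_off();     /* last core down: enter C-multi */
        else
                cpu_do_idle();          /* otherwise behave like C1 */

        atomic_inc(&cores_left);        /* back out on wakeup */
        return index;
}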

It looks like this should work without modifying the cpuidle core, but
the drawback here is that the cpuidle core doesn't know how much time
spent in C4 is really in C1 and how much of it is in C-multi, so the
statistics reported by it won't reflect the real energy usage.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-05-03 20:00           ` Rafael J. Wysocki
  (?)
@ 2012-05-03 20:18             ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-05-03 20:18 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: linux-kernel, linux-arm-kernel, linux-pm, Kevin Hilman,
	Len Brown, Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Len Brown

On Thu, May 3, 2012 at 1:00 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
<snip>
> There are two distinct cases to consider here, (1) when the last I/O
> device in the domain becomes idle and the question is whether or not to
> power off the entire domain and (2) when a CPU core in a power domain
> becomes idle while all of the devices in the domain are idle already.
>
> Case (2) is quite straightforward, the .enter() routine for the
> "domain" C-state has to check whether the domain can be turned off and
> do it eventually.
>
> Case (1) is more difficult and (assuming that all CPU cores in the domain
> > are already idle at this point) I see two possible ways to handle it:
> (a) Wake up all of the (idle) CPU cores in the domain and let the
>  "domain" C-state's .enter() do the job (ie. turn it into case (2)),
>  similarly to your patchset.
> (b) If cpuidle has prepared the cores for going into deeper idle,
>  turn the domain off directly without waking up the cores.

Multiple-cluster support is a design that has been considered in this
patchset (all the data structures are in the right place to support
it), and can be supported in the future, but does not exist in any
current systems that would be using this.  In all of today's SoCs,
there is a single cluster, so (1) can't happen - no code can be
executing while all cpus are idle.

(b) is an optimization that would not be possible on any future SoC
that is similar to the current SoCs, where "turn the domain off" is
very tightly integrated with TrustZone secure code running on the
primary cpu of the cluster.

<snip>

> Having considered this for a while I think that it may be more straightforward
> to avoid waking up the already idled cores.
>
> For instance, say we have 4 CPU cores in a cluster (package) such that each
> core has its own idle state (call it C1) and there is a multicore idle state
> entered by turning off the entire cluster (call this state C-multi).  One of
> the possible ways to handle this seems to be to use an identical table of
> C-states for each core containing the C1 entry and a kind of fake entry called
> (for example) C4 with the time characteristics of C-multi and a special
> .enter() callback.  That callback will prepare the core it is called for to
> enter C-multi, but instead of simply turning off the whole package it will
> decrement a counter.  If the counter happens to be 0 at this point, the
> package will be turned off.  Otherwise, the core will be put into the idle
> state corresponding to C1, but it will be ready for entering C-multi at
> any time. The counter will be incremented on exiting the C4 "state".

I implemented something very similar to this on Tegra2 (having each
cpu go to C1, but with enough state saved for C-multi), but it turns
out not to work in hardware.  On every existing ARM SMP system where I
have worked with cpuidle (Tegra2, OMAP4, Exynos5, and some Tegra3),
only cpu 0 can trigger the transition to C-multi.  The cause of this
restriction is different on every platform - sometimes it's by design,
sometimes it's a bug in the SoC ROM code, but the restriction exists.
The primary cpu of the cluster always needs to be awake.

In addition, it may not be possible to transition secondary cpus from
C1 to C-multi without waking them.  That would generally involve
cutting power to a CPU that is in clock gating, which is not a
supported power transition in any SoC that I have a datasheet for.  I
made it work for cpu1 on Tegra2, but I can't guarantee that there are
not unsolvable HW race conditions.

The only generic way to make this work is to wake up all cpus.  Waking
up a subset of cpus is certainly worth investigating as an
optimization, but it would not be used on Tegra2, OMAP4, or Exynos5.
Tegra3 may benefit from it.

> It looks like this should work without modifying the cpuidle core, but
> the drawback here is that the cpuidle core doesn't know how much time
> spent in C4 is really in C1 and how much of it is in C-multi, so the
> statistics reported by it won't reflect the real energy usage.

Idle statistics are extremely important when determining why a
particular use case is drawing too much power, and it is worth
modifying the cpuidle core if only to keep them accurate.  Especially
when justifying the move from the cpufreq hotplug governor based code
that every SoC vendor uses in their BSP to a proper multi-CPU cpuidle
implementation.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-05-03 20:18             ` Colin Cross
  (?)
@ 2012-05-03 20:43               ` Rafael J. Wysocki
  -1 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-05-03 20:43 UTC (permalink / raw)
  To: Colin Cross
  Cc: linux-kernel, linux-arm-kernel, linux-pm, Kevin Hilman,
	Len Brown, Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King, Len Brown

On Thursday, May 03, 2012, Colin Cross wrote:
> On Thu, May 3, 2012 at 1:00 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> <snip>
> > There are two distinct cases to consider here, (1) when the last I/O
> > device in the domain becomes idle and the question is whether or not to
> > power off the entire domain and (2) when a CPU core in a power domain
> > becomes idle while all of the devices in the domain are idle already.
> >
> > Case (2) is quite straightforward, the .enter() routine for the
> > "domain" C-state has to check whether the domain can be turned off and
> > do it eventually.
> >
> > Case (1) is more difficult and (assuming that all CPU cores in the domain
> > are already idle at this point) I see two possible ways to handle it:
> > (a) Wake up all of the (idle) CPU cores in the domain and let the
> >  "domain" C-state's .enter() do the job (ie. turn it into case (2)),
> >  similarly to your patchset.
> > (b) If cpuidle has prepared the cores for going into deeper idle,
> >  turn the domain off directly without waking up the cores.
> 
> Multiple-cluster support is a design that has been considered in this
> patchset (all the data structures are in the right place to support
> it), and can be supported in the future, but does not exist in any
> current systems that would be using this.  In all of today's SoCs,
> there is a single cluster, so (1) can't happen - no code can be
> executing while all cpus are idle.

OK, but I think it should be taken into consideration nonetheless.

> (b) is an optimization that would not be possible on any future SoC
> that is similar to the current SoCs, where "turn the domain off" is
> very tightly integrated with TrustZone secure code running on the
> primary cpu of the cluster.

I see.

> <snip>
> 
> > Having considered this for a while I think that it may be more straightforward
> > to avoid waking up the already idled cores.
> >
> > For instance, say we have 4 CPU cores in a cluster (package) such that each
> > core has its own idle state (call it C1) and there is a multicore idle state
> > entered by turning off the entire cluster (call this state C-multi).  One of
> > the possible ways to handle this seems to be to use an identical table of
> > C-states for each core containing the C1 entry and a kind of fake entry called
> > (for example) C4 with the time characteristics of C-multi and a special
> > .enter() callback.  That callback will prepare the core it is called for to
> > enter C-multi, but instead of simply turning off the whole package it will
> > decrement a counter.  If the counter happens to be 0 at this point, the
> > package will be turned off.  Otherwise, the core will be put into the idle
> > state corresponding to C1, but it will be ready for entering C-multi at
> > any time. The counter will be incremented on exiting the C4 "state".
> 
> I implemented something very similar to this on Tegra2 (having each
> cpu go to C1, but with enough state saved for C-multi), but it turns
> out not to work in hardware.  On every existing ARM SMP system where I
> have worked with cpuidle (Tegra2, OMAP4, Exynos5, and some Tegra3),
> only cpu 0 can trigger the transition to C-multi.  The cause of this
> restriction is different on every platform - sometimes it's by design,
> sometimes it's a bug in the SoC ROM code, but the restriction exists.
> The primary cpu of the cluster always needs to be awake.

OK, so that means we need to do the wakeup for technical reasons.

> In addition, it may not be possible to transition secondary cpus from
> C1 to C-multi without waking them.  That would generally involve
> cutting power to a CPU that is in clock gating, which is not a
> supported power transition in any SoC that I have a datasheet for.  I
> made it work for cpu1 on Tegra2, but I can't guarantee that there are
> not unsolvable HW race conditions.
> 
> The only generic way to make this work is to wake up all cpus.  Waking
> up a subset of cpus is certainly worth investigating as an
> optimization, but it would not be used on Tegra2, OMAP4, or Exynos5.
> Tegra3 may benefit from it.

OK

> > It looks like this should work without modifying the cpuidle core, but
> > the drawback here is that the cpuidle core doesn't know how much time
> > spent in C4 is really in C1 and how much of it is in C-multi, so the
> > statistics reported by it won't reflect the real energy usage.
> 
> Idle statistics are extremely important when determining why a
> particular use case is drawing too much power, and it is worth
> modifying the cpuidle core if only to keep them accurate.  Especially
> when justifying the move from the cpufreq hotplug governor based code
> that every SoC vendor uses in their BSP to a proper multi-CPU cpuidle
> implementation.

I see.

Thanks for the explanation,
Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 1/5] cpuidle: refactor out cpuidle_enter_state
  2012-04-30 20:09   ` Colin Cross
  (?)
@ 2012-05-03 20:50     ` Rafael J. Wysocki
  -1 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-05-03 20:50 UTC (permalink / raw)
  To: Colin Cross
  Cc: linux-kernel, linux-arm-kernel, linux-pm, Kevin Hilman,
	Len Brown, Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King

On Monday, April 30, 2012, Colin Cross wrote:
> Split the code to enter a state and update the stats into a helper
> function, cpuidle_enter_state, and export it.  This function will
> be called by the coupled state code to handle entering the safe
> state and the final coupled state.
> 
> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Reviewed-by: Kevin Hilman <khilman@ti.com>
> Tested-by: Kevin Hilman <khilman@ti.com>
> Signed-off-by: Colin Cross <ccross@android.com>

Reviewed-by: Rafael J. Wysocki <rjw@sisk.pl>

> ---
>  drivers/cpuidle/cpuidle.c |   42 +++++++++++++++++++++++++++++-------------
>  drivers/cpuidle/cpuidle.h |    2 ++
>  2 files changed, 31 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
> index 2f0083a..3e3e3e4 100644
> --- a/drivers/cpuidle/cpuidle.c
> +++ b/drivers/cpuidle/cpuidle.c
> @@ -103,6 +103,34 @@ int cpuidle_play_dead(void)
>  }
>  
>  /**
> + * cpuidle_enter_state - enter the state and update stats
> + * @dev: cpuidle device for this cpu
> + * @drv: cpuidle driver for this cpu
> + * @next_state: index into drv->states of the state to enter
> + */
> +int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
> +		int next_state)
> +{
> +	int entered_state;
> +
> +	entered_state = cpuidle_enter_ops(dev, drv, next_state);
> +
> +	if (entered_state >= 0) {
> +		/* Update cpuidle counters */
> +		/* This can be moved to within driver enter routine
> +		 * but that results in multiple copies of same code.
> +		 */
> +		dev->states_usage[entered_state].time +=
> +				(unsigned long long)dev->last_residency;
> +		dev->states_usage[entered_state].usage++;
> +	} else {
> +		dev->last_residency = 0;
> +	}
> +
> +	return entered_state;
> +}
> +
> +/**
>   * cpuidle_idle_call - the main idle loop
>   *
>   * NOTE: no locks or semaphores should be used here
> @@ -143,23 +171,11 @@ int cpuidle_idle_call(void)
>  	trace_power_start_rcuidle(POWER_CSTATE, next_state, dev->cpu);
>  	trace_cpu_idle_rcuidle(next_state, dev->cpu);
>  
> -	entered_state = cpuidle_enter_ops(dev, drv, next_state);
> +	entered_state = cpuidle_enter_state(dev, drv, next_state);
>  
>  	trace_power_end_rcuidle(dev->cpu);
>  	trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
>  
> -	if (entered_state >= 0) {
> -		/* Update cpuidle counters */
> -		/* This can be moved to within driver enter routine
> -		 * but that results in multiple copies of same code.
> -		 */
> -		dev->states_usage[entered_state].time +=
> -				(unsigned long long)dev->last_residency;
> -		dev->states_usage[entered_state].usage++;
> -	} else {
> -		dev->last_residency = 0;
> -	}
> -
>  	/* give the governor an opportunity to reflect on the outcome */
>  	if (cpuidle_curr_governor->reflect)
>  		cpuidle_curr_governor->reflect(dev, entered_state);
> diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
> index 7db1866..d8a3ccc 100644
> --- a/drivers/cpuidle/cpuidle.h
> +++ b/drivers/cpuidle/cpuidle.h
> @@ -14,6 +14,8 @@
>  extern struct mutex cpuidle_lock;
>  extern spinlock_t cpuidle_driver_lock;
>  extern int cpuidle_disabled(void);
> +extern int cpuidle_enter_state(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int next_state);
>  
>  /* idle loop */
>  extern void cpuidle_install_idle_handler(void);
> 
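
For reference, the intended use by the coupled code would be along these
lines (a sketch only -- the wrapper around cpuidle_enter_state and the
state-index parameters are invented here, not taken from the patch):

/* Sketch: park this cpu in the safe (non-coupled) state until every
 * coupled cpu is waiting, then enter the real coupled state.  Only
 * cpuidle_enter_state() comes from the patch; the rest is assumed.
 */
static int coupled_idle(struct cpuidle_device *dev,
			struct cpuidle_driver *drv,
			int safe_state, int coupled_state)
{
	while (!all_coupled_cpus_waiting(dev))		/* assumed helper */
		cpuidle_enter_state(dev, drv, safe_state);

	return cpuidle_enter_state(dev, drv, coupled_state);
}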


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [linux-pm] [PATCHv3 2/5] cpuidle: fix error handling in __cpuidle_register_device
  2012-04-30 20:09   ` Colin Cross
  (?)
@ 2012-05-03 20:50     ` Rafael J. Wysocki
  -1 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-05-03 20:50 UTC (permalink / raw)
  To: linux-pm
  Cc: Colin Cross, linux-kernel, Kevin Hilman, Len Brown, Russell King,
	Greg Kroah-Hartman, Kay Sievers, Amit Kucheria, Arjan van de Ven,
	Arnd Bergmann, linux-arm-kernel

On Monday, April 30, 2012, Colin Cross wrote:
> Fix the error handling in __cpuidle_register_device to include
> the missing list_del.  Move it to a label, which will simplify
> the error handling when coupled states are added.
> 
> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Reviewed-by: Kevin Hilman <khilman@ti.com>
> Tested-by: Kevin Hilman <khilman@ti.com>
> Signed-off-by: Colin Cross <ccross@android.com>

Reviewed-by: Rafael J. Wysocki <rjw@sisk.pl>

> ---
>  drivers/cpuidle/cpuidle.c |   13 +++++++++----
>  1 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
> index 3e3e3e4..4540672 100644
> --- a/drivers/cpuidle/cpuidle.c
> +++ b/drivers/cpuidle/cpuidle.c
> @@ -403,13 +403,18 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
>  
>  	per_cpu(cpuidle_devices, dev->cpu) = dev;
>  	list_add(&dev->device_list, &cpuidle_detected_devices);
> -	if ((ret = cpuidle_add_sysfs(cpu_dev))) {
> -		module_put(cpuidle_driver->owner);
> -		return ret;
> -	}
> +	ret = cpuidle_add_sysfs(cpu_dev);
> +	if (ret)
> +		goto err_sysfs;
>  
>  	dev->registered = 1;
>  	return 0;
> +
> +err_sysfs:
> +	list_del(&dev->device_list);
> +	per_cpu(cpuidle_devices, dev->cpu) = NULL;
> +	module_put(cpuidle_driver->owner);
> +	return ret;
>  }
>  
>  /**
> 


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 5/5] cpuidle: coupled: add trace events
  2012-04-30 20:09   ` Colin Cross
  (?)
@ 2012-05-03 21:00     ` Steven Rostedt
  -1 siblings, 0 replies; 78+ messages in thread
From: Steven Rostedt @ 2012-05-03 21:00 UTC (permalink / raw)
  To: Colin Cross
  Cc: linux-kernel, linux-arm-kernel, linux-pm, Kevin Hilman,
	Len Brown, Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King

On Mon, 2012-04-30 at 13:09 -0700, Colin Cross wrote:

> diff --git a/include/trace/events/cpuidle.h b/include/trace/events/cpuidle.h
> new file mode 100644
> index 0000000..9b2cbbb
> --- /dev/null
> +++ b/include/trace/events/cpuidle.h
> @@ -0,0 +1,243 @@
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM cpuidle
> +
> +#if !defined(_TRACE_CPUIDLE_H) || defined(TRACE_HEADER_MULTI_READ)
> +#define _TRACE_CPUIDLE_H
> +
> +#include <linux/atomic.h>
> +#include <linux/tracepoint.h>
> +
> +extern atomic_t cpuidle_trace_seq;
> +
> +TRACE_EVENT(coupled_enter,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_exit,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_spin,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_unspin,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_safe_enter,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_safe_exit,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_idle_enter,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_idle_exit,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_abort,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_detected_abort,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_poke,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_poked,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);

Egad! Please use DECLARE_EVENT_CLASS() and DEFINE_EVENT() when adding
events that are basically the same.  Every TRACE_EVENT() can bloat the
kernel by 5k, while using DEFINE_EVENT()s keeps each event around just a
few hundred bytes.

See include/trace/events/ext4.h for examples.
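
As a rough sketch of what that could look like here (the class name
coupled_template is just a placeholder, the fields are taken from the
patch above, and only the first two events are shown):

DECLARE_EVENT_CLASS(coupled_template,

	TP_PROTO(unsigned int cpu),

	TP_ARGS(cpu),

	TP_STRUCT__entry(
		__field(unsigned int, cpu)
		__field(unsigned int, seq)
	),

	TP_fast_assign(
		__entry->cpu = cpu;
		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
	),

	TP_printk("%u %u", __entry->seq, __entry->cpu)
);

/* One DEFINE_EVENT() per coupled_* event, all sharing the class above. */
DEFINE_EVENT(coupled_template, coupled_enter,
	TP_PROTO(unsigned int cpu),
	TP_ARGS(cpu));

DEFINE_EVENT(coupled_template, coupled_exit,
	TP_PROTO(unsigned int cpu),
	TP_ARGS(cpu));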

-- Steve


> +
> +#endif /* if !defined(_TRACE_CPUIDLE_H) || defined(TRACE_HEADER_MULTI_READ) */
> +
> +/* This part must be outside protection */
> +#include <trace/define_trace.h>



^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 5/5] cpuidle: coupled: add trace events
@ 2012-05-03 21:00     ` Steven Rostedt
  0 siblings, 0 replies; 78+ messages in thread
From: Steven Rostedt @ 2012-05-03 21:00 UTC (permalink / raw)
  To: Colin Cross
  Cc: Kevin Hilman, Len Brown, Russell King, Greg Kroah-Hartman,
	Kay Sievers, linux-kernel, Amit Kucheria, linux-pm,
	Arjan van de Ven, Arnd Bergmann, linux-arm-kernel

On Mon, 2012-04-30 at 13:09 -0700, Colin Cross wrote:

> diff --git a/include/trace/events/cpuidle.h b/include/trace/events/cpuidle.h
> new file mode 100644
> index 0000000..9b2cbbb
> --- /dev/null
> +++ b/include/trace/events/cpuidle.h
> @@ -0,0 +1,243 @@
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM cpuidle
> +
> +#if !defined(_TRACE_CPUIDLE_H) || defined(TRACE_HEADER_MULTI_READ)
> +#define _TRACE_CPUIDLE_H
> +
> +#include <linux/atomic.h>
> +#include <linux/tracepoint.h>
> +
> +extern atomic_t cpuidle_trace_seq;
> +
> +TRACE_EVENT(coupled_enter,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_exit,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_spin,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_unspin,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_safe_enter,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_safe_exit,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_idle_enter,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_idle_exit,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_abort,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_detected_abort,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_poke,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);
> +
> +TRACE_EVENT(coupled_poked,
> +
> +	TP_PROTO(unsigned int cpu),
> +
> +	TP_ARGS(cpu),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, cpu)
> +		__field(unsigned int, seq)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->cpu = cpu;
> +		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
> +	),
> +
> +	TP_printk("%u %u", __entry->seq, __entry->cpu)
> +);

Egad! Please use DECLARE_EVENT_CLASS() and DEFINE_EVENT() when adding
events that are basically the same. Every TRACE_EVENT() can bloat the
kernel by 5k; using DEFINE_EVENT()s keeps each event to just a few
hundred bytes.

See include/trace/events/ext4.h for examples.
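As a sketch, the dozen identical events above could collapse into a single
event class plus one short DEFINE_EVENT() per event, along these lines (the
class name "coupled_template" is only illustrative):

DECLARE_EVENT_CLASS(coupled_template,

	TP_PROTO(unsigned int cpu),

	TP_ARGS(cpu),

	TP_STRUCT__entry(
		__field(unsigned int, cpu)
		__field(unsigned int, seq)
	),

	TP_fast_assign(
		__entry->cpu = cpu;
		__entry->seq = atomic_inc_return(&cpuidle_trace_seq);
	),

	TP_printk("%u %u", __entry->seq, __entry->cpu)
);

DEFINE_EVENT(coupled_template, coupled_enter,
	TP_PROTO(unsigned int cpu),
	TP_ARGS(cpu));

DEFINE_EVENT(coupled_template, coupled_exit,
	TP_PROTO(unsigned int cpu),
	TP_ARGS(cpu));

/* ...and likewise one DEFINE_EVENT() for each of the remaining coupled_*
 * events (spin, unspin, safe_enter, safe_exit, idle_enter, idle_exit,
 * abort, detected_abort, poke, poked). */

The fields, assignment, and format string are defined once in the class; only
the event names vary.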

-- Steve


> +
> +#endif /* if !defined(_TRACE_CPUIDLE_H) || defined(TRACE_HEADER_MULTI_READ) */
> +
> +/* This part must be outside protection */
> +#include <trace/define_trace.h>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 5/5] cpuidle: coupled: add trace events
  2012-05-03 21:00     ` Steven Rostedt
@ 2012-05-03 21:13       ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-05-03 21:13 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-arm-kernel, linux-pm, Kevin Hilman,
	Len Brown, Trinabh Gupta, Arjan van de Ven, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, Santosh Shilimkar,
	Daniel Lezcano, Amit Kucheria, Lorenzo Pieralisi, Arnd Bergmann,
	Russell King

On Thu, May 3, 2012 at 2:00 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Mon, 2012-04-30 at 13:09 -0700, Colin Cross wrote:
>
>> diff --git a/include/trace/events/cpuidle.h b/include/trace/events/cpuidle.h
>> new file mode 100644
>> index 0000000..9b2cbbb
>> --- /dev/null
>> +++ b/include/trace/events/cpuidle.h
>> @@ -0,0 +1,243 @@
>> +#undef TRACE_SYSTEM
>> +#define TRACE_SYSTEM cpuidle
>> +
>> +#if !defined(_TRACE_CPUIDLE_H) || defined(TRACE_HEADER_MULTI_READ)
>> +#define _TRACE_CPUIDLE_H
>> +
>> +#include <linux/atomic.h>
>> +#include <linux/tracepoint.h>
>> +
>> +extern atomic_t cpuidle_trace_seq;
>> +
>> +TRACE_EVENT(coupled_enter,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_exit,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_spin,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_unspin,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_safe_enter,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_safe_exit,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_idle_enter,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_idle_exit,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_abort,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_detected_abort,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_poke,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>> +
>> +TRACE_EVENT(coupled_poked,
>> +
>> +     TP_PROTO(unsigned int cpu),
>> +
>> +     TP_ARGS(cpu),
>> +
>> +     TP_STRUCT__entry(
>> +             __field(unsigned int, cpu)
>> +             __field(unsigned int, seq)
>> +     ),
>> +
>> +     TP_fast_assign(
>> +             __entry->cpu = cpu;
>> +             __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
>> +     ),
>> +
>> +     TP_printk("%u %u", __entry->seq, __entry->cpu)
>> +);
>
> Egad! Please use DECLARE_EVENT_CLASS() and DEFINE_EVENT() when adding
> events that are basically the same. Every TRACE_EVENT() can bloat the
> kernel by 5k; using DEFINE_EVENT()s keeps each event to just a few
> hundred bytes.
>
> See include/trace/events/ext4.h for examples.

Thanks, I'll take a look.  There is no mention in Documentation/ or
samples/ of DECLARE_EVENT_CLASS() or DEFINE_EVENT(), nor any mention
of the cost of TRACE_EVENT().

Looking at the new power tracing code, I will also rework these events
to be more similar to the existing ones.

I suggest skipping this patch for 3.5, and I'll post an updated one for 3.6.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [linux-pm] [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus
  2012-04-30 20:09   ` Colin Cross
@ 2012-05-03 22:14     ` Rafael J. Wysocki
  -1 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-05-03 22:14 UTC (permalink / raw)
  To: linux-pm
  Cc: Colin Cross, linux-kernel, Kevin Hilman, Len Brown, Russell King,
	Greg Kroah-Hartman, Kay Sievers, Amit Kucheria, Arjan van de Ven,
	Arnd Bergmann, linux-arm-kernel

On Monday, April 30, 2012, Colin Cross wrote:
> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> cpus cannot be independently powered down, either due to
> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> power down), or due to HW bugs (on OMAP4460, a cpu powering up
> will corrupt the gic state unless the other cpu runs a work
> around).  Each cpu has a power state that it can enter without
> coordinating with the other cpu (usually Wait For Interrupt, or
> WFI), and one or more "coupled" power states that affect blocks
> shared between the cpus (L2 cache, interrupt controller, and
> sometimes the whole SoC).  Entering a coupled power state must
> be tightly controlled on both cpus.
> 
> The easiest solution to implementing coupled cpu power states is
> to hotplug all but one cpu whenever possible, usually using a
> cpufreq governor that looks at cpu load to determine when to
> enable the secondary cpus.  This causes problems, as hotplug is an
> expensive operation, so the number of hotplug transitions must be
> minimized, leading to very slow response to loads, often on the
> order of seconds.
> 
> This file implements an alternative solution, where each cpu will
> wait in the WFI state until all cpus are ready to enter a coupled
> state, at which point the coupled state function will be called
> on all cpus at approximately the same time.
> 
> Once all cpus are ready to enter idle, they are woken by an smp
> cross call.  At this point, there is a chance that one of the
> cpus will find work to do, and choose not to enter idle.  A
> final pass is needed to guarantee that all cpus will call the
> power state enter function at the same time.  During this pass,
> each cpu will increment the ready counter, and continue once the
> ready counter matches the number of online coupled cpus.  If any
> cpu exits idle, the other cpus will decrement their counter and
> retry.
> 
> To use coupled cpuidle states, a cpuidle driver must:
> 
>    Set struct cpuidle_device.coupled_cpus to the mask of all
>    coupled cpus, usually the same as cpu_possible_mask if all cpus
>    are part of the same cluster.  The coupled_cpus mask must be
>    set in the struct cpuidle_device for each cpu.
> 
>    Set struct cpuidle_device.safe_state to a state that is not a
>    coupled state.  This is usually WFI.
> 
>    Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
>    state that affects multiple cpus.
> 
>    Provide a struct cpuidle_state.enter function for each state
>    that affects multiple cpus.  This function is guaranteed to be
>    called on all cpus at approximately the same time.  The driver
>    should ensure that the cpus all abort together if any cpu tries
>    to abort once the function is called.
> 
> Cc: Len Brown <len.brown@intel.com>
> Cc: Amit Kucheria <amit.kucheria@linaro.org>
> Cc: Arjan van de Ven <arjan@linux.intel.com>
> Cc: Trinabh Gupta <g.trinabh@gmail.com>
> Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Reviewed-by: Kevin Hilman <khilman@ti.com>
> Tested-by: Kevin Hilman <khilman@ti.com>
> Signed-off-by: Colin Cross <ccross@android.com>
> ---
>  drivers/cpuidle/Kconfig   |    3 +
>  drivers/cpuidle/Makefile  |    1 +
>  drivers/cpuidle/coupled.c |  571 +++++++++++++++++++++++++++++++++++++++++++++
>  drivers/cpuidle/cpuidle.c |   15 ++-
>  drivers/cpuidle/cpuidle.h |   30 +++
>  include/linux/cpuidle.h   |    7 +
>  6 files changed, 626 insertions(+), 1 deletions(-)
>  create mode 100644 drivers/cpuidle/coupled.c
> 
> v2:
>    * removed the coupled lock, replacing it with atomic counters
>    * added a check for outstanding pokes before beginning the
>      final transition to avoid extra wakeups
>    * made the cpuidle_coupled struct completely private
>    * fixed kerneldoc comment formatting
> 
> v3:
>    * fixed decrement in cpuidle_coupled_cpu_set_alive
>    * added kerneldoc annotation to the description
> 
> diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
> index 78a666d..a76b689 100644
> --- a/drivers/cpuidle/Kconfig
> +++ b/drivers/cpuidle/Kconfig
> @@ -18,3 +18,6 @@ config CPU_IDLE_GOV_MENU
>  	bool
>  	depends on CPU_IDLE && NO_HZ
>  	default y
> +
> +config ARCH_NEEDS_CPU_IDLE_COUPLED
> +	def_bool n
> diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
> index 5634f88..38c8f69 100644
> --- a/drivers/cpuidle/Makefile
> +++ b/drivers/cpuidle/Makefile
> @@ -3,3 +3,4 @@
>  #
>  
>  obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
> +obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
> diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
> new file mode 100644
> index 0000000..d097826
> --- /dev/null
> +++ b/drivers/cpuidle/coupled.c
> @@ -0,0 +1,571 @@
> +/*
> + * coupled.c - helper functions to enter the same idle state on multiple cpus
> + *
> + * Copyright (c) 2011 Google, Inc.
> + *
> + * Author: Colin Cross <ccross@android.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/cpu.h>
> +#include <linux/cpuidle.h>
> +#include <linux/mutex.h>
> +#include <linux/sched.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +
> +#include "cpuidle.h"
> +
> +/**
> + * DOC: Coupled cpuidle states
> + *
> + * On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> + * cpus cannot be independently powered down, either due to
> + * sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> + * power down), or due to HW bugs (on OMAP4460, a cpu powering up
> + * will corrupt the gic state unless the other cpu runs a work
> + * around).  Each cpu has a power state that it can enter without
> + * coordinating with the other cpu (usually Wait For Interrupt, or
> + * WFI), and one or more "coupled" power states that affect blocks
> + * shared between the cpus (L2 cache, interrupt controller, and
> + * sometimes the whole SoC).  Entering a coupled power state must
> + * be tightly controlled on both cpus.
> + *
> + * The easiest solution to implementing coupled cpu power states is
> + * to hotplug all but one cpu whenever possible, usually using a
> + * cpufreq governor that looks at cpu load to determine when to
> + * enable the secondary cpus.  This causes problems, as hotplug is an
> + * expensive operation, so the number of hotplug transitions must be
> + * minimized, leading to very slow response to loads, often on the
> + * order of seconds.

I'd drop the above paragraph entirely.  It doesn't say much about what's in
the file and refers to an obviously suboptimal approach.

> + *
> + * This file implements an alternative solution, where each cpu will
> + * wait in the WFI state until all cpus are ready to enter a coupled
> + * state, at which point the coupled state function will be called
> + * on all cpus at approximately the same time.
> + *
> + * Once all cpus are ready to enter idle, they are woken by an smp
> + * cross call.  At this point, there is a chance that one of the
> + * cpus will find work to do, and choose not to enter idle.  A
> + * final pass is needed to guarantee that all cpus will call the
> + * power state enter function at the same time.  During this pass,
> + * each cpu will increment the ready counter, and continue once the
> + * ready counter matches the number of online coupled cpus.  If any
> + * cpu exits idle, the other cpus will decrement their counter and
> + * retry.
> + *
> + * requested_state stores the deepest coupled idle state each cpu
> + * is ready for.  It is assumed that the states are indexed from
> + * shallowest (highest power, lowest exit latency) to deepest
> + * (lowest power, highest exit latency).  The requested_state
> + * variable is not locked.  It is only written from the cpu that
> + * it stores (or by the on/offlining cpu if that cpu is offline),
> + * and only read after all the cpus are ready for the coupled idle
> + * state and are no longer updating it.
> + *
> + * Three atomic counters are used.  alive_count tracks the number
> + * of cpus in the coupled set that are currently or soon will be
> + * online.  waiting_count tracks the number of cpus that are in
> + * the waiting loop, in the ready loop, or in the coupled idle state.
> + * ready_count tracks the number of cpus that are in the ready loop
> + * or in the coupled idle state.
> + *
> + * To use coupled cpuidle states, a cpuidle driver must:
> + *
> + *    Set struct cpuidle_device.coupled_cpus to the mask of all
> + *    coupled cpus, usually the same as cpu_possible_mask if all cpus
> + *    are part of the same cluster.  The coupled_cpus mask must be
> + *    set in the struct cpuidle_device for each cpu.
> + *
> + *    Set struct cpuidle_device.safe_state to a state that is not a
> + *    coupled state.  This is usually WFI.
> + *
> + *    Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
> + *    state that affects multiple cpus.
> + *
> + *    Provide a struct cpuidle_state.enter function for each state
> + *    that affects multiple cpus.  This function is guaranteed to be
> + *    called on all cpus at approximately the same time.  The driver
> + *    should ensure that the cpus all abort together if any cpu tries
> + *    to abort once the function is called.  The function should return
> + *    with interrupts still disabled.
> + */
> +
> +/**
> + * struct cpuidle_coupled - data for set of cpus that share a coupled idle state
> + * @coupled_cpus: mask of cpus that are part of the coupled set
> + * @requested_state: array of requested states for cpus in the coupled set
> + * @ready_count: count of cpus that are ready for the final idle transition
> + * @waiting_count: count of cpus that are waiting for all other cpus to be idle
> + * @alive_count: count of cpus that are online or soon will be
> + * @refcnt: reference count of cpuidle devices that are using this struct
> + */
> +struct cpuidle_coupled {
> +	cpumask_t coupled_cpus;
> +	int requested_state[NR_CPUS];
> +	atomic_t ready_count;
> +	atomic_t waiting_count;
> +	atomic_t alive_count;
> +	int refcnt;
> +};
> +
> +#define CPUIDLE_COUPLED_NOT_IDLE	(-1)
> +#define CPUIDLE_COUPLED_DEAD		(-2)
> +
> +static DEFINE_MUTEX(cpuidle_coupled_lock);
> +static DEFINE_PER_CPU(struct call_single_data, cpuidle_coupled_poke_cb);
> +
> +/*
> + * The cpuidle_coupled_poked_mask masked is used to avoid calling

s/masked/mask/ perhaps?

> + * __smp_call_function_single with the per cpu call_single_data struct already
> + * in use.  This prevents a deadlock where two cpus are waiting for each other's
> + * call_single_data struct to be available.
> + */
> +static cpumask_t cpuidle_coupled_poked_mask;
> +
> +/**
> + * cpuidle_state_is_coupled - check if a state is part of a coupled set
> + * @dev: struct cpuidle_device for the current cpu
> + * @drv: struct cpuidle_driver for the platform
> + * @state: index of the target state in drv->states
> + *
> + * Returns true if the target state is coupled with cpus besides this one
> + */
> +bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
> +	struct cpuidle_driver *drv, int state)
> +{
> +	return drv->states[state].flags & CPUIDLE_FLAG_COUPLED;
> +}
> +
> +/**
> + * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
> + * @coupled: the struct coupled that contains the current cpu
> + *
> + * Returns true if all cpus coupled to this target state are in the wait loop
> + */
> +static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
> +{
> +	int alive;
> +	int waiting;
> +
> +	/*
> +	 * Read alive before reading waiting so a booting cpu is not treated as
> +	 * idle
> +	 */

Well, the comment doesn't really explain much.  In particular, why the boot CPU
could be treated as idle if the reads were in a different order.

> +	alive = atomic_read(&coupled->alive_count);
> +	smp_rmb();
> +	waiting = atomic_read(&coupled->waiting_count);

Have you considered using one atomic variable to accommodate both counters
such that the upper half contains one counter and the lower half contains
the other?
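As an illustration only (the field and helper names below are invented, not
part of the patch), packing both counts into a single atomic_t could look
like this:

#define WAITING_BITS	16
#define WAITING_MASK	((1 << WAITING_BITS) - 1)

/* hypothetical combined field replacing ready_count and waiting_count */
static inline int cpuidle_coupled_waiting(int counts)
{
	return counts & WAITING_MASK;		/* low half: waiting cpus */
}

static inline int cpuidle_coupled_ready(int counts)
{
	return counts >> WAITING_BITS;		/* high half: ready cpus */
}

A single atomic_read() then gives a consistent snapshot of both values, and
both halves can be updated together with one atomic_add_return().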

> +
> +	return (waiting == alive);
> +}
> +
> +/**
> + * cpuidle_coupled_get_state - determine the deepest idle state
> + * @dev: struct cpuidle_device for this cpu
> + * @coupled: the struct coupled that contains the current cpu
> + *
> + * Returns the deepest idle state that all coupled cpus can enter
> + */
> +static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
> +		struct cpuidle_coupled *coupled)
> +{
> +	int i;
> +	int state = INT_MAX;
> +
> +	for_each_cpu_mask(i, coupled->coupled_cpus)
> +		if (coupled->requested_state[i] != CPUIDLE_COUPLED_DEAD &&
> +		    coupled->requested_state[i] < state)
> +			state = coupled->requested_state[i];
> +
> +	BUG_ON(state >= dev->state_count || state < 0);

Do you have to crash the kernel here if the assertion doesn't hold?  Maybe
you could use WARN_ON() and return an error code?
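A sketch of that alternative (callers would then need to check for and handle
a negative return):

	if (WARN_ON(state >= dev->state_count || state < 0))
		return -EINVAL;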

> +
> +	return state;
> +}
> +
> +static void cpuidle_coupled_poked(void *info)
> +{
> +	int cpu = (unsigned long)info;
> +	cpumask_clear_cpu(cpu, &cpuidle_coupled_poked_mask);
> +}
> +
> +/**
> + * cpuidle_coupled_poke - wake up a cpu that may be waiting
> + * @cpu: target cpu
> + *
> + * Ensures that the target cpu exits its waiting idle state (if it is in it)
> + * and will see updates to waiting_count before it re-enters its waiting idle
> + * state.
> + *
> + * If cpuidle_coupled_poked_mask is already set for the target cpu, that cpu
> + * either has or will soon have a pending IPI that will wake it out of idle,
> + * or it is currently processing the IPI and is not in idle.
> + */
> +static void cpuidle_coupled_poke(int cpu)
> +{
> +	struct call_single_data *csd = &per_cpu(cpuidle_coupled_poke_cb, cpu);
> +
> +	if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask))
> +		__smp_call_function_single(cpu, csd, 0);
> +}
> +
> +/**
> + * cpuidle_coupled_poke_others - wake up all other cpus that may be waiting
> + * @dev: struct cpuidle_device for this cpu
> + * @coupled: the struct coupled that contains the current cpu
> + *
> + * Calls cpuidle_coupled_poke on all other online cpus.
> + */
> +static void cpuidle_coupled_poke_others(struct cpuidle_device *dev,
> +		struct cpuidle_coupled *coupled)

It looks like you could simply pass cpu (not dev) to this function.
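A sketch of that signature change, reusing the body quoted just below:

static void cpuidle_coupled_poke_others(int this_cpu,
		struct cpuidle_coupled *coupled)
{
	int cpu;

	for_each_cpu_mask(cpu, coupled->coupled_cpus)
		if (cpu != this_cpu && cpu_online(cpu))
			cpuidle_coupled_poke(cpu);
}

The caller would then pass dev->cpu instead of dev.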

> +{
> +	int cpu;
> +
> +	for_each_cpu_mask(cpu, coupled->coupled_cpus)
> +		if (cpu != dev->cpu && cpu_online(cpu))
> +			cpuidle_coupled_poke(cpu);
> +}
> +
> +/**
> + * cpuidle_coupled_set_waiting - mark this cpu as in the wait loop
> + * @dev: struct cpuidle_device for this cpu
> + * @coupled: the struct coupled that contains the current cpu
> + * @next_state: the index in drv->states of the requested state for this cpu
> + *
> + * Updates the requested idle state for the specified cpuidle device,
> + * poking all coupled cpus out of idle if necessary to let them see the new
> + * state.
> + *
> + * Provides memory ordering around waiting_count.
> + */
> +static void cpuidle_coupled_set_waiting(struct cpuidle_device *dev,
> +		struct cpuidle_coupled *coupled, int next_state)

If you passed cpu (instead of dev) to cpuidle_coupled_poke_others(),
then you could pass cpu (instead of dev) to this function too, it seems.

> +{
> +	int alive;
> +
> +	BUG_ON(coupled->requested_state[dev->cpu] >= 0);

Would WARN_ON() + doing nothing be too dangerous here?

> +
> +	coupled->requested_state[dev->cpu] = next_state;
> +
> +	/*
> +	 * If this is the last cpu to enter the waiting state, poke
> +	 * all the other cpus out of their waiting state so they can
> +	 * enter a deeper state.  This can race with one of the cpus
> +	 * exiting the waiting state due to an interrupt and
> +	 * decrementing waiting_count, see comment below.
> +	 */
> +	alive = atomic_read(&coupled->alive_count);
> +	if (atomic_inc_return(&coupled->waiting_count) == alive)
> +		cpuidle_coupled_poke_others(dev, coupled);
> +}
> +
> +/**
> + * cpuidle_coupled_set_not_waiting - mark this cpu as leaving the wait loop
> + * @dev: struct cpuidle_device for this cpu
> + * @coupled: the struct coupled that contains the current cpu
> + *
> + * Removes the requested idle state for the specified cpuidle device.
> + *
> + * Provides memory ordering around waiting_count.
> + */
> +static void cpuidle_coupled_set_not_waiting(struct cpuidle_device *dev,
> +		struct cpuidle_coupled *coupled)

It looks like dev doesn't have to be passed here, cpu would be enough.

> +{
> +	BUG_ON(coupled->requested_state[dev->cpu] < 0);

Well, like above?

> +
> +	/*
> +	 * Decrementing waiting_count can race with incrementing it in
> +	 * cpuidle_coupled_set_waiting, but that's OK.  Worst case, some
> +	 * cpus will increment ready_count and then spin until they
> +	 * notice that this cpu has cleared its requested_state.
> +	 */

So it looks like having ready_count and waiting_count in one atomic variable
can spare us this particular race condition.

> +
> +	smp_mb__before_atomic_dec();
> +	atomic_dec(&coupled->waiting_count);
> +	smp_mb__after_atomic_dec();

Do you really need both the before and after barriers here?  If so, then why?
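For reference, if full ordering on both sides really is required here, a
value-returning atomic op implies it by itself under the kernel's documented
atomic semantics, e.g.:

	/* orders like the smp_mb__before/after_atomic_dec() pair */
	(void)atomic_dec_return(&coupled->waiting_count);

Whether that ordering is needed at all is exactly the question above.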

> +
> +	coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
> +}
> +
> +/**
> + * cpuidle_enter_state_coupled - attempt to enter a state with coupled cpus
> + * @dev: struct cpuidle_device for the current cpu
> + * @drv: struct cpuidle_driver for the platform
> + * @next_state: index of the requested state in drv->states
> + *
> + * Coordinate with coupled cpus to enter the target state.  This is a two
> + * stage process.  In the first stage, the cpus are operating independently,
> + * and may call into cpuidle_enter_state_coupled at completely different times.
> + * To save as much power as possible, the first cpus to call this function will
> + * go to an intermediate state (the cpuidle_device's safe state), and wait for
> + * all the other cpus to call this function.  Once all coupled cpus are idle,
> + * the second stage will start.  Each coupled cpu will spin until all cpus have
> + * guaranteed that they will call the target_state.

It would be good to mention the conditions for calling this function (e.g.
interrupts disabled on the local CPU).
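For example, a line or two like the following could be added to the kerneldoc
above (the wording is only a suggestion):

 * Must be called with interrupts disabled.  It returns with interrupts
 * enabled, matching the local_irq_enable() on the exit path.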

> + */
> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int next_state)
> +{
> +	int entered_state = -1;
> +	struct cpuidle_coupled *coupled = dev->coupled;
> +	int alive;
> +
> +	if (!coupled)
> +		return -EINVAL;
> +
> +	BUG_ON(atomic_read(&coupled->ready_count));

Again, I'd do a WARN_ON() and return an error code from here (to avoid crashing
the kernel).

> +	cpuidle_coupled_set_waiting(dev, coupled, next_state);
> +
> +retry:
> +	/*
> +	 * Wait for all coupled cpus to be idle, using the deepest state
> +	 * allowed for a single cpu.
> +	 */
> +	while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
> +		entered_state = cpuidle_enter_state(dev, drv,
> +			dev->safe_state_index);
> +
> +		local_irq_enable();
> +		while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
> +			cpu_relax();

Hmm.  What exactly is this loop supposed to achieve?

> +		local_irq_disable();

Anyway, you seem to be calling it twice along with this enabling/disabling of
interrupts.  I'd put that into a separate function and explain its role in a
kerneldoc comment.
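Such a helper might look like this (the name is made up here; it simply
factors out the two identical poke-draining sequences):

static void cpuidle_coupled_clear_pokes(int cpu)
{
	local_irq_enable();
	while (cpumask_test_cpu(cpu, &cpuidle_coupled_poked_mask))
		cpu_relax();
	local_irq_disable();
}

Both call sites would then reduce to cpuidle_coupled_clear_pokes(dev->cpu).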

> +	}
> +
> +	/* give a chance to process any remaining pokes */
> +	local_irq_enable();
> +	while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
> +		cpu_relax();
> +	local_irq_disable();
> +
> +	if (need_resched()) {
> +		cpuidle_coupled_set_not_waiting(dev, coupled);
> +		goto out;
> +	}
> +
> +	/*
> +	 * All coupled cpus are probably idle.  There is a small chance that
> +	 * one of the other cpus just became active.  Increment a counter when
> +	 * ready, and spin until all coupled cpus have incremented the counter.
> +	 * Once a cpu has incremented the counter, it cannot abort idle and must
> +	 * spin until either the count has hit alive_count, or another cpu
> +	 * leaves idle.
> +	 */
> +
> +	smp_mb__before_atomic_inc();
> +	atomic_inc(&coupled->ready_count);
> +	smp_mb__after_atomic_inc();

It seems that at least one of these barriers is unnecessary ...

> +	/* alive_count can't change while ready_count > 0 */
> +	alive = atomic_read(&coupled->alive_count);
> +	while (atomic_read(&coupled->ready_count) != alive) {
> +		/* Check if any other cpus bailed out of idle. */
> +		if (!cpuidle_coupled_cpus_waiting(coupled)) {
> +			atomic_dec(&coupled->ready_count);
> +			smp_mb__after_atomic_dec();
> +			goto retry;
> +		}
> +
> +		cpu_relax();
> +	}
> +
> +	/* all cpus have acked the coupled state */
> +	smp_rmb();

What is the barrier here for?

> +
> +	next_state = cpuidle_coupled_get_state(dev, coupled);
> +
> +	entered_state = cpuidle_enter_state(dev, drv, next_state);
> +
> +	cpuidle_coupled_set_not_waiting(dev, coupled);
> +	atomic_dec(&coupled->ready_count);
> +	smp_mb__after_atomic_dec();
> +
> +out:
> +	/*
> +	 * Normal cpuidle states are expected to return with irqs enabled.
> +	 * That leads to an inefficiency where a cpu receiving an interrupt
> +	 * that brings it out of idle will process that interrupt before
> +	 * exiting the idle enter function and decrementing ready_count.  All
> +	 * other cpus will need to spin waiting for the cpu that is processing
> +	 * the interrupt.  If the driver returns with interrupts disabled,
> +	 * all other cpus will loop back into the safe idle state instead of
> +	 * spinning, saving power.
> +	 *
> +	 * Calling local_irq_enable here allows coupled states to return with
> +	 * interrupts disabled, but won't cause problems for drivers that
> +	 * exit with interrupts enabled.
> +	 */
> +	local_irq_enable();
> +
> +	/*
> +	 * Wait until all coupled cpus have exited idle.  There is no risk that
> +	 * a cpu exits and re-enters the ready state because this cpu has
> +	 * already decremented its waiting_count.
> +	 */
> +	while (atomic_read(&coupled->ready_count) != 0)
> +		cpu_relax();
> +
> +	smp_rmb();

And here?

> +
> +	return entered_state;
> +}
> +
> +/**
> + * cpuidle_coupled_register_device - register a coupled cpuidle device
> + * @dev: struct cpuidle_device for the current cpu
> + *
> + * Called from cpuidle_register_device to handle coupled idle init.  Finds the
> + * cpuidle_coupled struct for this set of coupled cpus, or creates one if none
> + * exists yet.
> + */
> +int cpuidle_coupled_register_device(struct cpuidle_device *dev)
> +{
> +	int cpu;
> +	struct cpuidle_device *other_dev;
> +	struct call_single_data *csd;
> +	struct cpuidle_coupled *coupled;
> +
> +	if (cpumask_empty(&dev->coupled_cpus))
> +		return 0;
> +
> +	for_each_cpu_mask(cpu, dev->coupled_cpus) {
> +		other_dev = per_cpu(cpuidle_devices, cpu);
> +		if (other_dev && other_dev->coupled) {
> +			coupled = other_dev->coupled;
> +			goto have_coupled;
> +		}
> +	}
> +
> +	/* No existing coupled info found, create a new one */
> +	coupled = kzalloc(sizeof(struct cpuidle_coupled), GFP_KERNEL);
> +	if (!coupled)
> +		return -ENOMEM;
> +
> +	coupled->coupled_cpus = dev->coupled_cpus;
> +	for_each_cpu_mask(cpu, coupled->coupled_cpus)
> +		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
> +
> +have_coupled:
> +	dev->coupled = coupled;
> +	BUG_ON(!cpumask_equal(&dev->coupled_cpus, &coupled->coupled_cpus));
> +
> +	if (cpu_online(dev->cpu)) {
> +		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
> +		atomic_inc(&coupled->alive_count);
> +	}
> +
> +	coupled->refcnt++;
> +
> +	csd = &per_cpu(cpuidle_coupled_poke_cb, dev->cpu);
> +	csd->func = cpuidle_coupled_poked;
> +	csd->info = (void *)(unsigned long)dev->cpu;
> +
> +	return 0;
> +}
> +
> +/**
> + * cpuidle_coupled_unregister_device - unregister a coupled cpuidle device
> + * @dev: struct cpuidle_device for the current cpu
> + *
> + * Called from cpuidle_unregister_device to tear down coupled idle.  Removes the
> + * cpu from the coupled idle set, and frees the cpuidle_coupled_info struct if
> + * this was the last cpu in the set.
> + */
> +void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
> +{
> +	struct cpuidle_coupled *coupled = dev->coupled;
> +
> +	if (cpumask_empty(&dev->coupled_cpus))
> +		return;
> +
> +	if (--coupled->refcnt)
> +		kfree(coupled);
> +	dev->coupled = NULL;
> +}
> +
> +/**
> + * cpuidle_coupled_cpu_set_alive - adjust alive_count during hotplug transitions
> + * @cpu: target cpu number
> + * @alive: whether the target cpu is going up or down
> + *
> + * Run on the cpu that is bringing up the target cpu, before the target cpu
> + * has been booted, or after the target cpu is completely dead.
> + */
> +static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
> +{
> +	struct cpuidle_device *dev;
> +	struct cpuidle_coupled *coupled;
> +
> +	mutex_lock(&cpuidle_lock);
> +
> +	dev = per_cpu(cpuidle_devices, cpu);
> +	if (!dev->coupled)
> +		goto out;
> +
> +	coupled = dev->coupled;
> +
> +	/*
> +	 * waiting_count must be at least 1 less than alive_count, because
> +	 * this cpu is not waiting.  Spin until all cpus have noticed this cpu
> +	 * is not idle and exited the ready loop before changing alive_count.
> +	 */
> +	while (atomic_read(&coupled->ready_count))
> +		cpu_relax();
> +
> +	if (alive) {
> +		smp_mb__before_atomic_inc();
> +		atomic_inc(&coupled->alive_count);
> +		smp_mb__after_atomic_inc();
> +		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
> +	} else {
> +		smp_mb__before_atomic_dec();
> +		atomic_dec(&coupled->alive_count);
> +		smp_mb__after_atomic_dec();
> +		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;

There are too many SMP barriers above, but I'm not quite sure which of them (if
any) are really necessary.

> +	}
> +
> +out:
> +	mutex_unlock(&cpuidle_lock);
> +}
> +
> +/**
> + * cpuidle_coupled_cpu_notify - notifier called during hotplug transitions
> + * @nb: notifier block
> + * @action: hotplug transition
> + * @hcpu: target cpu number
> + *
> + * Called when a cpu is brought online or offline using hotplug.  Updates the
> + * coupled cpu set appropriately.
> + */
> +static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
> +		unsigned long action, void *hcpu)
> +{
> +	int cpu = (unsigned long)hcpu;
> +
> +	switch (action & ~CPU_TASKS_FROZEN) {
> +	case CPU_DEAD:
> +	case CPU_UP_CANCELED:
> +		cpuidle_coupled_cpu_set_alive(cpu, false);
> +		break;
> +	case CPU_UP_PREPARE:
> +		cpuidle_coupled_cpu_set_alive(cpu, true);
> +		break;
> +	}
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block cpuidle_coupled_cpu_notifier = {
> +	.notifier_call = cpuidle_coupled_cpu_notify,
> +};
> +
> +static int __init cpuidle_coupled_init(void)
> +{
> +	return register_cpu_notifier(&cpuidle_coupled_cpu_notifier);
> +}
> +core_initcall(cpuidle_coupled_init);
> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
> index 4540672..e81cfda 100644
> --- a/drivers/cpuidle/cpuidle.c
> +++ b/drivers/cpuidle/cpuidle.c
> @@ -171,7 +171,11 @@ int cpuidle_idle_call(void)
>  	trace_power_start_rcuidle(POWER_CSTATE, next_state, dev->cpu);
>  	trace_cpu_idle_rcuidle(next_state, dev->cpu);
>  
> -	entered_state = cpuidle_enter_state(dev, drv, next_state);
> +	if (cpuidle_state_is_coupled(dev, drv, next_state))
> +		entered_state = cpuidle_enter_state_coupled(dev, drv,
> +							    next_state);
> +	else
> +		entered_state = cpuidle_enter_state(dev, drv, next_state);
>  
>  	trace_power_end_rcuidle(dev->cpu);
>  	trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
> @@ -407,9 +411,16 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
>  	if (ret)
>  		goto err_sysfs;
>  
> +	ret = cpuidle_coupled_register_device(dev);
> +	if (ret)
> +		goto err_coupled;
> +
>  	dev->registered = 1;
>  	return 0;
>  
> +err_coupled:
> +	cpuidle_remove_sysfs(cpu_dev);
> +	wait_for_completion(&dev->kobj_unregister);
>  err_sysfs:
>  	list_del(&dev->device_list);
>  	per_cpu(cpuidle_devices, dev->cpu) = NULL;
> @@ -464,6 +475,8 @@ void cpuidle_unregister_device(struct cpuidle_device *dev)
>  	wait_for_completion(&dev->kobj_unregister);
>  	per_cpu(cpuidle_devices, dev->cpu) = NULL;
>  
> +	cpuidle_coupled_unregister_device(dev);
> +
>  	cpuidle_resume_and_unlock();
>  
>  	module_put(cpuidle_driver->owner);
> diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
> index d8a3ccc..76e7f69 100644
> --- a/drivers/cpuidle/cpuidle.h
> +++ b/drivers/cpuidle/cpuidle.h
> @@ -32,4 +32,34 @@ extern int cpuidle_enter_state(struct cpuidle_device *dev,
>  extern int cpuidle_add_sysfs(struct device *dev);
>  extern void cpuidle_remove_sysfs(struct device *dev);
>  
> +#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
> +bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int state);
> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int next_state);
> +int cpuidle_coupled_register_device(struct cpuidle_device *dev);
> +void cpuidle_coupled_unregister_device(struct cpuidle_device *dev);
> +#else
> +static inline bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int state)
> +{
> +	return false;
> +}
> +
> +static inline int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int next_state)
> +{
> +	return -1;
> +}
> +
> +static inline int cpuidle_coupled_register_device(struct cpuidle_device *dev)
> +{
> +	return 0;
> +}
> +
> +static inline void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
> +{
> +}
> +#endif
> +
>  #endif /* __DRIVER_CPUIDLE_H */
> diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
> index 6c26a3d..6038448 100644
> --- a/include/linux/cpuidle.h
> +++ b/include/linux/cpuidle.h
> @@ -57,6 +57,7 @@ struct cpuidle_state {
>  
>  /* Idle State Flags */
>  #define CPUIDLE_FLAG_TIME_VALID	(0x01) /* is residency time measurable? */
> +#define CPUIDLE_FLAG_COUPLED	(0x02) /* state applies to multiple cpus */
>  
>  #define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000)
>  
> @@ -100,6 +101,12 @@ struct cpuidle_device {
>  	struct list_head 	device_list;
>  	struct kobject		kobj;
>  	struct completion	kobj_unregister;
> +
> +#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
> +	int			safe_state_index;
> +	cpumask_t		coupled_cpus;
> +	struct cpuidle_coupled	*coupled;
> +#endif
>  };
>  
>  DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices);

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus
@ 2012-05-03 22:14     ` Rafael J. Wysocki
  0 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-05-03 22:14 UTC (permalink / raw)
  To: linux-pm
  Cc: Kevin Hilman, Len Brown, Russell King, Greg Kroah-Hartman,
	Kay Sievers, linux-kernel, Amit Kucheria, Colin Cross,
	Arnd Bergmann, Arjan van de Ven, linux-arm-kernel

On Monday, April 30, 2012, Colin Cross wrote:
> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> cpus cannot be independently powered down, either due to
> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> power down), or due to HW bugs (on OMAP4460, a cpu powering up
> will corrupt the gic state unless the other cpu runs a work
> around).  Each cpu has a power state that it can enter without
> coordinating with the other cpu (usually Wait For Interrupt, or
> WFI), and one or more "coupled" power states that affect blocks
> shared between the cpus (L2 cache, interrupt controller, and
> sometimes the whole SoC).  Entering a coupled power state must
> be tightly controlled on both cpus.
> 
> The easiest solution to implementing coupled cpu power states is
> to hotplug all but one cpu whenever possible, usually using a
> cpufreq governor that looks at cpu load to determine when to
> enable the secondary cpus.  This causes problems, as hotplug is an
> expensive operation, so the number of hotplug transitions must be
> minimized, leading to very slow response to loads, often on the
> order of seconds.
> 
> This file implements an alternative solution, where each cpu will
> wait in the WFI state until all cpus are ready to enter a coupled
> state, at which point the coupled state function will be called
> on all cpus at approximately the same time.
> 
> Once all cpus are ready to enter idle, they are woken by an smp
> cross call.  At this point, there is a chance that one of the
> cpus will find work to do, and choose not to enter idle.  A
> final pass is needed to guarantee that all cpus will call the
> power state enter function at the same time.  During this pass,
> each cpu will increment the ready counter, and continue once the
> ready counter matches the number of online coupled cpus.  If any
> cpu exits idle, the other cpus will decrement their counter and
> retry.
> 
> To use coupled cpuidle states, a cpuidle driver must:
> 
>    Set struct cpuidle_device.coupled_cpus to the mask of all
>    coupled cpus, usually the same as cpu_possible_mask if all cpus
>    are part of the same cluster.  The coupled_cpus mask must be
>    set in the struct cpuidle_device for each cpu.
> 
>    Set struct cpuidle_device.safe_state to a state that is not a
>    coupled state.  This is usually WFI.
> 
>    Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
>    state that affects multiple cpus.
> 
>    Provide a struct cpuidle_state.enter function for each state
>    that affects multiple cpus.  This function is guaranteed to be
>    called on all cpus at approximately the same time.  The driver
>    should ensure that the cpus all abort together if any cpu tries
>    to abort once the function is called.
> 
> Cc: Len Brown <len.brown@intel.com>
> Cc: Amit Kucheria <amit.kucheria@linaro.org>
> Cc: Arjan van de Ven <arjan@linux.intel.com>
> Cc: Trinabh Gupta <g.trinabh@gmail.com>
> Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Reviewed-by: Kevin Hilman <khilman@ti.com>
> Tested-by: Kevin Hilman <khilman@ti.com>
> Signed-off-by: Colin Cross <ccross@android.com>
> ---
>  drivers/cpuidle/Kconfig   |    3 +
>  drivers/cpuidle/Makefile  |    1 +
>  drivers/cpuidle/coupled.c |  571 +++++++++++++++++++++++++++++++++++++++++++++
>  drivers/cpuidle/cpuidle.c |   15 ++-
>  drivers/cpuidle/cpuidle.h |   30 +++
>  include/linux/cpuidle.h   |    7 +
>  6 files changed, 626 insertions(+), 1 deletions(-)
>  create mode 100644 drivers/cpuidle/coupled.c
> 
> v2:
>    * removed the coupled lock, replacing it with atomic counters
>    * added a check for outstanding pokes before beginning the
>      final transition to avoid extra wakeups
>    * made the cpuidle_coupled struct completely private
>    * fixed kerneldoc comment formatting
> 
> v3:
>    * fixed decrement in cpuidle_coupled_cpu_set_alive
>    * added kerneldoc annotation to the description
> 
> diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
> index 78a666d..a76b689 100644
> --- a/drivers/cpuidle/Kconfig
> +++ b/drivers/cpuidle/Kconfig
> @@ -18,3 +18,6 @@ config CPU_IDLE_GOV_MENU
>  	bool
>  	depends on CPU_IDLE && NO_HZ
>  	default y
> +
> +config ARCH_NEEDS_CPU_IDLE_COUPLED
> +	def_bool n
> diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
> index 5634f88..38c8f69 100644
> --- a/drivers/cpuidle/Makefile
> +++ b/drivers/cpuidle/Makefile
> @@ -3,3 +3,4 @@
>  #
>  
>  obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
> +obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
> diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
> new file mode 100644
> index 0000000..d097826
> --- /dev/null
> +++ b/drivers/cpuidle/coupled.c
> @@ -0,0 +1,571 @@
> +/*
> + * coupled.c - helper functions to enter the same idle state on multiple cpus
> + *
> + * Copyright (c) 2011 Google, Inc.
> + *
> + * Author: Colin Cross <ccross@android.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/cpu.h>
> +#include <linux/cpuidle.h>
> +#include <linux/mutex.h>
> +#include <linux/sched.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +
> +#include "cpuidle.h"
> +
> +/**
> + * DOC: Coupled cpuidle states
> + *
> + * On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> + * cpus cannot be independently powered down, either due to
> + * sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> + * power down), or due to HW bugs (on OMAP4460, a cpu powering up
> + * will corrupt the gic state unless the other cpu runs a work
> + * around).  Each cpu has a power state that it can enter without
> + * coordinating with the other cpu (usually Wait For Interrupt, or
> + * WFI), and one or more "coupled" power states that affect blocks
> + * shared between the cpus (L2 cache, interrupt controller, and
> + * sometimes the whole SoC).  Entering a coupled power state must
> + * be tightly controlled on both cpus.
> + *
> + * The easiest solution to implementing coupled cpu power states is
> + * to hotplug all but one cpu whenever possible, usually using a
> + * cpufreq governor that looks at cpu load to determine when to
> + * enable the secondary cpus.  This causes problems, as hotplug is an
> + * expensive operation, so the number of hotplug transitions must be
> + * minimized, leading to very slow response to loads, often on the
> + * order of seconds.

I'd drop the above paragraph entirely.  It doesn't say much about what's in
the file and refers to an obviously suboptimal approach.

> + *
> + * This file implements an alternative solution, where each cpu will
> + * wait in the WFI state until all cpus are ready to enter a coupled
> + * state, at which point the coupled state function will be called
> + * on all cpus at approximately the same time.
> + *
> + * Once all cpus are ready to enter idle, they are woken by an smp
> + * cross call.  At this point, there is a chance that one of the
> + * cpus will find work to do, and choose not to enter idle.  A
> + * final pass is needed to guarantee that all cpus will call the
> + * power state enter function at the same time.  During this pass,
> + * each cpu will increment the ready counter, and continue once the
> + * ready counter matches the number of online coupled cpus.  If any
> + * cpu exits idle, the other cpus will decrement their counter and
> + * retry.
> + *
> + * requested_state stores the deepest coupled idle state each cpu
> + * is ready for.  It is assumed that the states are indexed from
> + * shallowest (highest power, lowest exit latency) to deepest
> + * (lowest power, highest exit latency).  The requested_state
> + * variable is not locked.  It is only written from the cpu that
> + * it stores (or by the on/offlining cpu if that cpu is offline),
> + * and only read after all the cpus are ready for the coupled idle
> + * state and are no longer updating it.
> + *
> + * Three atomic counters are used.  alive_count tracks the number
> + * of cpus in the coupled set that are currently or soon will be
> + * online.  waiting_count tracks the number of cpus that are in
> + * the waiting loop, in the ready loop, or in the coupled idle state.
> + * ready_count tracks the number of cpus that are in the ready loop
> + * or in the coupled idle state.
> + *
> + * To use coupled cpuidle states, a cpuidle driver must:
> + *
> + *    Set struct cpuidle_device.coupled_cpus to the mask of all
> + *    coupled cpus, usually the same as cpu_possible_mask if all cpus
> + *    are part of the same cluster.  The coupled_cpus mask must be
> + *    set in the struct cpuidle_device for each cpu.
> + *
> + *    Set struct cpuidle_device.safe_state to a state that is not a
> + *    coupled state.  This is usually WFI.
> + *
> + *    Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
> + *    state that affects multiple cpus.
> + *
> + *    Provide a struct cpuidle_state.enter function for each state
> + *    that affects multiple cpus.  This function is guaranteed to be
> + *    called on all cpus at approximately the same time.  The driver
> + *    should ensure that the cpus all abort together if any cpu tries
> + *    to abort once the function is called.  The function should return
> + *    with interrupts still disabled.
> + */
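[Editorial aside: purely as an illustration of the registration steps listed in the DOC block above, a platform driver's setup could look roughly like the sketch below.  This is not code from this series; the driver name, state layout, latencies and the my_enter_wfi()/my_enter_coupled() callbacks are placeholders.]

#include <linux/cpu.h>
#include <linux/cpuidle.h>
#include <linux/module.h>
#include <linux/percpu.h>

/* placeholder .enter callbacks, definitions not shown */
static int my_enter_wfi(struct cpuidle_device *dev,
		struct cpuidle_driver *drv, int index);
static int my_enter_coupled(struct cpuidle_device *dev,
		struct cpuidle_driver *drv, int index);

static struct cpuidle_driver my_idle_driver = {
	.name = "my_coupled_idle",
	.owner = THIS_MODULE,
	.states = {
		[0] = {
			.enter = my_enter_wfi,		/* per-cpu safe state */
			.exit_latency = 1,
			.target_residency = 1,
			.flags = CPUIDLE_FLAG_TIME_VALID,
			.name = "WFI",
		},
		[1] = {
			.enter = my_enter_coupled,	/* affects both cpus */
			.exit_latency = 5000,
			.target_residency = 10000,
			.flags = CPUIDLE_FLAG_TIME_VALID |
				 CPUIDLE_FLAG_COUPLED,
			.name = "C2",
		},
	},
	.state_count = 2,
};

static DEFINE_PER_CPU(struct cpuidle_device, my_idle_dev);

static int __init my_idle_init(void)
{
	int cpu, ret;

	ret = cpuidle_register_driver(&my_idle_driver);
	if (ret)
		return ret;

	for_each_possible_cpu(cpu) {
		struct cpuidle_device *dev = &per_cpu(my_idle_dev, cpu);

		dev->cpu = cpu;
		dev->state_count = my_idle_driver.state_count;
		/* every cpu in the cluster coordinates on the coupled state */
		cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
		/* index of the WFI state above, safe to enter alone */
		dev->safe_state_index = 0;
		ret = cpuidle_register_device(dev);
		if (ret)
			return ret;
	}
	return 0;
}
device_initcall(my_idle_init);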
> +
> +/**
> + * struct cpuidle_coupled - data for set of cpus that share a coupled idle state
> + * @coupled_cpus: mask of cpus that are part of the coupled set
> + * @requested_state: array of requested states for cpus in the coupled set
> + * @ready_count: count of cpus that are ready for the final idle transition
> + * @waiting_count: count of cpus that are waiting for all other cpus to be idle
> + * @alive_count: count of cpus that are online or soon will be
> + * @refcnt: reference count of cpuidle devices that are using this struct
> + */
> +struct cpuidle_coupled {
> +	cpumask_t coupled_cpus;
> +	int requested_state[NR_CPUS];
> +	atomic_t ready_count;
> +	atomic_t waiting_count;
> +	atomic_t alive_count;
> +	int refcnt;
> +};
> +
> +#define CPUIDLE_COUPLED_NOT_IDLE	(-1)
> +#define CPUIDLE_COUPLED_DEAD		(-2)
> +
> +static DEFINE_MUTEX(cpuidle_coupled_lock);
> +static DEFINE_PER_CPU(struct call_single_data, cpuidle_coupled_poke_cb);
> +
> +/*
> + * The cpuidle_coupled_poked_mask masked is used to avoid calling

s/masked/mask/ perhaps?

> + * __smp_call_function_single with the per cpu call_single_data struct already
> + * in use.  This prevents a deadlock where two cpus are waiting for each other's
> + * call_single_data struct to be available.
> + */
> +static cpumask_t cpuidle_coupled_poked_mask;
> +
> +/**
> + * cpuidle_state_is_coupled - check if a state is part of a coupled set
> + * @dev: struct cpuidle_device for the current cpu
> + * @drv: struct cpuidle_driver for the platform
> + * @state: index of the target state in drv->states
> + *
> + * Returns true if the target state is coupled with cpus besides this one
> + */
> +bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
> +	struct cpuidle_driver *drv, int state)
> +{
> +	return drv->states[state].flags & CPUIDLE_FLAG_COUPLED;
> +}
> +
> +/**
> + * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
> + * @coupled: the struct coupled that contains the current cpu
> + *
> + * Returns true if all cpus coupled to this target state are in the wait loop
> + */
> +static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
> +{
> +	int alive;
> +	int waiting;
> +
> +	/*
> +	 * Read alive before reading waiting so a booting cpu is not treated as
> +	 * idle
> +	 */

Well, the comment doesn't really explain much.  In particular, why the boot CPU
could be treated as idle if the reads were in a different order.

> +	alive = atomic_read(&coupled->alive_count);
> +	smp_rmb();
> +	waiting = atomic_read(&coupled->waiting_count);

Have you considered using one atomic variable to accommodate both counters
such that the upper half contains one counter and the lower half contains
the other?
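[Editorial aside: one way to do that packing might look like the sketch below.  The names, the 16-bit split and the ready_waiting_counts field are made up and untested; the point is that both counters then live in one atomic_t, so a cpu can update its waiting and ready contributions in a single atomic operation, which would also seem to address the race discussed further down around cpuidle_coupled_set_not_waiting().]

#define CPUIDLE_COUPLED_WAITING_BITS	16
#define CPUIDLE_COUPLED_WAITING_MASK \
	((1 << CPUIDLE_COUPLED_WAITING_BITS) - 1)
#define CPUIDLE_COUPLED_READY_ONE	(1 << CPUIDLE_COUPLED_WAITING_BITS)

/* waiting count in the low half, ready count in the high half */
static inline int cpuidle_coupled_waiting(int counts)
{
	return counts & CPUIDLE_COUPLED_WAITING_MASK;
}

static inline int cpuidle_coupled_ready(int counts)
{
	return counts >> CPUIDLE_COUPLED_WAITING_BITS;
}

/* example: mark this cpu ready without touching the waiting half */
static int cpuidle_coupled_mark_ready(struct cpuidle_coupled *coupled)
{
	return atomic_add_return(CPUIDLE_COUPLED_READY_ONE,
				 &coupled->ready_waiting_counts);
}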

> +
> +	return (waiting == alive);
> +}
> +
> +/**
> + * cpuidle_coupled_get_state - determine the deepest idle state
> + * @dev: struct cpuidle_device for this cpu
> + * @coupled: the struct coupled that contains the current cpu
> + *
> + * Returns the deepest idle state that all coupled cpus can enter
> + */
> +static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
> +		struct cpuidle_coupled *coupled)
> +{
> +	int i;
> +	int state = INT_MAX;
> +
> +	for_each_cpu_mask(i, coupled->coupled_cpus)
> +		if (coupled->requested_state[i] != CPUIDLE_COUPLED_DEAD &&
> +		    coupled->requested_state[i] < state)
> +			state = coupled->requested_state[i];
> +
> +	BUG_ON(state >= dev->state_count || state < 0);

Do you have to crash the kernel here if the assertion doesn't hold?  Maybe
you could use WARN_ON() and return an error code?
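[Editorial aside: a non-fatal version of that check could look something like the sketch below; the caller in cpuidle_enter_state_coupled() would then have to cope with a negative return.]

	if (WARN_ON(state >= dev->state_count || state < 0))
		return -EINVAL;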

> +
> +	return state;
> +}
> +
> +static void cpuidle_coupled_poked(void *info)
> +{
> +	int cpu = (unsigned long)info;
> +	cpumask_clear_cpu(cpu, &cpuidle_coupled_poked_mask);
> +}
> +
> +/**
> + * cpuidle_coupled_poke - wake up a cpu that may be waiting
> + * @cpu: target cpu
> + *
> + * Ensures that the target cpu exits its waiting idle state (if it is in it)
> + * and will see updates to waiting_count before it re-enters its waiting idle
> + * state.
> + *
> + * If cpuidle_coupled_poked_mask is already set for the target cpu, that cpu
> + * either has or will soon have a pending IPI that will wake it out of idle,
> + * or it is currently processing the IPI and is not in idle.
> + */
> +static void cpuidle_coupled_poke(int cpu)
> +{
> +	struct call_single_data *csd = &per_cpu(cpuidle_coupled_poke_cb, cpu);
> +
> +	if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask))
> +		__smp_call_function_single(cpu, csd, 0);
> +}
> +
> +/**
> + * cpuidle_coupled_poke_others - wake up all other cpus that may be waiting
> + * @dev: struct cpuidle_device for this cpu
> + * @coupled: the struct coupled that contains the current cpu
> + *
> + * Calls cpuidle_coupled_poke on all other online cpus.
> + */
> +static void cpuidle_coupled_poke_others(struct cpuidle_device *dev,
> +		struct cpuidle_coupled *coupled)

It looks like you could simply pass cpu (not dev) to this function.
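[Editorial aside: presumably something along these lines, with the same body as posted and only the parameter changed; the caller would pass dev->cpu.]

static void cpuidle_coupled_poke_others(int this_cpu,
		struct cpuidle_coupled *coupled)
{
	int cpu;

	for_each_cpu_mask(cpu, coupled->coupled_cpus)
		if (cpu != this_cpu && cpu_online(cpu))
			cpuidle_coupled_poke(cpu);
}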

> +{
> +	int cpu;
> +
> +	for_each_cpu_mask(cpu, coupled->coupled_cpus)
> +		if (cpu != dev->cpu && cpu_online(cpu))
> +			cpuidle_coupled_poke(cpu);
> +}
> +
> +/**
> + * cpuidle_coupled_set_waiting - mark this cpu as in the wait loop
> + * @dev: struct cpuidle_device for this cpu
> + * @coupled: the struct coupled that contains the current cpu
> + * @next_state: the index in drv->states of the requested state for this cpu
> + *
> + * Updates the requested idle state for the specified cpuidle device,
> + * poking all coupled cpus out of idle if necessary to let them see the new
> + * state.
> + *
> + * Provides memory ordering around waiting_count.
> + */
> +static void cpuidle_coupled_set_waiting(struct cpuidle_device *dev,
> +		struct cpuidle_coupled *coupled, int next_state)

If you passed cpu (instead of dev) to cpuidle_coupled_poke_others(),
then you could pass cpu (instead of dev) to this function too, it seems.

> +{
> +	int alive;
> +
> +	BUG_ON(coupled->requested_state[dev->cpu] >= 0);

Would WARN_ON() + do nothing be too dangerous here?

> +
> +	coupled->requested_state[dev->cpu] = next_state;
> +
> +	/*
> +	 * If this is the last cpu to enter the waiting state, poke
> +	 * all the other cpus out of their waiting state so they can
> +	 * enter a deeper state.  This can race with one of the cpus
> +	 * exiting the waiting state due to an interrupt and
> +	 * decrementing waiting_count, see comment below.
> +	 */
> +	alive = atomic_read(&coupled->alive_count);
> +	if (atomic_inc_return(&coupled->waiting_count) == alive)
> +		cpuidle_coupled_poke_others(dev, coupled);
> +}
> +
> +/**
> + * cpuidle_coupled_set_not_waiting - mark this cpu as leaving the wait loop
> + * @dev: struct cpuidle_device for this cpu
> + * @coupled: the struct coupled that contains the current cpu
> + *
> + * Removes the requested idle state for the specified cpuidle device.
> + *
> + * Provides memory ordering around waiting_count.
> + */
> +static void cpuidle_coupled_set_not_waiting(struct cpuidle_device *dev,
> +		struct cpuidle_coupled *coupled)

It looks like dev doesn't have to be passed here, cpu would be enough.

> +{
> +	BUG_ON(coupled->requested_state[dev->cpu] < 0);

Well, like above?

> +
> +	/*
> +	 * Decrementing waiting_count can race with incrementing it in
> +	 * cpuidle_coupled_set_waiting, but that's OK.  Worst case, some
> +	 * cpus will increment ready_count and then spin until they
> +	 * notice that this cpu has cleared its requested_state.
> +	 */

So it looks like having ready_count and waiting_count in one atomic variable
can spare us this particular race condition.

> +
> +	smp_mb__before_atomic_dec();
> +	atomic_dec(&coupled->waiting_count);
> +	smp_mb__after_atomic_dec();

Do you really need both the before and after barriers here?  If so, then why?

> +
> +	coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
> +}
> +
> +/**
> + * cpuidle_enter_state_coupled - attempt to enter a state with coupled cpus
> + * @dev: struct cpuidle_device for the current cpu
> + * @drv: struct cpuidle_driver for the platform
> + * @next_state: index of the requested state in drv->states
> + *
> + * Coordinate with coupled cpus to enter the target state.  This is a two
> + * stage process.  In the first stage, the cpus are operating independently,
> + * and may call into cpuidle_enter_state_coupled at completely different times.
> + * To save as much power as possible, the first cpus to call this function will
> + * go to an intermediate state (the cpuidle_device's safe state), and wait for
> + * all the other cpus to call this function.  Once all coupled cpus are idle,
> + * the second stage will start.  Each coupled cpu will spin until all cpus have
> + * guaranteed that they will call the target state's enter function.

It would be good to mention the conditions for calling this function (e.g.
interrupts disabled on the local CPU).
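[Editorial aside: based on how the function is used in this series (called from cpuidle_idle_call() with interrupts off, and enabling them again before returning), the kerneldoc addition might read roughly as follows; wording is only a suggestion.]

 * Must be called with interrupts disabled.  It may enable interrupts while
 * waiting for the other cpus to become ready, and returns with interrupts
 * enabled.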

> + */
> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int next_state)
> +{
> +	int entered_state = -1;
> +	struct cpuidle_coupled *coupled = dev->coupled;
> +	int alive;
> +
> +	if (!coupled)
> +		return -EINVAL;
> +
> +	BUG_ON(atomic_read(&coupled->ready_count));

Again, I'd do a WARN_ON() and return an error code from here (to avoid crashing
the kernel).
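[Editorial aside: since this function already returns an error code, a sketch of that could simply be:]

	if (WARN_ON(atomic_read(&coupled->ready_count)))
		return -EINVAL;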

> +	cpuidle_coupled_set_waiting(dev, coupled, next_state);
> +
> +retry:
> +	/*
> +	 * Wait for all coupled cpus to be idle, using the deepest state
> +	 * allowed for a single cpu.
> +	 */
> +	while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
> +		entered_state = cpuidle_enter_state(dev, drv,
> +			dev->safe_state_index);
> +
> +		local_irq_enable();
> +		while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
> +			cpu_relax();

Hmm.  What exactly is this loop supposed to achieve?

> +		local_irq_disable();

Anyway, you seem to be calling it twice along with this enabling/disabling of
interrupts.  I'd put that into a separate function and explain its role in a
kerneldoc comment.
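[Editorial aside: a possible shape for that helper, as a sketch only, with a made-up name; both call sites in cpuidle_enter_state_coupled() would then collapse to cpuidle_coupled_clear_pokes(dev->cpu).]

/**
 * cpuidle_coupled_clear_pokes - spin until outstanding pokes are processed
 * @cpu: this cpu
 *
 * Briefly enables interrupts and spins until any outstanding poke IPIs for
 * this cpu have been handled and the poke mask bit has been cleared, then
 * disables interrupts again before returning.
 */
static void cpuidle_coupled_clear_pokes(int cpu)
{
	local_irq_enable();
	while (cpumask_test_cpu(cpu, &cpuidle_coupled_poked_mask))
		cpu_relax();
	local_irq_disable();
}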

> +	}
> +
> +	/* give a chance to process any remaining pokes */
> +	local_irq_enable();
> +	while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
> +		cpu_relax();
> +	local_irq_disable();
> +
> +	if (need_resched()) {
> +		cpuidle_coupled_set_not_waiting(dev, coupled);
> +		goto out;
> +	}
> +
> +	/*
> +	 * All coupled cpus are probably idle.  There is a small chance that
> +	 * one of the other cpus just became active.  Increment a counter when
> +	 * ready, and spin until all coupled cpus have incremented the counter.
> +	 * Once a cpu has incremented the counter, it cannot abort idle and must
> +	 * spin until either the count has hit alive_count, or another cpu
> +	 * leaves idle.
> +	 */
> +
> +	smp_mb__before_atomic_inc();
> +	atomic_inc(&coupled->ready_count);
> +	smp_mb__after_atomic_inc();

It seems that at least one of these barriers is unnecessary ...

> +	/* alive_count can't change while ready_count > 0 */
> +	alive = atomic_read(&coupled->alive_count);
> +	while (atomic_read(&coupled->ready_count) != alive) {
> +		/* Check if any other cpus bailed out of idle. */
> +		if (!cpuidle_coupled_cpus_waiting(coupled)) {
> +			atomic_dec(&coupled->ready_count);
> +			smp_mb__after_atomic_dec();
> +			goto retry;
> +		}
> +
> +		cpu_relax();
> +	}
> +
> +	/* all cpus have acked the coupled state */
> +	smp_rmb();

What is the barrier here for?

> +
> +	next_state = cpuidle_coupled_get_state(dev, coupled);
> +
> +	entered_state = cpuidle_enter_state(dev, drv, next_state);
> +
> +	cpuidle_coupled_set_not_waiting(dev, coupled);
> +	atomic_dec(&coupled->ready_count);
> +	smp_mb__after_atomic_dec();
> +
> +out:
> +	/*
> +	 * Normal cpuidle states are expected to return with irqs enabled.
> +	 * That leads to an inefficiency where a cpu receiving an interrupt
> +	 * that brings it out of idle will process that interrupt before
> +	 * exiting the idle enter function and decrementing ready_count.  All
> +	 * other cpus will need to spin waiting for the cpu that is processing
> +	 * the interrupt.  If the driver returns with interrupts disabled,
> +	 * all other cpus will loop back into the safe idle state instead of
> +	 * spinning, saving power.
> +	 *
> +	 * Calling local_irq_enable here allows coupled states to return with
> +	 * interrupts disabled, but won't cause problems for drivers that
> +	 * exit with interrupts enabled.
> +	 */
> +	local_irq_enable();
> +
> +	/*
> +	 * Wait until all coupled cpus have exited idle.  There is no risk that
> +	 * a cpu exits and re-enters the ready state because this cpu has
> +	 * already decremented its waiting_count.
> +	 */
> +	while (atomic_read(&coupled->ready_count) != 0)
> +		cpu_relax();
> +
> +	smp_rmb();

And here?

> +
> +	return entered_state;
> +}
> +
> +/**
> + * cpuidle_coupled_register_device - register a coupled cpuidle device
> + * @dev: struct cpuidle_device for the current cpu
> + *
> + * Called from cpuidle_register_device to handle coupled idle init.  Finds the
> + * cpuidle_coupled struct for this set of coupled cpus, or creates one if none
> + * exists yet.
> + */
> +int cpuidle_coupled_register_device(struct cpuidle_device *dev)
> +{
> +	int cpu;
> +	struct cpuidle_device *other_dev;
> +	struct call_single_data *csd;
> +	struct cpuidle_coupled *coupled;
> +
> +	if (cpumask_empty(&dev->coupled_cpus))
> +		return 0;
> +
> +	for_each_cpu_mask(cpu, dev->coupled_cpus) {
> +		other_dev = per_cpu(cpuidle_devices, cpu);
> +		if (other_dev && other_dev->coupled) {
> +			coupled = other_dev->coupled;
> +			goto have_coupled;
> +		}
> +	}
> +
> +	/* No existing coupled info found, create a new one */
> +	coupled = kzalloc(sizeof(struct cpuidle_coupled), GFP_KERNEL);
> +	if (!coupled)
> +		return -ENOMEM;
> +
> +	coupled->coupled_cpus = dev->coupled_cpus;
> +	for_each_cpu_mask(cpu, coupled->coupled_cpus)
> +		coupled->requested_state[cpu] = CPUIDLE_COUPLED_DEAD;
> +
> +have_coupled:
> +	dev->coupled = coupled;
> +	BUG_ON(!cpumask_equal(&dev->coupled_cpus, &coupled->coupled_cpus));
> +
> +	if (cpu_online(dev->cpu)) {
> +		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
> +		atomic_inc(&coupled->alive_count);
> +	}
> +
> +	coupled->refcnt++;
> +
> +	csd = &per_cpu(cpuidle_coupled_poke_cb, dev->cpu);
> +	csd->func = cpuidle_coupled_poked;
> +	csd->info = (void *)(unsigned long)dev->cpu;
> +
> +	return 0;
> +}
> +
> +/**
> + * cpuidle_coupled_unregister_device - unregister a coupled cpuidle device
> + * @dev: struct cpuidle_device for the current cpu
> + *
> + * Called from cpuidle_unregister_device to tear down coupled idle.  Removes the
> + * cpu from the coupled idle set, and frees the cpuidle_coupled struct if
> + * this was the last cpu in the set.
> + */
> +void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
> +{
> +	struct cpuidle_coupled *coupled = dev->coupled;
> +
> +	if (cpumask_empty(&dev->coupled_cpus))
> +		return;
> +
> +	if (!--coupled->refcnt)
> +		kfree(coupled);
> +	dev->coupled = NULL;
> +}
> +
> +/**
> + * cpuidle_coupled_cpu_set_alive - adjust alive_count during hotplug transitions
> + * @cpu: target cpu number
> + * @alive: whether the target cpu is going up or down
> + *
> + * Run on the cpu that is bringing up the target cpu, before the target cpu
> + * has been booted, or after the target cpu is completely dead.
> + */
> +static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
> +{
> +	struct cpuidle_device *dev;
> +	struct cpuidle_coupled *coupled;
> +
> +	mutex_lock(&cpuidle_lock);
> +
> +	dev = per_cpu(cpuidle_devices, cpu);
> +	if (!dev->coupled)
> +		goto out;
> +
> +	coupled = dev->coupled;
> +
> +	/*
> +	 * waiting_count must be at least 1 less than alive_count, because
> +	 * this cpu is not waiting.  Spin until all cpus have noticed this cpu
> +	 * is not idle and exited the ready loop before changing alive_count.
> +	 */
> +	while (atomic_read(&coupled->ready_count))
> +		cpu_relax();
> +
> +	if (alive) {
> +		smp_mb__before_atomic_inc();
> +		atomic_inc(&coupled->alive_count);
> +		smp_mb__after_atomic_inc();
> +		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
> +	} else {
> +		smp_mb__before_atomic_dec();
> +		atomic_dec(&coupled->alive_count);
> +		smp_mb__after_atomic_dec();
> +		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;

There are too many SMP barriers above, but I'm not quite sure which of them (if
any) are really necessary.

> +	}
> +
> +out:
> +	mutex_unlock(&cpuidle_lock);
> +}
> +
> +/**
> + * cpuidle_coupled_cpu_notify - notifier called during hotplug transitions
> + * @nb: notifier block
> + * @action: hotplug transition
> + * @hcpu: target cpu number
> + *
> + * Called when a cpu is brought online or offline using hotplug.  Updates the
> + * coupled cpu set appropriately.
> + */
> +static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
> +		unsigned long action, void *hcpu)
> +{
> +	int cpu = (unsigned long)hcpu;
> +
> +	switch (action & ~CPU_TASKS_FROZEN) {
> +	case CPU_DEAD:
> +	case CPU_UP_CANCELED:
> +		cpuidle_coupled_cpu_set_alive(cpu, false);
> +		break;
> +	case CPU_UP_PREPARE:
> +		cpuidle_coupled_cpu_set_alive(cpu, true);
> +		break;
> +	}
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block cpuidle_coupled_cpu_notifier = {
> +	.notifier_call = cpuidle_coupled_cpu_notify,
> +};
> +
> +static int __init cpuidle_coupled_init(void)
> +{
> +	return register_cpu_notifier(&cpuidle_coupled_cpu_notifier);
> +}
> +core_initcall(cpuidle_coupled_init);
> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
> index 4540672..e81cfda 100644
> --- a/drivers/cpuidle/cpuidle.c
> +++ b/drivers/cpuidle/cpuidle.c
> @@ -171,7 +171,11 @@ int cpuidle_idle_call(void)
>  	trace_power_start_rcuidle(POWER_CSTATE, next_state, dev->cpu);
>  	trace_cpu_idle_rcuidle(next_state, dev->cpu);
>  
> -	entered_state = cpuidle_enter_state(dev, drv, next_state);
> +	if (cpuidle_state_is_coupled(dev, drv, next_state))
> +		entered_state = cpuidle_enter_state_coupled(dev, drv,
> +							    next_state);
> +	else
> +		entered_state = cpuidle_enter_state(dev, drv, next_state);
>  
>  	trace_power_end_rcuidle(dev->cpu);
>  	trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
> @@ -407,9 +411,16 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
>  	if (ret)
>  		goto err_sysfs;
>  
> +	ret = cpuidle_coupled_register_device(dev);
> +	if (ret)
> +		goto err_coupled;
> +
>  	dev->registered = 1;
>  	return 0;
>  
> +err_coupled:
> +	cpuidle_remove_sysfs(cpu_dev);
> +	wait_for_completion(&dev->kobj_unregister);
>  err_sysfs:
>  	list_del(&dev->device_list);
>  	per_cpu(cpuidle_devices, dev->cpu) = NULL;
> @@ -464,6 +475,8 @@ void cpuidle_unregister_device(struct cpuidle_device *dev)
>  	wait_for_completion(&dev->kobj_unregister);
>  	per_cpu(cpuidle_devices, dev->cpu) = NULL;
>  
> +	cpuidle_coupled_unregister_device(dev);
> +
>  	cpuidle_resume_and_unlock();
>  
>  	module_put(cpuidle_driver->owner);
> diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
> index d8a3ccc..76e7f69 100644
> --- a/drivers/cpuidle/cpuidle.h
> +++ b/drivers/cpuidle/cpuidle.h
> @@ -32,4 +32,34 @@ extern int cpuidle_enter_state(struct cpuidle_device *dev,
>  extern int cpuidle_add_sysfs(struct device *dev);
>  extern void cpuidle_remove_sysfs(struct device *dev);
>  
> +#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
> +bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int state);
> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int next_state);
> +int cpuidle_coupled_register_device(struct cpuidle_device *dev);
> +void cpuidle_coupled_unregister_device(struct cpuidle_device *dev);
> +#else
> +static inline bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int state)
> +{
> +	return false;
> +}
> +
> +static inline int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int next_state)
> +{
> +	return -1;
> +}
> +
> +static inline int cpuidle_coupled_register_device(struct cpuidle_device *dev)
> +{
> +	return 0;
> +}
> +
> +static inline void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
> +{
> +}
> +#endif
> +
>  #endif /* __DRIVER_CPUIDLE_H */
> diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
> index 6c26a3d..6038448 100644
> --- a/include/linux/cpuidle.h
> +++ b/include/linux/cpuidle.h
> @@ -57,6 +57,7 @@ struct cpuidle_state {
>  
>  /* Idle State Flags */
>  #define CPUIDLE_FLAG_TIME_VALID	(0x01) /* is residency time measurable? */
> +#define CPUIDLE_FLAG_COUPLED	(0x02) /* state applies to multiple cpus */
>  
>  #define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000)
>  
> @@ -100,6 +101,12 @@ struct cpuidle_device {
>  	struct list_head 	device_list;
>  	struct kobject		kobj;
>  	struct completion	kobj_unregister;
> +
> +#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
> +	int			safe_state_index;
> +	cpumask_t		coupled_cpus;
> +	struct cpuidle_coupled	*coupled;
> +#endif
>  };
>  
>  DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices);

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

> +
> +	return 0;
> +}
> +
> +/**
> + * cpuidle_coupled_unregister_device - unregister a coupled cpuidle device
> + * @dev: struct cpuidle_device for the current cpu
> + *
> + * Called from cpuidle_unregister_device to tear down coupled idle.  Removes the
> + * cpu from the coupled idle set, and frees the cpuidle_coupled_info struct if
> + * this was the last cpu in the set.
> + */
> +void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
> +{
> +	struct cpuidle_coupled *coupled = dev->coupled;
> +
> +	if (cpumask_empty(&dev->coupled_cpus))
> +		return;
> +
> +	if (--coupled->refcnt)
> +		kfree(coupled);
> +	dev->coupled = NULL;
> +}
> +
> +/**
> + * cpuidle_coupled_cpu_set_alive - adjust alive_count during hotplug transitions
> + * @cpu: target cpu number
> + * @alive: whether the target cpu is going up or down
> + *
> + * Run on the cpu that is bringing up the target cpu, before the target cpu
> + * has been booted, or after the target cpu is completely dead.
> + */
> +static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
> +{
> +	struct cpuidle_device *dev;
> +	struct cpuidle_coupled *coupled;
> +
> +	mutex_lock(&cpuidle_lock);
> +
> +	dev = per_cpu(cpuidle_devices, cpu);
> +	if (!dev->coupled)
> +		goto out;
> +
> +	coupled = dev->coupled;
> +
> +	/*
> +	 * waiting_count must be at least 1 less than alive_count, because
> +	 * this cpu is not waiting.  Spin until all cpus have noticed this cpu
> +	 * is not idle and exited the ready loop before changing alive_count.
> +	 */
> +	while (atomic_read(&coupled->ready_count))
> +		cpu_relax();
> +
> +	if (alive) {
> +		smp_mb__before_atomic_inc();
> +		atomic_inc(&coupled->alive_count);
> +		smp_mb__after_atomic_inc();
> +		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
> +	} else {
> +		smp_mb__before_atomic_dec();
> +		atomic_dec(&coupled->alive_count);
> +		smp_mb__after_atomic_dec();
> +		coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;

There are too many SMP barriers above, but I'm not quite sure which of them (if
any) are really necessary.

> +	}
> +
> +out:
> +	mutex_unlock(&cpuidle_lock);
> +}
> +
> +/**
> + * cpuidle_coupled_cpu_notify - notifier called during hotplug transitions
> + * @nb: notifier block
> + * @action: hotplug transition
> + * @hcpu: target cpu number
> + *
> + * Called when a cpu is brought on or offline using hotplug.  Updates the
> + * coupled cpu set appropriately
> + */
> +static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
> +		unsigned long action, void *hcpu)
> +{
> +	int cpu = (unsigned long)hcpu;
> +
> +	switch (action & ~CPU_TASKS_FROZEN) {
> +	case CPU_DEAD:
> +	case CPU_UP_CANCELED:
> +		cpuidle_coupled_cpu_set_alive(cpu, false);
> +		break;
> +	case CPU_UP_PREPARE:
> +		cpuidle_coupled_cpu_set_alive(cpu, true);
> +		break;
> +	}
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block cpuidle_coupled_cpu_notifier = {
> +	.notifier_call = cpuidle_coupled_cpu_notify,
> +};
> +
> +static int __init cpuidle_coupled_init(void)
> +{
> +	return register_cpu_notifier(&cpuidle_coupled_cpu_notifier);
> +}
> +core_initcall(cpuidle_coupled_init);
> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
> index 4540672..e81cfda 100644
> --- a/drivers/cpuidle/cpuidle.c
> +++ b/drivers/cpuidle/cpuidle.c
> @@ -171,7 +171,11 @@ int cpuidle_idle_call(void)
>  	trace_power_start_rcuidle(POWER_CSTATE, next_state, dev->cpu);
>  	trace_cpu_idle_rcuidle(next_state, dev->cpu);
>  
> -	entered_state = cpuidle_enter_state(dev, drv, next_state);
> +	if (cpuidle_state_is_coupled(dev, drv, next_state))
> +		entered_state = cpuidle_enter_state_coupled(dev, drv,
> +							    next_state);
> +	else
> +		entered_state = cpuidle_enter_state(dev, drv, next_state);
>  
>  	trace_power_end_rcuidle(dev->cpu);
>  	trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
> @@ -407,9 +411,16 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
>  	if (ret)
>  		goto err_sysfs;
>  
> +	ret = cpuidle_coupled_register_device(dev);
> +	if (ret)
> +		goto err_coupled;
> +
>  	dev->registered = 1;
>  	return 0;
>  
> +err_coupled:
> +	cpuidle_remove_sysfs(cpu_dev);
> +	wait_for_completion(&dev->kobj_unregister);
>  err_sysfs:
>  	list_del(&dev->device_list);
>  	per_cpu(cpuidle_devices, dev->cpu) = NULL;
> @@ -464,6 +475,8 @@ void cpuidle_unregister_device(struct cpuidle_device *dev)
>  	wait_for_completion(&dev->kobj_unregister);
>  	per_cpu(cpuidle_devices, dev->cpu) = NULL;
>  
> +	cpuidle_coupled_unregister_device(dev);
> +
>  	cpuidle_resume_and_unlock();
>  
>  	module_put(cpuidle_driver->owner);
> diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
> index d8a3ccc..76e7f69 100644
> --- a/drivers/cpuidle/cpuidle.h
> +++ b/drivers/cpuidle/cpuidle.h
> @@ -32,4 +32,34 @@ extern int cpuidle_enter_state(struct cpuidle_device *dev,
>  extern int cpuidle_add_sysfs(struct device *dev);
>  extern void cpuidle_remove_sysfs(struct device *dev);
>  
> +#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
> +bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int state);
> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int next_state);
> +int cpuidle_coupled_register_device(struct cpuidle_device *dev);
> +void cpuidle_coupled_unregister_device(struct cpuidle_device *dev);
> +#else
> +static inline bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int state)
> +{
> +	return false;
> +}
> +
> +static inline int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> +		struct cpuidle_driver *drv, int next_state)
> +{
> +	return -1;
> +}
> +
> +static inline int cpuidle_coupled_register_device(struct cpuidle_device *dev)
> +{
> +	return 0;
> +}
> +
> +static inline void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
> +{
> +}
> +#endif
> +
>  #endif /* __DRIVER_CPUIDLE_H */
> diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
> index 6c26a3d..6038448 100644
> --- a/include/linux/cpuidle.h
> +++ b/include/linux/cpuidle.h
> @@ -57,6 +57,7 @@ struct cpuidle_state {
>  
>  /* Idle State Flags */
>  #define CPUIDLE_FLAG_TIME_VALID	(0x01) /* is residency time measurable? */
> +#define CPUIDLE_FLAG_COUPLED	(0x02) /* state applies to multiple cpus */
>  
>  #define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000)
>  
> @@ -100,6 +101,12 @@ struct cpuidle_device {
>  	struct list_head 	device_list;
>  	struct kobject		kobj;
>  	struct completion	kobj_unregister;
> +
> +#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
> +	int			safe_state_index;
> +	cpumask_t		coupled_cpus;
> +	struct cpuidle_coupled	*coupled;
> +#endif
>  };
>  
>  DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices);

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [linux-pm] [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus
  2012-05-03 22:14     ` Rafael J. Wysocki
  (?)
@ 2012-05-03 23:09       ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-05-03 23:09 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: linux-pm, linux-kernel, Kevin Hilman, Len Brown, Russell King,
	Greg Kroah-Hartman, Kay Sievers, Amit Kucheria, Arjan van de Ven,
	Arnd Bergmann, linux-arm-kernel

On Thu, May 3, 2012 at 3:14 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> On Monday, April 30, 2012, Colin Cross wrote:
<snip>

>> +/**
>> + * DOC: Coupled cpuidle states
>> + *
>> + * On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
>> + * cpus cannot be independently powered down, either due to
>> + * sequencing restrictions (on Tegra 2, cpu 0 must be the last to
>> + * power down), or due to HW bugs (on OMAP4460, a cpu powering up
>> + * will corrupt the gic state unless the other cpu runs a work
>> + * around).  Each cpu has a power state that it can enter without
>> + * coordinating with the other cpu (usually Wait For Interrupt, or
>> + * WFI), and one or more "coupled" power states that affect blocks
>> + * shared between the cpus (L2 cache, interrupt controller, and
>> + * sometimes the whole SoC).  Entering a coupled power state must
>> + * be tightly controlled on both cpus.
>> + *
>> + * The easiest solution to implementing coupled cpu power states is
>> + * to hotplug all but one cpu whenever possible, usually using a
>> + * cpufreq governor that looks at cpu load to determine when to
>> + * enable the secondary cpus.  This causes problems, as hotplug is an
>> + * expensive operation, so the number of hotplug transitions must be
>> + * minimized, leading to very slow response to loads, often on the
>> + * order of seconds.
>
> I'd drop the above paragraph entirely.  It doesn't say much about what's in
> the file and refers to an obviously suboptimal approach.

Sure.


<snip>

>> +/*
>> + * The cpuidle_coupled_poked_mask masked is used to avoid calling
>
> s/masked/mask/ perhaps?

Sure.

<snip>

>> +/**
>> + * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
>> + * @coupled: the struct coupled that contains the current cpu
>> + *
>> + * Returns true if all cpus coupled to this target state are in the wait loop
>> + */
>> +static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
>> +{
>> +     int alive;
>> +     int waiting;
>> +
>> +     /*
>> +      * Read alive before reading waiting so a booting cpu is not treated as
>> +      * idle
>> +      */
>
> Well, the comment doesn't really explain much.  In particular, why the boot CPU
> could be treated as idle if the reads were in a different order.

Hm, I think the race condition is on a cpu going down.  What about:
Read alive before reading waiting.  If waiting is read before alive,
this cpu could see another cpu as waiting just before it goes offline,
between when the other cpu decrements waiting and when it decrements
alive, which could cause alive == waiting when one cpu is not waiting.
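
Something like this, next to the reads themselves (a sketch of the
reworded comment, reusing the existing code):

	/*
	 * Read alive before reading waiting.  If waiting were read first,
	 * this cpu could see another cpu as waiting just before that cpu
	 * went offline, in the window between it decrementing waiting_count
	 * and decrementing alive_count, which could make waiting == alive
	 * while one cpu is not actually waiting.
	 */
	alive = atomic_read(&coupled->alive_count);
	smp_rmb();
	waiting = atomic_read(&coupled->waiting_count);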

>> +     alive = atomic_read(&coupled->alive_count);
>> +     smp_rmb();
>> +     waiting = atomic_read(&coupled->waiting_count);
>
> Have you considered using one atomic variable to accommodate both counters
> such that the upper half contains one counter and the lower half contains
> the other?

There are 3 counters (alive, waiting, and ready).  Do you want me to
squish all of them into a single atomic_t, which would limit us to
1023 cpus?
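
(For reference, the 1023 number comes from splitting a 32 bit atomic_t
into three 10 bit fields, roughly like this -- sketch only, the names
are made up:)

#define COUPLED_FIELD_BITS	10
#define COUPLED_FIELD_MASK	((1 << COUPLED_FIELD_BITS) - 1)
#define WAITING_SHIFT		0
#define READY_SHIFT		COUPLED_FIELD_BITS
#define ALIVE_SHIFT		(2 * COUPLED_FIELD_BITS)

/* read one of the three packed counters */
static inline int coupled_get_count(atomic_t *counts, int shift)
{
	return (atomic_read(counts) >> shift) & COUPLED_FIELD_MASK;
}

/* adjust one of the three packed counters */
static inline void coupled_add_count(atomic_t *counts, int shift, int val)
{
	atomic_add(val << shift, counts);
}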

>> +
>> +     return (waiting == alive);
>> +}
>> +
>> +/**
>> + * cpuidle_coupled_get_state - determine the deepest idle state
>> + * @dev: struct cpuidle_device for this cpu
>> + * @coupled: the struct coupled that contains the current cpu
>> + *
>> + * Returns the deepest idle state that all coupled cpus can enter
>> + */
>> +static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
>> +             struct cpuidle_coupled *coupled)
>> +{
>> +     int i;
>> +     int state = INT_MAX;
>> +
>> +     for_each_cpu_mask(i, coupled->coupled_cpus)
>> +             if (coupled->requested_state[i] != CPUIDLE_COUPLED_DEAD &&
>> +                 coupled->requested_state[i] < state)
>> +                     state = coupled->requested_state[i];
>> +
>> +     BUG_ON(state >= dev->state_count || state < 0);
>
> Do you have to crash the kernel here if the assertion doesn't hold?  Maybe
> you could use WARN_ON() and return error code?

If this BUG_ON is hit, there is a race condition somewhere that
allowed a cpu out of idle unexpectedly, and there is no way to recover
without more race conditions.  I don't expect this to ever happen; it
is mostly there to detect race conditions during development.  Should
I drop it completely?
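
(One possible middle ground, sketched with a hypothetical fallback:
warn loudly but fall back to the safe state instead of taking the
machine down:)

	if (WARN_ON_ONCE(state >= dev->state_count || state < 0))
		return dev->safe_state_index;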

>> +
>> +     return state;
>> +}
>> +
>> +static void cpuidle_coupled_poked(void *info)
>> +{
>> +     int cpu = (unsigned long)info;
>> +     cpumask_clear_cpu(cpu, &cpuidle_coupled_poked_mask);
>> +}
>> +
>> +/**
>> + * cpuidle_coupled_poke - wake up a cpu that may be waiting
>> + * @cpu: target cpu
>> + *
>> + * Ensures that the target cpu exits it's waiting idle state (if it is in it)
>> + * and will see updates to waiting_count before it re-enters it's waiting idle
>> + * state.
>> + *
>> + * If cpuidle_coupled_poked_mask is already set for the target cpu, that cpu
>> + * either has or will soon have a pending IPI that will wake it out of idle,
>> + * or it is currently processing the IPI and is not in idle.
>> + */
>> +static void cpuidle_coupled_poke(int cpu)
>> +{
>> +     struct call_single_data *csd = &per_cpu(cpuidle_coupled_poke_cb, cpu);
>> +
>> +     if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask))
>> +             __smp_call_function_single(cpu, csd, 0);
>> +}
>> +
>> +/**
>> + * cpuidle_coupled_poke_others - wake up all other cpus that may be waiting
>> + * @dev: struct cpuidle_device for this cpu
>> + * @coupled: the struct coupled that contains the current cpu
>> + *
>> + * Calls cpuidle_coupled_poke on all other online cpus.
>> + */
>> +static void cpuidle_coupled_poke_others(struct cpuidle_device *dev,
>> +             struct cpuidle_coupled *coupled)
>
> It looks like you could simply pass cpu (not dev) to this function.

OK.
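
(i.e. something like this -- same body as below, just taking the cpu
number:)

static void cpuidle_coupled_poke_others(int this_cpu,
		struct cpuidle_coupled *coupled)
{
	int cpu;

	for_each_cpu_mask(cpu, coupled->coupled_cpus)
		if (cpu != this_cpu && cpu_online(cpu))
			cpuidle_coupled_poke(cpu);
}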

>> +{
>> +     int cpu;
>> +
>> +     for_each_cpu_mask(cpu, coupled->coupled_cpus)
>> +             if (cpu != dev->cpu && cpu_online(cpu))
>> +                     cpuidle_coupled_poke(cpu);
>> +}
>> +
>> +/**
>> + * cpuidle_coupled_set_waiting - mark this cpu as in the wait loop
>> + * @dev: struct cpuidle_device for this cpu
>> + * @coupled: the struct coupled that contains the current cpu
>> + * @next_state: the index in drv->states of the requested state for this cpu
>> + *
>> + * Updates the requested idle state for the specified cpuidle device,
>> + * poking all coupled cpus out of idle if necessary to let them see the new
>> + * state.
>> + *
>> + * Provides memory ordering around waiting_count.
>> + */
>> +static void cpuidle_coupled_set_waiting(struct cpuidle_device *dev,
>> +             struct cpuidle_coupled *coupled, int next_state)
>
> If you passed cpu (instead of dev) to cpuidle_coupled_poke_others(),
> then you could pass cpu (instead of dev) to this function too, it seems.

OK.

>> +{
>> +     int alive;
>> +
>> +     BUG_ON(coupled->requested_state[dev->cpu] >= 0);
>
> Would be WARN_ON() + do nothing too dangerous here?

If this BUG_ON is hit, then this cpu exited idle without clearing its
waiting state, which could cause another cpu to enter the deeper idle
state while this cpu is still running.  The counters would be out of
sync, so it's not easy to recover.  Again, this is to detect race
conditions during development, but should never happen.  Should I drop
it?

>> +
>> +     coupled->requested_state[dev->cpu] = next_state;
>> +
>> +     /*
>> +      * If this is the last cpu to enter the waiting state, poke
>> +      * all the other cpus out of their waiting state so they can
>> +      * enter a deeper state.  This can race with one of the cpus
>> +      * exiting the waiting state due to an interrupt and
>> +      * decrementing waiting_count, see comment below.
>> +      */
>> +     alive = atomic_read(&coupled->alive_count);
>> +     if (atomic_inc_return(&coupled->waiting_count) == alive)
>> +             cpuidle_coupled_poke_others(dev, coupled);
>> +}
>> +
>> +/**
>> + * cpuidle_coupled_set_not_waiting - mark this cpu as leaving the wait loop
>> + * @dev: struct cpuidle_device for this cpu
>> + * @coupled: the struct coupled that contains the current cpu
>> + *
>> + * Removes the requested idle state for the specified cpuidle device.
>> + *
>> + * Provides memory ordering around waiting_count.
>> + */
>> +static void cpuidle_coupled_set_not_waiting(struct cpuidle_device *dev,
>> +             struct cpuidle_coupled *coupled)
>
> It looks like dev doesn't have to be passed here, cpu would be enough.
>
>> +{
>> +     BUG_ON(coupled->requested_state[dev->cpu] < 0);
>
> Well, like above?
Same as above.

>> +
>> +     /*
>> +      * Decrementing waiting_count can race with incrementing it in
>> +      * cpuidle_coupled_set_waiting, but that's OK.  Worst case, some
>> +      * cpus will increment ready_count and then spin until they
>> +      * notice that this cpu has cleared it's requested_state.
>> +      */
>
> So it looks like having ready_count and waiting_count in one atomic variable
> can spare us this particular race condition.
As above, there are 3 counters here, alive, ready, and waiting.

>> +
>> +     smp_mb__before_atomic_dec();
>> +     atomic_dec(&coupled->waiting_count);
>> +     smp_mb__after_atomic_dec();
>
> Do you really need both the before and after barriers here?  If so, then why?

I believe so; waiting is ordered vs. alive and ready, one barrier for
each.  Do you want the answers to these questions here or in the
code?  I had comments for every barrier use during development, but it
made it too hard to follow the flow of the code.  I could add a
comment describing the ordering requirements instead, but it's still
hard to translate that to the required barrier locations.
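
The kind of summary comment I mean would look roughly like this (sketch
only, collecting the pairings I describe further down in this mail):

/*
 * Memory ordering requirements:
 *
 *  - waiting_count is ordered against both alive_count and ready_count;
 *    the barriers around the waiting_count dec provide one for each.
 *  - For the ready_count inc, the barrier before orders it against
 *    waiting_count, and the barrier after orders it against alive_count
 *    and requested_state.
 *  - For the alive_count updates, the barriers before order ready_count
 *    vs. alive_count, and the barriers after order alive_count vs.
 *    requested_state and future waiting_count increments.
 */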

>> +
>> +     coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
>> +}
>> +
>> +/**
>> + * cpuidle_enter_state_coupled - attempt to enter a state with coupled cpus
>> + * @dev: struct cpuidle_device for the current cpu
>> + * @drv: struct cpuidle_driver for the platform
>> + * @next_state: index of the requested state in drv->states
>> + *
>> + * Coordinate with coupled cpus to enter the target state.  This is a two
>> + * stage process.  In the first stage, the cpus are operating independently,
>> + * and may call into cpuidle_enter_state_coupled at completely different times.
>> + * To save as much power as possible, the first cpus to call this function will
>> + * go to an intermediate state (the cpuidle_device's safe state), and wait for
>> + * all the other cpus to call this function.  Once all coupled cpus are idle,
>> + * the second stage will start.  Each coupled cpu will spin until all cpus have
>> + * guaranteed that they will call the target_state.
>
> It would be good to mention the conditions for calling this function (eg.
> interrupts disabled on the local CPU).

OK.

>> + */
>> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
>> +             struct cpuidle_driver *drv, int next_state)
>> +{
>> +     int entered_state = -1;
>> +     struct cpuidle_coupled *coupled = dev->coupled;
>> +     int alive;
>> +
>> +     if (!coupled)
>> +             return -EINVAL;
>> +
>> +     BUG_ON(atomic_read(&coupled->ready_count));
>
> Again, I'd do a WARN_ON() and return error code from here (to avoid crashing
> the kernel).
Same as above: if ready_count is not 0 here, then the counters are
out of sync and something is about to go horribly wrong, like cutting
power to a running cpu.

>> +     cpuidle_coupled_set_waiting(dev, coupled, next_state);
>> +
>> +retry:
>> +     /*
>> +      * Wait for all coupled cpus to be idle, using the deepest state
>> +      * allowed for a single cpu.
>> +      */
>> +     while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
>> +             entered_state = cpuidle_enter_state(dev, drv,
>> +                     dev->safe_state_index);
>> +
>> +             local_irq_enable();
>> +             while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
>> +                     cpu_relax();
>
> Hmm.  What exactly is this loop supposed to achieve?
This is to ensure that the outstanding wakeups have been processed, so
we don't go to idle with an interrupt pending and immediately wake up.

>> +             local_irq_disable();
>
> Anyway, you seem to be calling it twice along with this enabling/disabling of
> interrupts.  I'd put that into a separate function and explain its role in a
> kerneldoc comment.

I left it here to be obvious that I was enabling interrupts in the
idle path, but I can refactor it out if you prefer.

>> +     }
>> +
>> +     /* give a chance to process any remaining pokes */
>> +     local_irq_enable();
>> +     while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
>> +             cpu_relax();
>> +     local_irq_disable();
>> +
>> +     if (need_resched()) {
>> +             cpuidle_coupled_set_not_waiting(dev, coupled);
>> +             goto out;
>> +     }
>> +
>> +     /*
>> +      * All coupled cpus are probably idle.  There is a small chance that
>> +      * one of the other cpus just became active.  Increment a counter when
>> +      * ready, and spin until all coupled cpus have incremented the counter.
>> +      * Once a cpu has incremented the counter, it cannot abort idle and must
>> +      * spin until either the count has hit alive_count, or another cpu
>> +      * leaves idle.
>> +      */
>> +
>> +     smp_mb__before_atomic_inc();
>> +     atomic_inc(&coupled->ready_count);
>> +     smp_mb__after_atomic_inc();
>
> It seems that at least one of these barriers is unnecessary ...
The first is to ensure ordering between ready_count and waiting_count;
the second is for ready_count vs. alive_count and requested_state.
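
(As an aside, value-returning atomics imply a full barrier on each side
per Documentation/memory-barriers.txt, so the explicit pair around this
increment could probably be folded into something like

	atomic_inc_return(&coupled->ready_count);

but that is a detail.)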

>> +     /* alive_count can't change while ready_count > 0 */
>> +     alive = atomic_read(&coupled->alive_count);
>> +     while (atomic_read(&coupled->ready_count) != alive) {
>> +             /* Check if any other cpus bailed out of idle. */
>> +             if (!cpuidle_coupled_cpus_waiting(coupled)) {
>> +                     atomic_dec(&coupled->ready_count);
>> +                     smp_mb__after_atomic_dec();
>> +                     goto retry;
>> +             }
>> +
>> +             cpu_relax();
>> +     }
>> +
>> +     /* all cpus have acked the coupled state */
>> +     smp_rmb();
>
> What is the barrier here for?
This protects ready_count vs. requested_state.  It is already
implicitly protected by the atomic_inc_return in set_waiting, but I
thought it would be better to protect it explicitly here.  I think I
added the smp_mb__after_atomic_inc above later, which makes this one
superfluous, so I'll drop it.

>> +
>> +     next_state = cpuidle_coupled_get_state(dev, coupled);
>> +
>> +     entered_state = cpuidle_enter_state(dev, drv, next_state);
>> +
>> +     cpuidle_coupled_set_not_waiting(dev, coupled);
>> +     atomic_dec(&coupled->ready_count);
>> +     smp_mb__after_atomic_dec();
>> +
>> +out:
>> +     /*
>> +      * Normal cpuidle states are expected to return with irqs enabled.
>> +      * That leads to an inefficiency where a cpu receiving an interrupt
>> +      * that brings it out of idle will process that interrupt before
>> +      * exiting the idle enter function and decrementing ready_count.  All
>> +      * other cpus will need to spin waiting for the cpu that is processing
>> +      * the interrupt.  If the driver returns with interrupts disabled,
>> +      * all other cpus will loop back into the safe idle state instead of
>> +      * spinning, saving power.
>> +      *
>> +      * Calling local_irq_enable here allows coupled states to return with
>> +      * interrupts disabled, but won't cause problems for drivers that
>> +      * exit with interrupts enabled.
>> +      */
>> +     local_irq_enable();
>> +
>> +     /*
>> +      * Wait until all coupled cpus have exited idle.  There is no risk that
>> +      * a cpu exits and re-enters the ready state because this cpu has
>> +      * already decremented its waiting_count.
>> +      */
>> +     while (atomic_read(&coupled->ready_count) != 0)
>> +             cpu_relax();
>> +
>> +     smp_rmb();
>
> And here?

This was to protect ready_count vs. looping back in and reading
alive_count.  There will be plenty of synchronization calls between
the two with implicit barriers, but I thought it was better to do it
explicitly.

>> +
>> +     return entered_state;
>> +}
>> +
>> +/**
>> + * cpuidle_coupled_register_device - register a coupled cpuidle device
>> + * @dev: struct cpuidle_device for the current cpu
>> + *
>> + * Called from cpuidle_register_device to handle coupled idle init.  Finds the
>> + * cpuidle_coupled struct for this set of coupled cpus, or creates one if none
>> + * exists yet.
>> + */
>> +int cpuidle_coupled_register_device(struct cpuidle_device *dev)
>> +{
>> +     int cpu;
>> +     struct cpuidle_device *other_dev;
>> +     struct call_single_data *csd;
>> +     struct cpuidle_coupled *coupled;
>> +
>> +     if (cpumask_empty(&dev->coupled_cpus))
>> +             return 0;
>> +
>> +     for_each_cpu_mask(cpu, dev->coupled_cpus) {
>> +             other_dev = per_cpu(cpuidle_devices, cpu);
>> +             if (other_dev && other_dev->coupled) {
>> +                     coupled = other_dev->coupled;
>> +                     goto have_coupled;
>> +             }
>> +     }
>> +
>> +     /* No existing coupled info found, create a new one */
>> +     coupled = kzalloc(sizeof(struct cpuidle_coupled), GFP_KERNEL);
>> +     if (!coupled)
>> +             return -ENOMEM;
>> +
>> +     coupled->coupled_cpus = dev->coupled_cpus;
>> +     for_each_cpu_mask(cpu, coupled->coupled_cpus)
>> +             coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
>> +
>> +have_coupled:
>> +     dev->coupled = coupled;
>> +     BUG_ON(!cpumask_equal(&dev->coupled_cpus, &coupled->coupled_cpus));
>> +
>> +     if (cpu_online(dev->cpu)) {
>> +             coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
>> +             atomic_inc(&coupled->alive_count);
>> +     }
>> +
>> +     coupled->refcnt++;
>> +
>> +     csd = &per_cpu(cpuidle_coupled_poke_cb, dev->cpu);
>> +     csd->func = cpuidle_coupled_poked;
>> +     csd->info = (void *)(unsigned long)dev->cpu;
>> +
>> +     return 0;
>> +}
>> +
>> +/**
>> + * cpuidle_coupled_unregister_device - unregister a coupled cpuidle device
>> + * @dev: struct cpuidle_device for the current cpu
>> + *
>> + * Called from cpuidle_unregister_device to tear down coupled idle.  Removes the
>> + * cpu from the coupled idle set, and frees the cpuidle_coupled_info struct if
>> + * this was the last cpu in the set.
>> + */
>> +void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
>> +{
>> +     struct cpuidle_coupled *coupled = dev->coupled;
>> +
>> +     if (cpumask_empty(&dev->coupled_cpus))
>> +             return;
>> +
>> +     if (--coupled->refcnt)
>> +             kfree(coupled);
>> +     dev->coupled = NULL;
>> +}
>> +
>> +/**
>> + * cpuidle_coupled_cpu_set_alive - adjust alive_count during hotplug transitions
>> + * @cpu: target cpu number
>> + * @alive: whether the target cpu is going up or down
>> + *
>> + * Run on the cpu that is bringing up the target cpu, before the target cpu
>> + * has been booted, or after the target cpu is completely dead.
>> + */
>> +static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
>> +{
>> +     struct cpuidle_device *dev;
>> +     struct cpuidle_coupled *coupled;
>> +
>> +     mutex_lock(&cpuidle_lock);
>> +
>> +     dev = per_cpu(cpuidle_devices, cpu);
>> +     if (!dev->coupled)
>> +             goto out;
>> +
>> +     coupled = dev->coupled;
>> +
>> +     /*
>> +      * waiting_count must be at least 1 less than alive_count, because
>> +      * this cpu is not waiting.  Spin until all cpus have noticed this cpu
>> +      * is not idle and exited the ready loop before changing alive_count.
>> +      */
>> +     while (atomic_read(&coupled->ready_count))
>> +             cpu_relax();
>> +
>> +     if (alive) {
>> +             smp_mb__before_atomic_inc();
>> +             atomic_inc(&coupled->alive_count);
>> +             smp_mb__after_atomic_inc();
>> +             coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
>> +     } else {
>> +             smp_mb__before_atomic_dec();
>> +             atomic_dec(&coupled->alive_count);
>> +             smp_mb__after_atomic_dec();
>> +             coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
>
> There are too many SMP barriers above, but I'm not quite sure which of them (if
> any) are really necessary.
The ones before order ready_count vs. alive_count; the ones after order
alive_count vs. requested_state and future waiting_count increments.

>> +     }
>> +
>> +out:
>> +     mutex_unlock(&cpuidle_lock);
>> +}
>> +
>> +/**
>> + * cpuidle_coupled_cpu_notify - notifier called during hotplug transitions
>> + * @nb: notifier block
>> + * @action: hotplug transition
>> + * @hcpu: target cpu number
>> + *
>> + * Called when a cpu is brought on or offline using hotplug.  Updates the
>> + * coupled cpu set appropriately
>> + */
>> +static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
>> +             unsigned long action, void *hcpu)
>> +{
>> +     int cpu = (unsigned long)hcpu;
>> +
>> +     switch (action & ~CPU_TASKS_FROZEN) {
>> +     case CPU_DEAD:
>> +     case CPU_UP_CANCELED:
>> +             cpuidle_coupled_cpu_set_alive(cpu, false);
>> +             break;
>> +     case CPU_UP_PREPARE:
>> +             cpuidle_coupled_cpu_set_alive(cpu, true);
>> +             break;
>> +     }
>> +     return NOTIFY_OK;
>> +}
>> +
>> +static struct notifier_block cpuidle_coupled_cpu_notifier = {
>> +     .notifier_call = cpuidle_coupled_cpu_notify,
>> +};
>> +
>> +static int __init cpuidle_coupled_init(void)
>> +{
>> +     return register_cpu_notifier(&cpuidle_coupled_cpu_notifier);
>> +}
>> +core_initcall(cpuidle_coupled_init);
>> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
>> index 4540672..e81cfda 100644
>> --- a/drivers/cpuidle/cpuidle.c
>> +++ b/drivers/cpuidle/cpuidle.c
>> @@ -171,7 +171,11 @@ int cpuidle_idle_call(void)
>>       trace_power_start_rcuidle(POWER_CSTATE, next_state, dev->cpu);
>>       trace_cpu_idle_rcuidle(next_state, dev->cpu);
>>
>> -     entered_state = cpuidle_enter_state(dev, drv, next_state);
>> +     if (cpuidle_state_is_coupled(dev, drv, next_state))
>> +             entered_state = cpuidle_enter_state_coupled(dev, drv,
>> +                                                         next_state);
>> +     else
>> +             entered_state = cpuidle_enter_state(dev, drv, next_state);
>>
>>       trace_power_end_rcuidle(dev->cpu);
>>       trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
>> @@ -407,9 +411,16 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
>>       if (ret)
>>               goto err_sysfs;
>>
>> +     ret = cpuidle_coupled_register_device(dev);
>> +     if (ret)
>> +             goto err_coupled;
>> +
>>       dev->registered = 1;
>>       return 0;
>>
>> +err_coupled:
>> +     cpuidle_remove_sysfs(cpu_dev);
>> +     wait_for_completion(&dev->kobj_unregister);
>>  err_sysfs:
>>       list_del(&dev->device_list);
>>       per_cpu(cpuidle_devices, dev->cpu) = NULL;
>> @@ -464,6 +475,8 @@ void cpuidle_unregister_device(struct cpuidle_device *dev)
>>       wait_for_completion(&dev->kobj_unregister);
>>       per_cpu(cpuidle_devices, dev->cpu) = NULL;
>>
>> +     cpuidle_coupled_unregister_device(dev);
>> +
>>       cpuidle_resume_and_unlock();
>>
>>       module_put(cpuidle_driver->owner);
>> diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
>> index d8a3ccc..76e7f69 100644
>> --- a/drivers/cpuidle/cpuidle.h
>> +++ b/drivers/cpuidle/cpuidle.h
>> @@ -32,4 +32,34 @@ extern int cpuidle_enter_state(struct cpuidle_device *dev,
>>  extern int cpuidle_add_sysfs(struct device *dev);
>>  extern void cpuidle_remove_sysfs(struct device *dev);
>>
>> +#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
>> +bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
>> +             struct cpuidle_driver *drv, int state);
>> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
>> +             struct cpuidle_driver *drv, int next_state);
>> +int cpuidle_coupled_register_device(struct cpuidle_device *dev);
>> +void cpuidle_coupled_unregister_device(struct cpuidle_device *dev);
>> +#else
>> +static inline bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
>> +             struct cpuidle_driver *drv, int state)
>> +{
>> +     return false;
>> +}
>> +
>> +static inline int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
>> +             struct cpuidle_driver *drv, int next_state)
>> +{
>> +     return -1;
>> +}
>> +
>> +static inline int cpuidle_coupled_register_device(struct cpuidle_device *dev)
>> +{
>> +     return 0;
>> +}
>> +
>> +static inline void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
>> +{
>> +}
>> +#endif
>> +
>>  #endif /* __DRIVER_CPUIDLE_H */
>> diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
>> index 6c26a3d..6038448 100644
>> --- a/include/linux/cpuidle.h
>> +++ b/include/linux/cpuidle.h
>> @@ -57,6 +57,7 @@ struct cpuidle_state {
>>
>>  /* Idle State Flags */
>>  #define CPUIDLE_FLAG_TIME_VALID      (0x01) /* is residency time measurable? */
>> +#define CPUIDLE_FLAG_COUPLED (0x02) /* state applies to multiple cpus */
>>
>>  #define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000)
>>
>> @@ -100,6 +101,12 @@ struct cpuidle_device {
>>       struct list_head        device_list;
>>       struct kobject          kobj;
>>       struct completion       kobj_unregister;
>> +
>> +#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
>> +     int                     safe_state_index;
>> +     cpumask_t               coupled_cpus;
>> +     struct cpuidle_coupled  *coupled;
>> +#endif
>>  };
>>
>>  DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices);
>
> Thanks,
> Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

>>
>> +err_coupled:
>> +     cpuidle_remove_sysfs(cpu_dev);
>> +     wait_for_completion(&dev->kobj_unregister);
>>  err_sysfs:
>>       list_del(&dev->device_list);
>>       per_cpu(cpuidle_devices, dev->cpu) = NULL;
>> @@ -464,6 +475,8 @@ void cpuidle_unregister_device(struct cpuidle_device *dev)
>>       wait_for_completion(&dev->kobj_unregister);
>>       per_cpu(cpuidle_devices, dev->cpu) = NULL;
>>
>> +     cpuidle_coupled_unregister_device(dev);
>> +
>>       cpuidle_resume_and_unlock();
>>
>>       module_put(cpuidle_driver->owner);
>> diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
>> index d8a3ccc..76e7f69 100644
>> --- a/drivers/cpuidle/cpuidle.h
>> +++ b/drivers/cpuidle/cpuidle.h
>> @@ -32,4 +32,34 @@ extern int cpuidle_enter_state(struct cpuidle_device *dev,
>>  extern int cpuidle_add_sysfs(struct device *dev);
>>  extern void cpuidle_remove_sysfs(struct device *dev);
>>
>> +#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
>> +bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
>> +             struct cpuidle_driver *drv, int state);
>> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
>> +             struct cpuidle_driver *drv, int next_state);
>> +int cpuidle_coupled_register_device(struct cpuidle_device *dev);
>> +void cpuidle_coupled_unregister_device(struct cpuidle_device *dev);
>> +#else
>> +static inline bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
>> +             struct cpuidle_driver *drv, int state)
>> +{
>> +     return false;
>> +}
>> +
>> +static inline int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
>> +             struct cpuidle_driver *drv, int next_state)
>> +{
>> +     return -1;
>> +}
>> +
>> +static inline int cpuidle_coupled_register_device(struct cpuidle_device *dev)
>> +{
>> +     return 0;
>> +}
>> +
>> +static inline void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
>> +{
>> +}
>> +#endif
>> +
>>  #endif /* __DRIVER_CPUIDLE_H */
>> diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
>> index 6c26a3d..6038448 100644
>> --- a/include/linux/cpuidle.h
>> +++ b/include/linux/cpuidle.h
>> @@ -57,6 +57,7 @@ struct cpuidle_state {
>>
>>  /* Idle State Flags */
>>  #define CPUIDLE_FLAG_TIME_VALID      (0x01) /* is residency time measurable? */
>> +#define CPUIDLE_FLAG_COUPLED (0x02) /* state applies to multiple cpus */
>>
>>  #define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000)
>>
>> @@ -100,6 +101,12 @@ struct cpuidle_device {
>>       struct list_head        device_list;
>>       struct kobject          kobj;
>>       struct completion       kobj_unregister;
>> +
>> +#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
>> +     int                     safe_state_index;
>> +     cpumask_t               coupled_cpus;
>> +     struct cpuidle_coupled  *coupled;
>> +#endif
>>  };
>>
>>  DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices);
>
> Thanks,
> Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 0/5] coupled cpuidle state support
  2012-05-03 20:00           ` Rafael J. Wysocki
@ 2012-05-04 10:04             ` Lorenzo Pieralisi
  -1 siblings, 0 replies; 78+ messages in thread
From: Lorenzo Pieralisi @ 2012-05-04 10:04 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Colin Cross, Kevin Hilman, Len Brown, Trinabh Gupta,
	Russell King, Daniel Lezcano, Deepthi Dharwar,
	Greg Kroah-Hartman, Kay Sievers, linux-kernel, Amit Kucheria,
	Santosh Shilimkar, linux-pm, Arjan van de Ven, Arnd Bergmann,
	linux-arm-kernel, Len Brown

On Thu, May 03, 2012 at 09:00:01PM +0100, Rafael J. Wysocki wrote:
> On Tuesday, May 01, 2012, Colin Cross wrote:
> > On Mon, Apr 30, 2012 at 2:54 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> > > On Monday, April 30, 2012, Colin Cross wrote:
> > >> On Mon, Apr 30, 2012 at 2:25 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:

[...]

> Having considered this for a while I think that it may be more straightforward
> to avoid waking up the already idled cores.
> 
> For instance, say we have 4 CPU cores in a cluster (package) such that each
> core has its own idle state (call it C1) and there is a multicore idle state
> entered by turning off the entire cluster (call this state C-multi).  One of
> the possible ways to handle this seems to be to use an identical table of
> C-states for each core containing the C1 entry and a kind of fake entry called
> (for example) C4 with the time characteristics of C-multi and a special
> .enter() callback.  That callback will prepare the core it is called for to
> enter C-multi, but instead of simply turning off the whole package it will
> decrement a counter.  If the counter happens to be 0 at this point, the
> package will be turned off.  Otherwise, the core will be put into the idle
> state corresponding to C1, but it will be ready for entering C-multi at
> any time. The counter will be incremented on exiting the C4 "state".
> 
> It looks like this should work without modifying the cpuidle core, but
> the drawback here is that the cpuidle core doesn't know how much time
> spent in C4 is really in C1 and how much of it is in C-multi, so the
> statistics reported by it won't reflect the real energy usage.

This is exactly what has been done on some ARM platforms with per-CPU power
rails ("C1" means shutdown here) that are completely symmetric (ie every
CPU can trigger cluster shutdown): a "C4" multi-CPU state with target_residency
equivalent to the power break-even point implied by cluster shutdown.

There are two issues with this approach. One you already mentioned.
The second is that CPUs go idle at different times. Hence, by the time the last
CPU calls C4.enter(), other CPUs in the package might already have a timer
that is about to expire. If we start cluster shutdown we are wasting power
(eg cache write-back to DDR) for nothing.

Colin and Santosh mentioned it already: we have to peek at the next event for
all CPUs in the package (or peek the broadcast global timer if CPUs rely
on it) and make a decision accordingly.
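
To make the counter idea concrete, a minimal sketch of that kind of "C4"
enter callback could look like this (purely illustrative, not from any
posted patch; cluster_power_down(), cpu_enter_c1() and NR_CLUSTER_CPUS are
invented names):

#include <linux/atomic.h>
#include <linux/cpuidle.h>

/* cores in this cluster that are not currently requesting C4 */
static atomic_t cores_awake = ATOMIC_INIT(NR_CLUSTER_CPUS);

static int c4_enter(struct cpuidle_device *dev,
		    struct cpuidle_driver *drv, int index)
{
	if (atomic_dec_return(&cores_awake) == 0)
		cluster_power_down();	/* last core in: turn the package off */
	else
		cpu_enter_c1();		/* others still up: per-core idle only */

	atomic_inc(&cores_awake);	/* this core is awake again */
	return index;
}

Even with something like the above, the timer problem remains: the last core
to call c4_enter() has no idea whether another core's next event is about to
fire.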

Lorenzo


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [linux-pm] [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus
  2012-05-03 23:09       ` Colin Cross
@ 2012-05-04 11:51         ` Rafael J. Wysocki
  -1 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-05-04 11:51 UTC (permalink / raw)
  To: Colin Cross
  Cc: linux-pm, linux-kernel, Kevin Hilman, Len Brown, Russell King,
	Greg Kroah-Hartman, Kay Sievers, Amit Kucheria, Arjan van de Ven,
	Arnd Bergmann, linux-arm-kernel

On Friday, May 04, 2012, Colin Cross wrote:
> On Thu, May 3, 2012 at 3:14 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
[...]
> 
> >> +/**
> >> + * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
> >> + * @coupled: the struct coupled that contains the current cpu
> >> + *
> >> + * Returns true if all cpus coupled to this target state are in the wait loop
> >> + */
> >> +static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
> >> +{
> >> +     int alive;
> >> +     int waiting;
> >> +
> >> +     /*
> >> +      * Read alive before reading waiting so a booting cpu is not treated as
> >> +      * idle
> >> +      */
> >
> > Well, the comment doesn't really explain much.  In particular, why the boot CPU
> > could be treated as idle if the reads were in a different order.
> 
> Hm, I think the race condition is on a cpu going down.  What about:
> Read alive before reading waiting.  If waiting is read before alive,
> this cpu could see another cpu as waiting just before it goes offline,
> between when the other cpu decrements waiting and when it
> decrements alive, which could cause alive == waiting when one cpu is
> not waiting.

Reading them in this particular order doesn't stop the race, though.  I mean,
if the hotplug happens just right after you've read alive_count, you still have
a wrong value.  waiting_count is set independently, it seems, so there's no
ordering between the two on the "store" side, and the "load" side ordering
doesn't matter.

I would just make the CPU hotplug notifier routine block until
cpuidle_enter_state_coupled() is done and the latter return immediately
if the CPU hotplug notifier routine is in progress, perhaps falling back
to the safe state.  Or I would make the CPU hotplug notifier routine
disable the "coupled cpuidle" entirely on DOWN_PREPARE and UP_PREPARE
and only re-enable it after the hotplug has been completed.
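
Roughly something like this, say (just a sketch; cpuidle_coupled_pause() and
cpuidle_coupled_resume() are hypothetical helpers that would make
cpuidle_enter_state_coupled() fall back to the safe state while paused):

static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
		unsigned long action, void *hcpu)
{
	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_UP_PREPARE:
	case CPU_DOWN_PREPARE:
		/* keep coupled idle out of the way for the whole transition */
		cpuidle_coupled_pause();
		break;
	case CPU_ONLINE:
	case CPU_UP_CANCELED:
	case CPU_DEAD:
	case CPU_DOWN_FAILED:
		cpuidle_coupled_resume();
		break;
	}
	return NOTIFY_OK;
}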

> >> +     alive = atomic_read(&coupled->alive_count);
> >> +     smp_rmb();
> >> +     waiting = atomic_read(&coupled->waiting_count);
> >
> > Have you considered using one atomic variable to accommodate both counters
> > such that the upper half contains one counter and the lower half contains
> > the other?
> 
> There are 3 counters (alive, waiting, and ready).  Do you want me to
> squish all of them into a single atomic_t, which would limit to 1023
> cpus?

No.  I'd make sure that cpuidle_enter_state_coupled() didn't race with CPU
hotplug, so as to make alive_count stable from its standpoint, and I'd
put the two remaining counters into one atomic_t variable.
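
Say, something like this sketch (the 16-bit split and the helper names are
made up for illustration):

#define WAITING_BITS	16
#define MAX_WAITING	(1 << WAITING_BITS)
#define WAITING_MASK	(MAX_WAITING - 1)

/* low 16 bits: waiting cpus, high 16 bits: ready cpus */
static atomic_t ready_waiting_counts = ATOMIC_INIT(0);

/* decode a snapshot taken with a single atomic_read() */
static inline int counts_waiting(int counts)
{
	return counts & WAITING_MASK;
}

static inline int counts_ready(int counts)
{
	return counts >> WAITING_BITS;
}

/* one more cpu enters the wait loop */
static inline void coupled_inc_waiting(void)
{
	atomic_inc(&ready_waiting_counts);
}

/* a waiting cpu becomes ready */
static inline void coupled_set_ready(void)
{
	atomic_add(MAX_WAITING, &ready_waiting_counts);
}

Then both values always come from one atomic_read() snapshot, which takes
care of the read-side ordering between them.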

> >> +
> >> +     return (waiting == alive);
> >> +}
> >> +
> >> +/**
> >> + * cpuidle_coupled_get_state - determine the deepest idle state
> >> + * @dev: struct cpuidle_device for this cpu
> >> + * @coupled: the struct coupled that contains the current cpu
> >> + *
> >> + * Returns the deepest idle state that all coupled cpus can enter
> >> + */
> >> +static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
> >> +             struct cpuidle_coupled *coupled)
> >> +{
> >> +     int i;
> >> +     int state = INT_MAX;
> >> +
> >> +     for_each_cpu_mask(i, coupled->coupled_cpus)
> >> +             if (coupled->requested_state[i] != CPUIDLE_COUPLED_DEAD &&
> >> +                 coupled->requested_state[i] < state)
> >> +                     state = coupled->requested_state[i];
> >> +
> >> +     BUG_ON(state >= dev->state_count || state < 0);
> >
> > Do you have to crash the kernel here if the assertion doesn't hold?  Maybe
> > you could use WARN_ON() and return error code?
> 
> If this BUG_ON is hit, there is a race condition somewhere that
> allowed a cpu out of idle unexpectedly, and there is no way to recover
> without more race conditions.  I don't expect this to ever happen, it
> is mostly there to detect race conditions during development.  Should
> I drop it completely?

I would just drop it, then, in the final respin of the patch.

[...]
> >> +{
> >> +     int alive;
> >> +
> >> +     BUG_ON(coupled->requested_state[dev->cpu] >= 0);
> >
> > Would be WARN_ON() + do nothing too dangerous here?
> 
> If this BUG_ON is hit, then this cpu exited idle without clearing its
> waiting state, which could cause another cpu to enter the deeper idle
> state while this cpu is still running.  The counters would be out of
> sync, so it's not easy to recover.  Again, this is to detect race
> conditions during development, but should never happen.  Should I drop
> it?

Just like above.

> >> +
> >> +     coupled->requested_state[dev->cpu] = next_state;
> >> +
> >> +     /*
> >> +      * If this is the last cpu to enter the waiting state, poke
> >> +      * all the other cpus out of their waiting state so they can
> >> +      * enter a deeper state.  This can race with one of the cpus
> >> +      * exiting the waiting state due to an interrupt and
> >> +      * decrementing waiting_count, see comment below.
> >> +      */
> >> +     alive = atomic_read(&coupled->alive_count);
> >> +     if (atomic_inc_return(&coupled->waiting_count) == alive)
> >> +             cpuidle_coupled_poke_others(dev, coupled);
> >> +}
> >> +
> >> +/**
> >> + * cpuidle_coupled_set_not_waiting - mark this cpu as leaving the wait loop
> >> + * @dev: struct cpuidle_device for this cpu
> >> + * @coupled: the struct coupled that contains the current cpu
> >> + *
> >> + * Removes the requested idle state for the specified cpuidle device.
> >> + *
> >> + * Provides memory ordering around waiting_count.
> >> + */
> >> +static void cpuidle_coupled_set_not_waiting(struct cpuidle_device *dev,
> >> +             struct cpuidle_coupled *coupled)
> >
> > It looks like dev doesn't have to be passed here, cpu would be enough.
> >
> >> +{
> >> +     BUG_ON(coupled->requested_state[dev->cpu] < 0);
> >
> > Well, like above?
> Same as above.

Ditto. :-)

> >> +
> >> +     /*
> >> +      * Decrementing waiting_count can race with incrementing it in
> >> +      * cpuidle_coupled_set_waiting, but that's OK.  Worst case, some
> >> +      * cpus will increment ready_count and then spin until they
> >> +      * notice that this cpu has cleared it's requested_state.
> >> +      */
> >
> > So it looks like having ready_count and waiting_count in one atomic variable
> > can spare us this particular race condition.
> As above, there are 3 counters here, alive, ready, and waiting.

Please refer to my comment about that above.

> >> +
> >> +     smp_mb__before_atomic_dec();
> >> +     atomic_dec(&coupled->waiting_count);
> >> +     smp_mb__after_atomic_dec();
> >
> > Do you really need both the before and after barriers here?  If so, then why?
> 
> I believe so, waiting is ordered vs. alive and ready, one barrier is
> for each.  Do you want the answers to these questions here or in the
> code?  I had comments for every barrier use during development, but it
> made it too hard to follow the flow of the code.  I could add a
> comment describing the ordering requirements instead, but it's still
> hard to translate that to the required barrier locations.

Well, the barriers should be commented in the code, for the sake of people
reading it and wanting to learn from it if nothing else.

Wherever we put an SMP barrier directly like this, there should be a good
reason for that and it should be documented.
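
For example, something along these lines is what I mean (the particular
barrier and pairing below are made up for illustration, not a claim about
what this patch needs where):

	/*
	 * Make this cpu's requested_state visible before waiting_count
	 * is incremented, so that a cpu which observes the new
	 * waiting_count also observes the state requested here.  Pairs
	 * with the smp_rmb() between the waiting_count and
	 * requested_state reads on the reader side.
	 */
	smp_wmb();
	atomic_inc(&coupled->waiting_count);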

[...]
> >> + */
> >> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> >> +             struct cpuidle_driver *drv, int next_state)
> >> +{
> >> +     int entered_state = -1;
> >> +     struct cpuidle_coupled *coupled = dev->coupled;
> >> +     int alive;
> >> +
> >> +     if (!coupled)
> >> +             return -EINVAL;
> >> +
> >> +     BUG_ON(atomic_read(&coupled->ready_count));
> >
> > Again, I'd do a WARN_ON() and return error code from here (to avoid crashing
> > the kernel).
> Same as above, if ready_count is not 0 here then the counters are out
> of sync and something is about to go horribly wrong, like cutting
> power to a running cpu.

OK

> >> +     cpuidle_coupled_set_waiting(dev, coupled, next_state);
> >> +
> >> +retry:
> >> +     /*
> >> +      * Wait for all coupled cpus to be idle, using the deepest state
> >> +      * allowed for a single cpu.
> >> +      */
> >> +     while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
> >> +             entered_state = cpuidle_enter_state(dev, drv,
> >> +                     dev->safe_state_index);
> >> +
> >> +             local_irq_enable();
> >> +             while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
> >> +                     cpu_relax();
> >
> > Hmm.  What exactly is this loop supposed to achieve?
> This is to ensure that the outstanding wakeups have been processed so
> we don't go to idle with an interrupt pending and immediately wake up.

I see.  Is it actually safe to reenable interrupts at this point, though?

> >> +             local_irq_disable();
> >
> > Anyway, you seem to be calling it twice along with this enabling/disabling of
> > interrupts.  I'd put that into a separate function and explain its role in a
> > kerneldoc comment.
> 
> I left it here to be obvious that I was enabling interrupts in the
> idle path, but I can refactor it out if you prefer.

Well, you can call the function to make it obvious. :-)

Anyway, I think that code duplication is a worse thing than a reasonable
amount of non-obviousness, so to speak.
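
Something as simple as this would do (sketch only, the name is just a
suggestion):

/**
 * cpuidle_coupled_handle_pokes - let pending poke IPIs be processed
 * @cpu: the current cpu
 *
 * Briefly enables interrupts so an outstanding poke IPI can run, then spins
 * until this cpu's bit is cleared from the poked mask.  Must be called with
 * interrupts disabled, returns with interrupts disabled.
 */
static void cpuidle_coupled_handle_pokes(int cpu)
{
	local_irq_enable();
	while (cpumask_test_cpu(cpu, &cpuidle_coupled_poked_mask))
		cpu_relax();
	local_irq_disable();
}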

> >> +     }
> >> +
> >> +     /* give a chance to process any remaining pokes */
> >> +     local_irq_enable();
> >> +     while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
> >> +             cpu_relax();
> >> +     local_irq_disable();
> >> +
> >> +     if (need_resched()) {
> >> +             cpuidle_coupled_set_not_waiting(dev, coupled);
> >> +             goto out;
> >> +     }
> >> +
> >> +     /*
> >> +      * All coupled cpus are probably idle.  There is a small chance that
> >> +      * one of the other cpus just became active.  Increment a counter when
> >> +      * ready, and spin until all coupled cpus have incremented the counter.
> >> +      * Once a cpu has incremented the counter, it cannot abort idle and must
> >> +      * spin until either the count has hit alive_count, or another cpu
> >> +      * leaves idle.
> >> +      */
> >> +
> >> +     smp_mb__before_atomic_inc();
> >> +     atomic_inc(&coupled->ready_count);
> >> +     smp_mb__after_atomic_inc();
> >
> > It seems that at least one of these barriers is unnecessary ...
> The first is to ensure ordering between ready_count and waiting_count,

Are you afraid that the test against waiting_count from
cpuidle_coupled_cpus_waiting() may get reordered after the incrementation
of ready_count or is it something else?

> the second is for ready_count vs. alive_count and requested_state.

This one I can understand, but ...

> >> +     /* alive_count can't change while ready_count > 0 */
> >> +     alive = atomic_read(&coupled->alive_count);

What happens if CPU hotplug happens right here?

> >> +     while (atomic_read(&coupled->ready_count) != alive) {
> >> +             /* Check if any other cpus bailed out of idle. */
> >> +             if (!cpuidle_coupled_cpus_waiting(coupled)) {
> >> +                     atomic_dec(&coupled->ready_count);
> >> +                     smp_mb__after_atomic_dec();

And the barrier here?  Even if the old value of ready_count leaks into
the while () loop after retry, that doesn't seem to matter.

> >> +                     goto retry;
> >> +             }
> >> +
> >> +             cpu_relax();
> >> +     }
> >> +
> >> +     /* all cpus have acked the coupled state */
> >> +     smp_rmb();
> >
> > What is the barrier here for?
> This protects ready_count vs. requested_state.  It is already
> implicitly protected by the atomic_inc_return in set_waiting, but I
> thought it would be better to protect it explicitly here.  I think I
> added the smp_mb__after_atomic_inc above later, which makes this one
> superfluous, so I'll drop it.

OK

> >> +
> >> +     next_state = cpuidle_coupled_get_state(dev, coupled);
> >> +
> >> +     entered_state = cpuidle_enter_state(dev, drv, next_state);
> >> +
> >> +     cpuidle_coupled_set_not_waiting(dev, coupled);
> >> +     atomic_dec(&coupled->ready_count);
> >> +     smp_mb__after_atomic_dec();
> >> +
> >> +out:
> >> +     /*
> >> +      * Normal cpuidle states are expected to return with irqs enabled.
> >> +      * That leads to an inefficiency where a cpu receiving an interrupt
> >> +      * that brings it out of idle will process that interrupt before
> >> +      * exiting the idle enter function and decrementing ready_count.  All
> >> +      * other cpus will need to spin waiting for the cpu that is processing
> >> +      * the interrupt.  If the driver returns with interrupts disabled,
> >> +      * all other cpus will loop back into the safe idle state instead of
> >> +      * spinning, saving power.
> >> +      *
> >> +      * Calling local_irq_enable here allows coupled states to return with
> >> +      * interrupts disabled, but won't cause problems for drivers that
> >> +      * exit with interrupts enabled.
> >> +      */
> >> +     local_irq_enable();
> >> +
> >> +     /*
> >> +      * Wait until all coupled cpus have exited idle.  There is no risk that
> >> +      * a cpu exits and re-enters the ready state because this cpu has
> >> +      * already decremented its waiting_count.
> >> +      */
> >> +     while (atomic_read(&coupled->ready_count) != 0)
> >> +             cpu_relax();
> >> +
> >> +     smp_rmb();
> >
> > And here?
> 
> This was to protect ready_count vs. looping back in and reading
> alive_count.

Well, I'm lost. :-)

You've not modified anything after the previous smp_mb__after_atomic_dec(),
so what exactly is the reordering this is supposed to work against?

And while we're at it, I'm not quite sure what things the previous
smp_mb__after_atomic_dec() is supposed to separate from each other.

> There will be plenty of synchronization calls between
> the two with implicit barriers, but I thought it was better to do it
> explicitly.

[...]
> >> +static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
> >> +{
> >> +     struct cpuidle_device *dev;
> >> +     struct cpuidle_coupled *coupled;
> >> +
> >> +     mutex_lock(&cpuidle_lock);
> >> +
> >> +     dev = per_cpu(cpuidle_devices, cpu);
> >> +     if (!dev->coupled)
> >> +             goto out;
> >> +
> >> +     coupled = dev->coupled;
> >> +
> >> +     /*
> >> +      * waiting_count must be at least 1 less than alive_count, because
> >> +      * this cpu is not waiting.  Spin until all cpus have noticed this cpu
> >> +      * is not idle and exited the ready loop before changing alive_count.
> >> +      */
> >> +     while (atomic_read(&coupled->ready_count))
> >> +             cpu_relax();
> >> +
> >> +     if (alive) {
> >> +             smp_mb__before_atomic_inc();
> >> +             atomic_inc(&coupled->alive_count);
> >> +             smp_mb__after_atomic_inc();
> >> +             coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
> >> +     } else {
> >> +             smp_mb__before_atomic_dec();
> >> +             atomic_dec(&coupled->alive_count);
> >> +             smp_mb__after_atomic_dec();
> >> +             coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
> >
> > There's too many SMP barriers above, but I'm not quite sure which of them (if
> > any) are really necessary.
> The ones before order ready_count vs alive_count, the ones after order
> alive_count vs. requested_state and future waiting_count increments.

Well, so what are the matching barriers for these?

Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [linux-pm] [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus
  2012-05-04 11:51         ` Rafael J. Wysocki
@ 2012-05-04 18:56           ` Colin Cross
  -1 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-05-04 18:56 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: linux-pm, linux-kernel, Kevin Hilman, Len Brown, Russell King,
	Greg Kroah-Hartman, Kay Sievers, Amit Kucheria, Arjan van de Ven,
	Arnd Bergmann, linux-arm-kernel

On Fri, May 4, 2012 at 4:51 AM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> On Friday, May 04, 2012, Colin Cross wrote:
>> On Thu, May 3, 2012 at 3:14 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> [...]
>>
>> >> +/**
>> >> + * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
>> >> + * @coupled: the struct coupled that contains the current cpu
>> >> + *
>> >> + * Returns true if all cpus coupled to this target state are in the wait loop
>> >> + */
>> >> +static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
>> >> +{
>> >> +     int alive;
>> >> +     int waiting;
>> >> +
>> >> +     /*
>> >> +      * Read alive before reading waiting so a booting cpu is not treated as
>> >> +      * idle
>> >> +      */
>> >
>> > Well, the comment doesn't really explain much.  In particular, why the boot CPU
>> > could be treated as idle if the reads were in a different order.
>>
>> Hm, I think the race condition is on a cpu going down.  What about:
>> Read alive before reading waiting.  If waiting is read before alive,
>> this cpu could see another cpu as waiting just before it goes offline,
>> between when the other cpu decrements waiting and when it
>> decrements alive, which could cause alive == waiting when one cpu is
>> not waiting.
>
> Reading them in this particular order doesn't stop the race, though.  I mean,
> if the hotplug happens just right after you've read alive_count, you still have
> a wrong value.  waiting_count is set independently, it seems, so there's no
> ordering between the two on the "store" side and the "load" side ordering
> doesn't matter.

As commented in the hotplug path, hotplug relies on the fact that one
of the cpus in the cluster is involved in the hotplug of the cpu that
is changing (this may not be true for multiple clusters, but it is
easy to fix by IPI-ing to a cpu that is in the same cluster when that
happens).  That means that waiting count is always guaranteed to be at
least 1 less than alive count when alive count changes.  All this read
ordering needs to do is make sure that this cpu doesn't see
waiting_count == alive_count by reading them in the wrong order.
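
For anyone following the barrier discussion, the pairing Colin is
describing can be spelled out as a sketch (illustrative only: in the
posted patch the two decrements live in different functions, and this
does not address the hotplug window Rafael points out above):

static void example_offline_side(struct cpuidle_coupled *coupled)
{
        atomic_dec(&coupled->waiting_count);  /* stop counting as waiting */
        smp_mb__after_atomic_dec();           /* order the two decrements */
        atomic_dec(&coupled->alive_count);    /* then stop counting as alive */
}

static bool example_reader_side(struct cpuidle_coupled *coupled)
{
        int alive, waiting;

        alive = atomic_read(&coupled->alive_count);
        smp_rmb();                            /* pairs with the barrier above */
        waiting = atomic_read(&coupled->waiting_count);

        /* seeing the new alive_count implies seeing the new waiting_count */
        return waiting == alive;
}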

> I would just make the CPU hotplug notifier routine block until
> cpuidle_enter_state_coupled() is done and the latter return immediately
> if the CPU hotplug notifier routine is in progress, perhaps falling back
> to the safe state.  Or I would make the CPU hotplug notifier routine
> disable the "coupled cpuidle" entirely on DOWN_PREPARE and UP_PREPARE
> and only re-enable it after the hotplug has been completed.

I'll take a look at disabling coupled idle completely during hotplug.
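
A minimal sketch of the kind of notifier being suggested here, using
the cpu notifier API of this era; cpuidle_coupled_prevent_idle() and
cpuidle_coupled_allow_idle() are hypothetical helper names, not taken
from the posted series:

static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
                unsigned long action, void *hcpu)
{
        switch (action & ~CPU_TASKS_FROZEN) {
        case CPU_UP_PREPARE:
        case CPU_DOWN_PREPARE:
                /* keep cpus out of coupled states while hotplug runs */
                cpuidle_coupled_prevent_idle();
                break;
        case CPU_ONLINE:
        case CPU_UP_CANCELED:
        case CPU_DEAD:
        case CPU_DOWN_FAILED:
                /* hotplug finished or aborted; allow coupled idle again */
                cpuidle_coupled_allow_idle();
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block cpuidle_coupled_cpu_notifier = {
        .notifier_call = cpuidle_coupled_cpu_notify,
};

Registering this once at init time with
register_cpu_notifier(&cpuidle_coupled_cpu_notifier) is what would make
alive_count stable from cpuidle_enter_state_coupled()'s point of view,
as suggested above.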

>> >> +     alive = atomic_read(&coupled->alive_count);
>> >> +     smp_rmb();
>> >> +     waiting = atomic_read(&coupled->waiting_count);
>> >
>> > Have you considered using one atomic variable to accommodate both counters
>> > such that the upper half contains one counter and the lower half contains
>> > the other?
>>
>> There are 3 counters (alive, waiting, and ready).  Do you want me to
>> squish all of them into a single atomic_t, which would limit to 1023
>> cpus?
>
> No.  I'd make sure that cpuidle_enter_state_coupled() didn't race with CPU
> hotplug, so as to make alive_count stable from its standpoint, and I'd
> put the two remaining counters into one atomic_t variable.

I'll take a look at using a single atomic_t.  My initial worry was
that the increased contention on the shared variable would cause more
cmpxchg retries, but since waiting_count and ready_count are designed
to be modified in sequential phases that shouldn't be an issue.
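
As a rough illustration of that direction (the field and helper names
below are made up for the example, not taken from the patch), the two
phase counters could share one atomic_t so that they are always read
and updated together:

#define COUPLED_WAITING_BITS    16
#define COUPLED_WAITING_MASK    ((1 << COUPLED_WAITING_BITS) - 1)
#define COUPLED_READY_ONE       (1 << COUPLED_WAITING_BITS)

/* assumes struct cpuidle_coupled gains: atomic_t ready_waiting_counts; */

static inline int coupled_waiting_count(struct cpuidle_coupled *coupled)
{
        return atomic_read(&coupled->ready_waiting_counts) &
                        COUPLED_WAITING_MASK;
}

static inline int coupled_ready_count(struct cpuidle_coupled *coupled)
{
        return atomic_read(&coupled->ready_waiting_counts) >>
                        COUPLED_WAITING_BITS;
}

static inline void coupled_set_ready(struct cpuidle_coupled *coupled)
{
        /*
         * Both counts live in one word, so any reader always sees a
         * consistent snapshot of waiting and ready together.
         */
        atomic_add(COUPLED_READY_ONE, &coupled->ready_waiting_counts);
}

With a layout like this, a check such as "everyone is waiting and
nobody is ready yet" becomes a single atomic_read(), which is why many
of the explicit barriers questioned in this thread become academic.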

>> >> +
>> >> +     return (waiting == alive);
>> >> +}
>> >> +
>> >> +/**
>> >> + * cpuidle_coupled_get_state - determine the deepest idle state
>> >> + * @dev: struct cpuidle_device for this cpu
>> >> + * @coupled: the struct coupled that contains the current cpu
>> >> + *
>> >> + * Returns the deepest idle state that all coupled cpus can enter
>> >> + */
>> >> +static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
>> >> +             struct cpuidle_coupled *coupled)
>> >> +{
>> >> +     int i;
>> >> +     int state = INT_MAX;
>> >> +
>> >> +     for_each_cpu_mask(i, coupled->coupled_cpus)
>> >> +             if (coupled->requested_state[i] != CPUIDLE_COUPLED_DEAD &&
>> >> +                 coupled->requested_state[i] < state)
>> >> +                     state = coupled->requested_state[i];
>> >> +
>> >> +     BUG_ON(state >= dev->state_count || state < 0);
>> >
>> > Do you have to crash the kernel here if the assertion doesn't hold?  Maybe
>> > you could use WARN_ON() and return error code?
>>
>> If this BUG_ON is hit, there is a race condition somewhere that
>> allowed a cpu out of idle unexpectedly, and there is no way to recover
>> without more race conditions.  I don't expect this to ever happen, it
>> is mostly there to detect race conditions during development.  Should
>> I drop it completely?
>
> I would just drop it, then, in the final respin of the patch.
>
> [...]
>> >> +{
>> >> +     int alive;
>> >> +
>> >> +     BUG_ON(coupled->requested_state[dev->cpu] >= 0);
>> >
>> > Would be WARN_ON() + do nothing too dangerous here?
>>
>> If this BUG_ON is hit, then this cpu exited idle without clearing its
>> waiting state, which could cause another cpu to enter the deeper idle
>> state while this cpu is still running.  The counters would be out of
>> sync, so it's not easy to recover.  Again, this is to detect race
>> conditions during development, but should never happen.  Should I drop
>> it?
>
> Just like above.
>
>> >> +
>> >> +     coupled->requested_state[dev->cpu] = next_state;
>> >> +
>> >> +     /*
>> >> +      * If this is the last cpu to enter the waiting state, poke
>> >> +      * all the other cpus out of their waiting state so they can
>> >> +      * enter a deeper state.  This can race with one of the cpus
>> >> +      * exiting the waiting state due to an interrupt and
>> >> +      * decrementing waiting_count, see comment below.
>> >> +      */
>> >> +     alive = atomic_read(&coupled->alive_count);
>> >> +     if (atomic_inc_return(&coupled->waiting_count) == alive)
>> >> +             cpuidle_coupled_poke_others(dev, coupled);
>> >> +}
>> >> +
>> >> +/**
>> >> + * cpuidle_coupled_set_not_waiting - mark this cpu as leaving the wait loop
>> >> + * @dev: struct cpuidle_device for this cpu
>> >> + * @coupled: the struct coupled that contains the current cpu
>> >> + *
>> >> + * Removes the requested idle state for the specified cpuidle device.
>> >> + *
>> >> + * Provides memory ordering around waiting_count.
>> >> + */
>> >> +static void cpuidle_coupled_set_not_waiting(struct cpuidle_device *dev,
>> >> +             struct cpuidle_coupled *coupled)
>> >
>> > It looks like dev doesn't have to be passed here, cpu would be enough.
>> >
>> >> +{
>> >> +     BUG_ON(coupled->requested_state[dev->cpu] < 0);
>> >
>> > Well, like above?
>> Same as above.
>
> Ditto. :-)
>
>> >> +
>> >> +     /*
>> >> +      * Decrementing waiting_count can race with incrementing it in
>> >> +      * cpuidle_coupled_set_waiting, but that's OK.  Worst case, some
>> >> +      * cpus will increment ready_count and then spin until they
>> >> +      * notice that this cpu has cleared it's requested_state.
>> >> +      */
>> >
>> > So it looks like having ready_count and waiting_count in one atomic variable
>> > can spare us this particular race condition.
>> As above, there are 3 counters here, alive, ready, and waiting.
>
> Please refer to my comment about that above.
>
>> >> +
>> >> +     smp_mb__before_atomic_dec();
>> >> +     atomic_dec(&coupled->waiting_count);
>> >> +     smp_mb__after_atomic_dec();
>> >
>> > Do you really need both the before and after barriers here?  If so, then why?
>>
>> I believe so, waiting is ordered vs. alive and ready, one barrier is
>> for each.  Do you want the answers to these questions here or in the
>> code?  I had comments for every barrier use during development, but it
>> made it too hard to follow the flow of the code.  I could add a
>> comment describing the ordering requirements instead, but it's still
>> hard to translate that to the required barrier locations.
>
> Well, the barriers should be commented in the code, for the sake of people
> reading it and wanting to learn from it if nothing else.
>
> Wherever we put an SMP barrier directly like this, there should be a good
> reason for that and it should be documented.
>
> [...]
>> >> + */
>> >> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
>> >> +             struct cpuidle_driver *drv, int next_state)
>> >> +{
>> >> +     int entered_state = -1;
>> >> +     struct cpuidle_coupled *coupled = dev->coupled;
>> >> +     int alive;
>> >> +
>> >> +     if (!coupled)
>> >> +             return -EINVAL;
>> >> +
>> >> +     BUG_ON(atomic_read(&coupled->ready_count));
>> >
>> > Again, I'd do a WARN_ON() and return error code from here (to avoid crashing
>> > the kernel).
>> Same as above, if ready_count is not 0 here then the counters are out
>> of sync and something is about to go horribly wrong, like cutting
>> power to a running cpu.
>
> OK
>
>> >> +     cpuidle_coupled_set_waiting(dev, coupled, next_state);
>> >> +
>> >> +retry:
>> >> +     /*
>> >> +      * Wait for all coupled cpus to be idle, using the deepest state
>> >> +      * allowed for a single cpu.
>> >> +      */
>> >> +     while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
>> >> +             entered_state = cpuidle_enter_state(dev, drv,
>> >> +                     dev->safe_state_index);
>> >> +
>> >> +             local_irq_enable();
>> >> +             while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
>> >> +                     cpu_relax();
>> >
>> > Hmm.  What exactly is this loop supposed to achieve?
>> This is to ensure that the outstanding wakeups have been processed so
>> we don't go to idle with an interrupt pending and immediately wake up.
>
> I see.  Is it actually safe to reenable interrupts at this point, though?

I think so.  The normal idle loop will enable interrupts in a similar
fashion to what happens here.  There are two things to worry about: a
processed interrupt causing work to be scheduled that should bring
this cpu out of idle, or changing the next timer which would
invalidate the current requested state.  The first is handled by
checking need_resched() after interrupts are disabled again.  The
second is currently unhandled, but it does not affect correct
operation; it just races into a less-than-optimal idle state.
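
The enable/spin/disable sequence under discussion, pulled out into the
helper Rafael asks for, might look like this (the function name here is
just an illustration):

/*
 * Briefly re-enable interrupts so that any pending poke IPI is taken
 * and clears this cpu's bit in cpuidle_coupled_poked_mask, then
 * disable them again before deciding whether to re-enter idle.
 */
static void cpuidle_coupled_handle_poke(struct cpuidle_device *dev)
{
        local_irq_enable();
        while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
                cpu_relax();
        local_irq_disable();
}

Both call sites in cpuidle_enter_state_coupled() could then use it,
keeping the fact that interrupts are enabled in the idle path visible
in one documented place.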

>> >> +             local_irq_disable();
>> >
>> > Anyway, you seem to be calling it twice along with this enabling/disabling of
>> > interrupts.  I'd put that into a separate function and explain its role in a
>> > kerneldoc comment.
>>
>> I left it here to be obvious that I was enabling interrupts in the
>> idle path, but I can refactor it out if you prefer.
>
> Well, you can call the function to make it obvious. :-)
>
> Anyway, I think that code duplication is a worse thing than a reasonable
> amount of non-obviousness, so to speak.
>
>> >> +     }
>> >> +
>> >> +     /* give a chance to process any remaining pokes */
>> >> +     local_irq_enable();
>> >> +     while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
>> >> +             cpu_relax();
>> >> +     local_irq_disable();
>> >> +
>> >> +     if (need_resched()) {
>> >> +             cpuidle_coupled_set_not_waiting(dev, coupled);
>> >> +             goto out;
>> >> +     }
>> >> +
>> >> +     /*
>> >> +      * All coupled cpus are probably idle.  There is a small chance that
>> >> +      * one of the other cpus just became active.  Increment a counter when
>> >> +      * ready, and spin until all coupled cpus have incremented the counter.
>> >> +      * Once a cpu has incremented the counter, it cannot abort idle and must
>> >> +      * spin until either the count has hit alive_count, or another cpu
>> >> +      * leaves idle.
>> >> +      */
>> >> +
>> >> +     smp_mb__before_atomic_inc();
>> >> +     atomic_inc(&coupled->ready_count);
>> >> +     smp_mb__after_atomic_inc();
>> >
>> > It seems that at least one of these barriers is unnecessary ...
>> The first is to ensure ordering between ready_count and waiting count,
>
> Are you afraid that the test against waiting_count from
> cpuidle_coupled_cpus_waiting() may get reordered after the incrementation
> of ready_count or is it something else?

Yes, ready_count must not be incremented before waiting_count == alive_count.

>> the second is for ready_count vs. alive_count and requested_state.
>
> This one I can understand, but ...
>
>> >> +     /* alive_count can't change while ready_count > 0 */
>> >> +     alive = atomic_read(&coupled->alive_count);
>
> What happens if CPU hotplug happens right here?

According to the comment above that line, that can't happen:
alive_count can't change while ready_count > 0, because that implies
that all cpus are waiting and none can be in the hotplug path where
alive_count is changed.  Looking at it again, that is not entirely
true: alive_count could change on systems with >2 cpus, but I think it
can't cause an issue because alive_count would be 2 greater than
waiting_count before alive_count was changed.  Either way, it will be
fixed by disabling coupled idle during hotplug.

>> >> +     while (atomic_read(&coupled->ready_count) != alive) {
>> >> +             /* Check if any other cpus bailed out of idle. */
>> >> +             if (!cpuidle_coupled_cpus_waiting(coupled)) {
>> >> +                     atomic_dec(&coupled->ready_count);
>> >> +                     smp_mb__after_atomic_dec();
>
> And the barrier here?  Even if the old value of ready_count leaks into
> the while () loop after retry, that doesn't seem to matter.

All of these will be academic if ready_count and waiting_count share
an atomic_t.
waiting_count must not be decremented (by exiting the while loop after
the retry label) until ready_count has been decremented here, but that
is also protected by the barrier in set_not_waiting, so one of the two
barriers could be dropped.

>> >> +                     goto retry;
>> >> +             }
>> >> +
>> >> +             cpu_relax();
>> >> +     }
>> >> +
>> >> +     /* all cpus have acked the coupled state */
>> >> +     smp_rmb();
>> >
>> > What is the barrier here for?
>> This protects ready_count vs. requested_state.  It is already
>> implicitly protected by the atomic_inc_return in set_waiting, but I
>> thought it would be better to protect it explicitly here.  I think I
>> added the smp_mb__after_atomic_inc above later, which makes this one
>> superfluous, so I'll drop it.
>
> OK
>
>> >> +
>> >> +     next_state = cpuidle_coupled_get_state(dev, coupled);
>> >> +
>> >> +     entered_state = cpuidle_enter_state(dev, drv, next_state);
>> >> +
>> >> +     cpuidle_coupled_set_not_waiting(dev, coupled);
>> >> +     atomic_dec(&coupled->ready_count);
>> >> +     smp_mb__after_atomic_dec();
>> >> +
>> >> +out:
>> >> +     /*
>> >> +      * Normal cpuidle states are expected to return with irqs enabled.
>> >> +      * That leads to an inefficiency where a cpu receiving an interrupt
>> >> +      * that brings it out of idle will process that interrupt before
>> >> +      * exiting the idle enter function and decrementing ready_count.  All
>> >> +      * other cpus will need to spin waiting for the cpu that is processing
>> >> +      * the interrupt.  If the driver returns with interrupts disabled,
>> >> +      * all other cpus will loop back into the safe idle state instead of
>> >> +      * spinning, saving power.
>> >> +      *
>> >> +      * Calling local_irq_enable here allows coupled states to return with
>> >> +      * interrupts disabled, but won't cause problems for drivers that
>> >> +      * exit with interrupts enabled.
>> >> +      */
>> >> +     local_irq_enable();
>> >> +
>> >> +     /*
>> >> +      * Wait until all coupled cpus have exited idle.  There is no risk that
>> >> +      * a cpu exits and re-enters the ready state because this cpu has
>> >> +      * already decremented its waiting_count.
>> >> +      */
>> >> +     while (atomic_read(&coupled->ready_count) != 0)
>> >> +             cpu_relax();
>> >> +
>> >> +     smp_rmb();
>> >
>> > And here?
>>
>> This was to protect ready_count vs. looping back in and reading
>> alive_count.
>
> Well, I'm lost. :-)
>
> You've not modified anything after the previous smp_mb__after_atomic_dec(),
> so what exactly is the reordering this is supposed to work against?
>
> And while we're at it, I'm not quite sure what the things that the previous
> smp_mb__after_atomic_dec() separates from each other are.

Instead of justifying all of these, let me try the combined atomic_t
trick and justify the (many fewer) remaining barriers.

>> There will be plenty of synchronization calls between
>> the two with implicit barriers, but I thought it was better to do it
>> explicitly.
>
> [...]
>> >> +static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
>> >> +{
>> >> +     struct cpuidle_device *dev;
>> >> +     struct cpuidle_coupled *coupled;
>> >> +
>> >> +     mutex_lock(&cpuidle_lock);
>> >> +
>> >> +     dev = per_cpu(cpuidle_devices, cpu);
>> >> +     if (!dev->coupled)
>> >> +             goto out;
>> >> +
>> >> +     coupled = dev->coupled;
>> >> +
>> >> +     /*
>> >> +      * waiting_count must be at least 1 less than alive_count, because
>> >> +      * this cpu is not waiting.  Spin until all cpus have noticed this cpu
>> >> +      * is not idle and exited the ready loop before changing alive_count.
>> >> +      */
>> >> +     while (atomic_read(&coupled->ready_count))
>> >> +             cpu_relax();
>> >> +
>> >> +     if (alive) {
>> >> +             smp_mb__before_atomic_inc();
>> >> +             atomic_inc(&coupled->alive_count);
>> >> +             smp_mb__after_atomic_inc();
>> >> +             coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
>> >> +     } else {
>> >> +             smp_mb__before_atomic_dec();
>> >> +             atomic_dec(&coupled->alive_count);
>> >> +             smp_mb__after_atomic_dec();
>> >> +             coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
>> >
>> > There's too many SMP barriers above, but I'm not quite sure which of them (if
>> > any) are really necessary.
>> The ones before order ready_count vs alive_count, the ones after order
>> alive_count vs. requested_state and future waiting_count increments.
>
> Well, so what are the matching barriers for these?
>
> Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus
@ 2012-05-04 18:56           ` Colin Cross
  0 siblings, 0 replies; 78+ messages in thread
From: Colin Cross @ 2012-05-04 18:56 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Kevin Hilman, Len Brown, Russell King, Greg Kroah-Hartman,
	Kay Sievers, linux-kernel, Amit Kucheria, linux-pm,
	Arjan van de Ven, Arnd Bergmann, linux-arm-kernel

On Fri, May 4, 2012 at 4:51 AM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> On Friday, May 04, 2012, Colin Cross wrote:
>> On Thu, May 3, 2012 at 3:14 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> [...]
>>
>> >> +/**
>> >> + * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
>> >> + * @coupled: the struct coupled that contains the current cpu
>> >> + *
>> >> + * Returns true if all cpus coupled to this target state are in the wait loop
>> >> + */
>> >> +static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
>> >> +{
>> >> +     int alive;
>> >> +     int waiting;
>> >> +
>> >> +     /*
>> >> +      * Read alive before reading waiting so a booting cpu is not treated as
>> >> +      * idle
>> >> +      */
>> >
>> > Well, the comment doesn't really explain much.  In particular, why the boot CPU
>> > could be treated as idle if the reads were in a different order.
>>
>> Hm, I think the race condition is on a cpu going down.  What about:
>> Read alive before reading waiting.  If waiting is read before alive,
>> this cpu could see another cpu as waiting just before it goes offline,
>> between when the other cpu decrements waiting and when it
>> decrements alive, which could cause alive == waiting when one cpu is
>> not waiting.
>
> Reading them in this particular order doesn't stop the race, though.  I mean,
> if the hotplug happens just right after you've read alive_count, you still have
> a wrong value.  waiting_count is set independently, it seems, so there's no
> ordering between the two on the "store" side and the "load" side ordering
> doesn't matter.

As commented in the hotplug path, hotplug relies on the fact that one
of the cpus in the cluster is involved in the hotplug of the cpu that
is changing (this may not be true for multiple clusters, but it is
easy to fix by IPI-ing to a cpu that is in the same cluster when that
happens).  That means that waiting count is always guaranteed to be at
least 1 less than alive count when alive count changes.  All this read
ordering needs to do is make sure that this cpu doesn't see
waiting_count == alive_count by reading them in the wrong order.

> I would just make the CPU hotplug notifier routine block until
> cpuidle_enter_state_coupled() is done and the latter return immediately
> if the CPU hotplug notifier routine is in progress, perhaps falling back
> to the safe state.  Or I would make the CPU hotplug notifier routine
> disable the "coupled cpuidle" entirely on DOWN_PREPARE and UP_PREPARE
> and only re-enable it after the hotplug has been completed.

I'll take a look at disabling coupled idle completely during hotplug.

>> >> +     alive = atomic_read(&coupled->alive_count);
>> >> +     smp_rmb();
>> >> +     waiting = atomic_read(&coupled->waiting_count);
>> >
>> > Have you considered using one atomic variable to accommodate both counters
>> > such that the upper half contains one counter and the lower half contains
>> > the other?
>>
>> There are 3 counters (alive, waiting, and ready).  Do you want me to
>> squish all of them into a single atomic_t, which would limit to 1023
>> cpus?
>
> No.  I'd make sure that cpuidle_enter_state_coupled() didn't race with CPU
> hotplug, so as to make alive_count stable from its standpoint, and I'd
> put the two remaining counters into one atomic_t variable.

I'll take a look at using a single atomic_t.  My initial worry was
that the increased contention on the shared variable would cause more
cmpxchg retries, but since waiting_count and ready_count are designed
to be modified in sequential phases that shouldn't be an issue.

>> >> +
>> >> +     return (waiting == alive);
>> >> +}
>> >> +
>> >> +/**
>> >> + * cpuidle_coupled_get_state - determine the deepest idle state
>> >> + * @dev: struct cpuidle_device for this cpu
>> >> + * @coupled: the struct coupled that contains the current cpu
>> >> + *
>> >> + * Returns the deepest idle state that all coupled cpus can enter
>> >> + */
>> >> +static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
>> >> +             struct cpuidle_coupled *coupled)
>> >> +{
>> >> +     int i;
>> >> +     int state = INT_MAX;
>> >> +
>> >> +     for_each_cpu_mask(i, coupled->coupled_cpus)
>> >> +             if (coupled->requested_state[i] != CPUIDLE_COUPLED_DEAD &&
>> >> +                 coupled->requested_state[i] < state)
>> >> +                     state = coupled->requested_state[i];
>> >> +
>> >> +     BUG_ON(state >= dev->state_count || state < 0);
>> >
>> > Do you have to crash the kernel here if the assertion doesn't hold?  Maybe
>> > you could use WARN_ON() and return error code?
>>
>> If this BUG_ON is hit, there is a race condition somewhere that
>> allowed a cpu out of idle unexpectedly, and there is no way to recover
>> without more race conditions.  I don't expect this to ever happen, it
>> is mostly there to detect race conditions during development.  Should
>> I drop it completely?
>
> I would just drop it, then, in the final respin of the patch.
>
> [...]
>> >> +{
>> >> +     int alive;
>> >> +
>> >> +     BUG_ON(coupled->requested_state[dev->cpu] >= 0);
>> >
>> > Would be WARN_ON() + do nothing too dangerous here?
>>
>> If this BUG_ON is hit, then this cpu exited idle without clearing its
>> waiting state, which could cause another cpu to enter the deeper idle
>> state while this cpu is still running.  The counters would be out of
>> sync, so it's not easy to recover.  Again, this is to detect race
>> conditions during development, but should never happen.  Should I drop
>> it?
>
> Just like above.
>
>> >> +
>> >> +     coupled->requested_state[dev->cpu] = next_state;
>> >> +
>> >> +     /*
>> >> +      * If this is the last cpu to enter the waiting state, poke
>> >> +      * all the other cpus out of their waiting state so they can
>> >> +      * enter a deeper state.  This can race with one of the cpus
>> >> +      * exiting the waiting state due to an interrupt and
>> >> +      * decrementing waiting_count, see comment below.
>> >> +      */
>> >> +     alive = atomic_read(&coupled->alive_count);
>> >> +     if (atomic_inc_return(&coupled->waiting_count) == alive)
>> >> +             cpuidle_coupled_poke_others(dev, coupled);
>> >> +}
>> >> +
>> >> +/**
>> >> + * cpuidle_coupled_set_not_waiting - mark this cpu as leaving the wait loop
>> >> + * @dev: struct cpuidle_device for this cpu
>> >> + * @coupled: the struct coupled that contains the current cpu
>> >> + *
>> >> + * Removes the requested idle state for the specified cpuidle device.
>> >> + *
>> >> + * Provides memory ordering around waiting_count.
>> >> + */
>> >> +static void cpuidle_coupled_set_not_waiting(struct cpuidle_device *dev,
>> >> +             struct cpuidle_coupled *coupled)
>> >
>> > It looks like dev doesn't have to be passed here, cpu would be enough.
>> >
>> >> +{
>> >> +     BUG_ON(coupled->requested_state[dev->cpu] < 0);
>> >
>> > Well, like above?
>> Same as above.
>
> Ditto. :-)
>
>> >> +
>> >> +     /*
>> >> +      * Decrementing waiting_count can race with incrementing it in
>> >> +      * cpuidle_coupled_set_waiting, but that's OK.  Worst case, some
>> >> +      * cpus will increment ready_count and then spin until they
>> >> +      * notice that this cpu has cleared its requested_state.
>> >> +      */
>> >
>> > So it looks like having ready_count and waiting_count in one atomic variable
>> > can spare us this particular race condition.
>> As above, there are 3 counters here, alive, ready, and waiting.
>
> Please refer to my comment about that above.
>
>> >> +
>> >> +     smp_mb__before_atomic_dec();
>> >> +     atomic_dec(&coupled->waiting_count);
>> >> +     smp_mb__after_atomic_dec();
>> >
>> > Do you really need both the before and after barriers here?  If so, then why?
>>
>> I believe so, waiting is ordered vs. alive and ready, one barrier is
>> for each.  Do you want the answers to these questions here or in the
>> code?  I had comments for every barrier use during development, but it
>> made it too hard to follow the flow of the code.  I could add a
>> comment describing the ordering requirements instead, but it's still
>> hard to translate that to the required barrier locations.
>
> Well, the barriers should be commented in the code, for the sake of people
> reading it and wanting to learn from it if nothing else.
>
> Wherever we put an SMP barrier directly like this, there should be a good
> reason for that and it should be documented.
>
> [...]
>> >> + */
>> >> +int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
>> >> +             struct cpuidle_driver *drv, int next_state)
>> >> +{
>> >> +     int entered_state = -1;
>> >> +     struct cpuidle_coupled *coupled = dev->coupled;
>> >> +     int alive;
>> >> +
>> >> +     if (!coupled)
>> >> +             return -EINVAL;
>> >> +
>> >> +     BUG_ON(atomic_read(&coupled->ready_count));
>> >
>> > Again, I'd do a WARN_ON() and return error code from here (to avoid crashing
>> > the kernel).
>> Same as above, if ready_count is not 0 here then the counters are out
>> of sync and something is about to go horribly wrong, like cutting
>> power to a running cpu.
>
> OK
>
>> >> +     cpuidle_coupled_set_waiting(dev, coupled, next_state);
>> >> +
>> >> +retry:
>> >> +     /*
>> >> +      * Wait for all coupled cpus to be idle, using the deepest state
>> >> +      * allowed for a single cpu.
>> >> +      */
>> >> +     while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
>> >> +             entered_state = cpuidle_enter_state(dev, drv,
>> >> +                     dev->safe_state_index);
>> >> +
>> >> +             local_irq_enable();
>> >> +             while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
>> >> +                     cpu_relax();
>> >
>> > Hmm.  What exactly is this loop supposed to achieve?
>> This is to ensure that the outstanding wakeups have been processed so
>> we don't go to idle with an interrupt pending and immediately wake up.
>
> I see.  Is it actually safe to reenable interrupts at this point, though?

I think so.  The normal idle loop will enable interrupts in a similar
fashion to what happens here.  There are two things to worry about: a
processed interrupt causing work to be scheduled that should bring
this cpu out of idle, or changing the next timer which would
invalidate the current requested state.  The first is handled by
checking need_resched() after interrupts are disabled again; the
second is currently unhandled, but does not affect correct operation;
it just races into a less-than-optimal idle state.

>> >> +             local_irq_disable();
>> >
>> > Anyway, you seem to be calling it twice along with this enabling/disabling of
>> > interrupts.  I'd put that into a separate function and explain its role in a
>> > kerneldoc comment.
>>
>> I left it here to be obvious that I was enabling interrupts in the
>> idle path, but I can refactor it out if you prefer.
>
> Well, you can call the function to make it obvious. :-)
>
> Anyway, I think that code duplication is a worse thing than a reasonable
> amount of non-obviousness, so to speak.
>
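
Sure, I'll pull it out into a helper along these lines (just a sketch,
the name may change):

/*
 * Let any pending poke IPI for this cpu be handled before going back
 * to sleep: enable interrupts, spin until the poke bit is cleared,
 * then disable interrupts again before rechecking the idle conditions.
 */
static void cpuidle_coupled_handle_poke(int cpu)
{
	local_irq_enable();

	while (cpumask_test_cpu(cpu, &cpuidle_coupled_poked_mask))
		cpu_relax();

	local_irq_disable();
}
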
>> >> +     }
>> >> +
>> >> +     /* give a chance to process any remaining pokes */
>> >> +     local_irq_enable();
>> >> +     while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
>> >> +             cpu_relax();
>> >> +     local_irq_disable();
>> >> +
>> >> +     if (need_resched()) {
>> >> +             cpuidle_coupled_set_not_waiting(dev, coupled);
>> >> +             goto out;
>> >> +     }
>> >> +
>> >> +     /*
>> >> +      * All coupled cpus are probably idle.  There is a small chance that
>> >> +      * one of the other cpus just became active.  Increment a counter when
>> >> +      * ready, and spin until all coupled cpus have incremented the counter.
>> >> +      * Once a cpu has incremented the counter, it cannot abort idle and must
>> >> +      * spin until either the count has hit alive_count, or another cpu
>> >> +      * leaves idle.
>> >> +      */
>> >> +
>> >> +     smp_mb__before_atomic_inc();
>> >> +     atomic_inc(&coupled->ready_count);
>> >> +     smp_mb__after_atomic_inc();
>> >
>> > It seems that at least one of these barriers is unnecessary ...
>> The first is to ensure ordering between ready_count and waiting count,
>
> Are you afraid that the test against waiting_count from
> cpuidle_coupled_cpus_waiting() may get reordered after the incrementation
> of ready_count or is it something else?

Yes, ready_count must not be incremented before waiting_count == alive_count.

>> the second is for ready_count vs. alive_count and requested_state.
>
> This one I can understand, but ...
>
>> >> +     /* alive_count can't change while ready_count > 0 */
>> >> +     alive = atomic_read(&coupled->alive_count);
>
> What happens if CPU hotplug happens right here?

According to the comment above that line that can't happen -
alive_count can't change while ready_count > 0, because that implies
that all cpus are waiting and none can be in the hotplug path where
alive_count is changed.  Looking at it again that is not entirely
true, alive_count could change on systems with >2 cpus, but I think it
can't cause an issue because alive_count would be 2 greater than
waiting_count before alive_count was changed.  Either way, it will be
fixed by disabling coupled idle during hotplug.

>> >> +     while (atomic_read(&coupled->ready_count) != alive) {
>> >> +             /* Check if any other cpus bailed out of idle. */
>> >> +             if (!cpuidle_coupled_cpus_waiting(coupled)) {
>> >> +                     atomic_dec(&coupled->ready_count);
>> >> +                     smp_mb__after_atomic_dec();
>
> And the barrier here?  Even if the old value of ready_count leaks into
> the while () loop after retry, that doesn't seem to matter.

All of these will be academic if ready_count and waiting_count share
an atomic_t.
waiting_count must not be decremented by exiting the while loop after
the retry label until ready_count is decremented here, but that is
also protected by the barrier in set_not_waiting.  One of them could
be dropped.
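
With the packing sketched earlier, the bail-out here becomes a single
atomic op on the combined counter, something like this (still
hypothetical, reusing the WAITING_BITS layout from above):

/* drop this cpu's "ready" unit; the waiting bits are untouched by the same op */
static inline void coupled_dec_ready(atomic_t *counts)
{
	atomic_sub(MAX_WAITING_CPUS, counts);
}

The retry loop then reads both halves with one atomic_read(), so the two
counts can never be observed out of step with each other.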

>> >> +                     goto retry;
>> >> +             }
>> >> +
>> >> +             cpu_relax();
>> >> +     }
>> >> +
>> >> +     /* all cpus have acked the coupled state */
>> >> +     smp_rmb();
>> >
>> > What is the barrier here for?
>> This protects ready_count vs. requested_state.  It is already
>> implicitly protected by the atomic_inc_return in set_waiting, but I
>> thought it would be better to protect it explicitly here.  I think I
>> added the smp_mb__after_atomic_inc above later, which makes this one
>> superfluous, so I'll drop it.
>
> OK
>
>> >> +
>> >> +     next_state = cpuidle_coupled_get_state(dev, coupled);
>> >> +
>> >> +     entered_state = cpuidle_enter_state(dev, drv, next_state);
>> >> +
>> >> +     cpuidle_coupled_set_not_waiting(dev, coupled);
>> >> +     atomic_dec(&coupled->ready_count);
>> >> +     smp_mb__after_atomic_dec();
>> >> +
>> >> +out:
>> >> +     /*
>> >> +      * Normal cpuidle states are expected to return with irqs enabled.
>> >> +      * That leads to an inefficiency where a cpu receiving an interrupt
>> >> +      * that brings it out of idle will process that interrupt before
>> >> +      * exiting the idle enter function and decrementing ready_count.  All
>> >> +      * other cpus will need to spin waiting for the cpu that is processing
>> >> +      * the interrupt.  If the driver returns with interrupts disabled,
>> >> +      * all other cpus will loop back into the safe idle state instead of
>> >> +      * spinning, saving power.
>> >> +      *
>> >> +      * Calling local_irq_enable here allows coupled states to return with
>> >> +      * interrupts disabled, but won't cause problems for drivers that
>> >> +      * exit with interrupts enabled.
>> >> +      */
>> >> +     local_irq_enable();
>> >> +
>> >> +     /*
>> >> +      * Wait until all coupled cpus have exited idle.  There is no risk that
>> >> +      * a cpu exits and re-enters the ready state because this cpu has
>> >> +      * already decremented its waiting_count.
>> >> +      */
>> >> +     while (atomic_read(&coupled->ready_count) != 0)
>> >> +             cpu_relax();
>> >> +
>> >> +     smp_rmb();
>> >
>> > And here?
>>
>> This was to protect ready_count vs. looping back in and reading
>> alive_count.
>
> Well, I'm lost. :-)
>
> You've not modified anything after the previous smp_mb__after_atomic_dec(),
> so what exactly is the reordering this is supposed to work against?
>
> And while we're at it, I'm not quite sure what the things that the previous
> smp_mb__after_atomic_dec() separates from each other are.

Instead of justifying all of these, let me try the combined atomic_t
trick and justify the (many fewer) remaining barriers.

>> There will be plenty of synchronization calls between
>> the two with implicit barriers, but I thought it was better to do it
>> explicitly.
>
> [...]
>> >> +static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
>> >> +{
>> >> +     struct cpuidle_device *dev;
>> >> +     struct cpuidle_coupled *coupled;
>> >> +
>> >> +     mutex_lock(&cpuidle_lock);
>> >> +
>> >> +     dev = per_cpu(cpuidle_devices, cpu);
>> >> +     if (!dev->coupled)
>> >> +             goto out;
>> >> +
>> >> +     coupled = dev->coupled;
>> >> +
>> >> +     /*
>> >> +      * waiting_count must be at least 1 less than alive_count, because
>> >> +      * this cpu is not waiting.  Spin until all cpus have noticed this cpu
>> >> +      * is not idle and exited the ready loop before changing alive_count.
>> >> +      */
>> >> +     while (atomic_read(&coupled->ready_count))
>> >> +             cpu_relax();
>> >> +
>> >> +     if (alive) {
>> >> +             smp_mb__before_atomic_inc();
>> >> +             atomic_inc(&coupled->alive_count);
>> >> +             smp_mb__after_atomic_inc();
>> >> +             coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
>> >> +     } else {
>> >> +             smp_mb__before_atomic_dec();
>> >> +             atomic_dec(&coupled->alive_count);
>> >> +             smp_mb__after_atomic_dec();
>> >> +             coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
>> >
>> > There's too many SMP barriers above, but I'm not quite sure which of them (if
>> > any) are really necessary.
>> The ones before order ready_count vs alive_count, the ones after order
>> alive_count vs. requested_state and future waiting_count increments.
>
> Well, so what are the matching barriers for these?
>
> Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [linux-pm] [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus
  2012-05-04 18:56           ` Colin Cross
@ 2012-05-04 22:27             ` Rafael J. Wysocki
  -1 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-05-04 22:27 UTC (permalink / raw)
  To: Colin Cross
  Cc: linux-pm, linux-kernel, Kevin Hilman, Len Brown, Russell King,
	Greg Kroah-Hartman, Kay Sievers, Amit Kucheria, Arjan van de Ven,
	Arnd Bergmann, linux-arm-kernel

On Friday, May 04, 2012, Colin Cross wrote:
> On Fri, May 4, 2012 at 4:51 AM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> > On Friday, May 04, 2012, Colin Cross wrote:
> >> On Thu, May 3, 2012 at 3:14 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> > [...]
> >>
> >> >> +/**
> >> >> + * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
> >> >> + * @coupled: the struct coupled that contains the current cpu
> >> >> + *
> >> >> + * Returns true if all cpus coupled to this target state are in the wait loop
> >> >> + */
> >> >> +static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
> >> >> +{
> >> >> +     int alive;
> >> >> +     int waiting;
> >> >> +
> >> >> +     /*
> >> >> +      * Read alive before reading waiting so a booting cpu is not treated as
> >> >> +      * idle
> >> >> +      */
> >> >
> >> > Well, the comment doesn't really explain much.  In particular, why the boot CPU
> >> > could be treated as idle if the reads were in a different order.
> >>
> >> Hm, I think the race condition is on a cpu going down.  What about:
> >> Read alive before reading waiting.  If waiting is read before alive,
> >> this cpu could see another cpu as waiting just before it goes offline,
> >> between when the other cpu decrements waiting and when it
> >> decrements alive, which could cause alive == waiting when one cpu is
> >> not waiting.
> >
> > Reading them in this particular order doesn't stop the race, though.  I mean,
> > if the hotplug happens just right after you've read alive_count, you still have
> > a wrong value.  waiting_count is set independently, it seems, so there's no
> > ordering between the two on the "store" side and the "load" side ordering
> > doesn't matter.
> 
> As commented in the hotplug path, hotplug relies on the fact that one
> of the cpus in the cluster is involved in the hotplug of the cpu that
> is changing (this may not be true for multiple clusters, but it is
> easy to fix by IPI-ing to a cpu that is in the same cluster when that
> happens).

That's very fragile and potentially sets a trap for people trying to make
the kernel work on systems with multiple clusters.

> That means that waiting count is always guaranteed to be at
> least 1 less than alive count when alive count changes.  All this read
> ordering needs to do is make sure that this cpu doesn't see
> waiting_count == alive_count by reading them in the wrong order.

So, the concern seems to be that if the local CPU reorders the reads
from waiting_count and alive_count and enough time elapses between one
read and the other, the decrementation of waiting_count may happen
between them and then the CPU may use the outdated value for comparison,
right?

Still, though, even if the barrier is there, the modification of
alive_count in the hotplug notifier routine may not happen before
the read from alive_count in cpuidle_coupled_cpus_waiting() is completed.
Isn't that a problem?

> > I would just make the CPU hotplug notifier routine block until
> > cpuidle_enter_state_coupled() is done and the latter return immediately
> > if the CPU hotplug notifier routine is in progress, perhaps falling back
> > to the safe state.  Or I would make the CPU hotplug notifier routine
> > disable the "coupled cpuidle" entirely on DOWN_PREPARE and UP_PREPARE
> > and only re-enable it after the hotplug has been completed.
> 
> I'll take a look at disabling coupled idle completely during hotplug.

Great, thanks!

> >> >> +     alive = atomic_read(&coupled->alive_count);
> >> >> +     smp_rmb();
> >> >> +     waiting = atomic_read(&coupled->waiting_count);
> >> >
> >> > Have you considered using one atomic variable to accommodate both counters
> >> > such that the upper half contains one counter and the lower half contains
> >> > the other?
> >>
> >> There are 3 counters (alive, waiting, and ready).  Do you want me to
> >> squish all of them into a single atomic_t, which would limit to 1023
> >> cpus?
> >
> > No.  I'd make sure that cpuidle_enter_state_coupled() didn't race with CPU
> > hotplug, so as to make alive_count stable from its standpoint, and I'd
> > put the two remaining counters into one atomic_t variable.
> 
> I'll take a look at using a single atomic_t.  My initial worry was
> that the increased contention on the shared variable would cause more
> cmpxchg retries, but since waiting_count and ready_count are designed
> to be modified in sequential phases that shouldn't be an issue.
> 
[...]
> >> >> +     while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
> >> >> +             entered_state = cpuidle_enter_state(dev, drv,
> >> >> +                     dev->safe_state_index);
> >> >> +
> >> >> +             local_irq_enable();
> >> >> +             while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
> >> >> +                     cpu_relax();
> >> >
> >> > Hmm.  What exactly is this loop supposed to achieve?
> >> This is to ensure that the outstanding wakeups have been processed so
> >> we don't go to idle with an interrupt pending and immediately wake up.
> >
> > I see.  Is it actually safe to reenable interrupts at this point, though?
> 
> I think so.  The normal idle loop will enable interrupts in a similar
> fashion to what happens here.  There are two things to worry about: a
> processed interrupt causing work to be scheduled that should bring
> this cpu out of idle, or changing the next timer which would
> invalidate the current requested state.  The first is handled by
> checking need_resched() after interrupts are disabled again; the
> second is currently unhandled, but does not affect correct operation;
> it just races into a less-than-optimal idle state.

I see.

> >> >> +             local_irq_disable();
> >> >
> >> > Anyway, you seem to be calling it twice along with this enabling/disabling of
> >> > interrupts.  I'd put that into a separate function and explain its role in a
> >> > kerneldoc comment.
> >>
> >> I left it here to be obvious that I was enabling interrupts in the
> >> idle path, but I can refactor it out if you prefer.
> >
> > Well, you can call the function to make it obvious. :-)
> >
> > Anyway, I think that code duplication is a worse thing than a reasonable
> > amount of non-obviousness, so to speak.
> >
> >> >> +     }
> >> >> +
> >> >> +     /* give a chance to process any remaining pokes */
> >> >> +     local_irq_enable();
> >> >> +     while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
> >> >> +             cpu_relax();
> >> >> +     local_irq_disable();
> >> >> +
> >> >> +     if (need_resched()) {
> >> >> +             cpuidle_coupled_set_not_waiting(dev, coupled);
> >> >> +             goto out;
> >> >> +     }
> >> >> +
> >> >> +     /*
> >> >> +      * All coupled cpus are probably idle.  There is a small chance that
> >> >> +      * one of the other cpus just became active.  Increment a counter when
> >> >> +      * ready, and spin until all coupled cpus have incremented the counter.
> >> >> +      * Once a cpu has incremented the counter, it cannot abort idle and must
> >> >> +      * spin until either the count has hit alive_count, or another cpu
> >> >> +      * leaves idle.
> >> >> +      */
> >> >> +
> >> >> +     smp_mb__before_atomic_inc();
> >> >> +     atomic_inc(&coupled->ready_count);
> >> >> +     smp_mb__after_atomic_inc();
> >> >
> >> > It seems that at least one of these barriers is unnecessary ...
> >> The first is to ensure ordering between ready_count and waiting count,
> >
> > Are you afraid that the test against waiting_count from
> > cpuidle_coupled_cpus_waiting() may get reordered after the incrementation
> > of ready_count or is it something else?
> 
> Yes, ready_count must not be incremented before waiting_count == alive_count.

Well, control doesn't reach the atomic_inc() statement if this condition
is not satisfied, so I don't see how it can possibly be reordered before
the while () loop without breaking the control flow guarantees.

> >> the second is for ready_count vs. alive_count and requested_state.
> >
> > This one I can understand, but ...
> >
> >> >> +     /* alive_count can't change while ready_count > 0 */
> >> >> +     alive = atomic_read(&coupled->alive_count);
> >
> > What happens if CPU hotplug happens right here?
> 
> According to the comment above that line that can't happen -
> alive_count can't change while ready_count > 0, because that implies
> that all cpus are waiting and none can be in the hotplug path where
> alive_count is changed.  Looking at it again that is not entirely
> true, alive_count could change on systems with >2 cpus, but I think it
> can't cause an issue because alive_count would be 2 greater than
> waiting_count before alive_count was changed.  Either way, it will be
> fixed by disabling coupled idle during hotplug.

Yup.

> >> >> +     while (atomic_read(&coupled->ready_count) != alive) {
> >> >> +             /* Check if any other cpus bailed out of idle. */
> >> >> +             if (!cpuidle_coupled_cpus_waiting(coupled)) {
> >> >> +                     atomic_dec(&coupled->ready_count);
> >> >> +                     smp_mb__after_atomic_dec();
> >
> > And the barrier here?  Even if the old value of ready_count leaks into
> > the while () loop after retry, that doesn't seem to matter.
> 
> All of these will be academic if ready_count and waiting_count share
> an atomic_t.
> waiting_count must not be decremented by exiting the while loop after
> the retry label until ready_count is decremented here, but that is
> also protected by the barrier in set_not_waiting.  One of them could
> be dropped.
> 
> >> >> +                     goto retry;
> >> >> +             }
> >> >> +
> >> >> +             cpu_relax();
> >> >> +     }
> >> >> +
> >> >> +     /* all cpus have acked the coupled state */
> >> >> +     smp_rmb();
> >> >
> >> > What is the barrier here for?
> >> This protects ready_count vs. requested_state.  It is already
> >> implicitly protected by the atomic_inc_return in set_waiting, but I
> >> thought it would be better to protect it explicitly here.  I think I
> >> added the smp_mb__after_atomic_inc above later, which makes this one
> >> superfluous, so I'll drop it.
> >
> > OK
> >
> >> >> +
> >> >> +     next_state = cpuidle_coupled_get_state(dev, coupled);
> >> >> +
> >> >> +     entered_state = cpuidle_enter_state(dev, drv, next_state);
> >> >> +
> >> >> +     cpuidle_coupled_set_not_waiting(dev, coupled);
> >> >> +     atomic_dec(&coupled->ready_count);
> >> >> +     smp_mb__after_atomic_dec();
> >> >> +
> >> >> +out:
> >> >> +     /*
> >> >> +      * Normal cpuidle states are expected to return with irqs enabled.
> >> >> +      * That leads to an inefficiency where a cpu receiving an interrupt
> >> >> +      * that brings it out of idle will process that interrupt before
> >> >> +      * exiting the idle enter function and decrementing ready_count.  All
> >> >> +      * other cpus will need to spin waiting for the cpu that is processing
> >> >> +      * the interrupt.  If the driver returns with interrupts disabled,
> >> >> +      * all other cpus will loop back into the safe idle state instead of
> >> >> +      * spinning, saving power.
> >> >> +      *
> >> >> +      * Calling local_irq_enable here allows coupled states to return with
> >> >> +      * interrupts disabled, but won't cause problems for drivers that
> >> >> +      * exit with interrupts enabled.
> >> >> +      */
> >> >> +     local_irq_enable();
> >> >> +
> >> >> +     /*
> >> >> +      * Wait until all coupled cpus have exited idle.  There is no risk that
> >> >> +      * a cpu exits and re-enters the ready state because this cpu has
> >> >> +      * already decremented its waiting_count.
> >> >> +      */
> >> >> +     while (atomic_read(&coupled->ready_count) != 0)
> >> >> +             cpu_relax();
> >> >> +
> >> >> +     smp_rmb();
> >> >
> >> > And here?
> >>
> >> This was to protect ready_count vs. looping back in and reading
> >> alive_count.
> >
> > Well, I'm lost. :-)
> >
> > You've not modified anything after the previous smp_mb__after_atomic_dec(),
> > so what exactly is the reordering this is supposed to work against?
> >
> > And while we're at it, I'm not quite sure what the things that the previous
> > smp_mb__after_atomic_dec() separates from each other are.
> 
> Instead of justifying all of these, let me try the combined atomic_t
> trick and justify the (many fewer) remaining barriers.

OK, cool! :-)

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus
@ 2012-05-04 22:27             ` Rafael J. Wysocki
  0 siblings, 0 replies; 78+ messages in thread
From: Rafael J. Wysocki @ 2012-05-04 22:27 UTC (permalink / raw)
  To: Colin Cross
  Cc: Kevin Hilman, Len Brown, Russell King, Greg Kroah-Hartman,
	Kay Sievers, linux-kernel, Amit Kucheria, linux-pm,
	Arjan van de Ven, Arnd Bergmann, linux-arm-kernel

On Friday, May 04, 2012, Colin Cross wrote:
> On Fri, May 4, 2012 at 4:51 AM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> > On Friday, May 04, 2012, Colin Cross wrote:
> >> On Thu, May 3, 2012 at 3:14 PM, Rafael J. Wysocki <rjw@sisk.pl> wrote:
> > [...]
> >>
> >> >> +/**
> >> >> + * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
> >> >> + * @coupled: the struct coupled that contains the current cpu
> >> >> + *
> >> >> + * Returns true if all cpus coupled to this target state are in the wait loop
> >> >> + */
> >> >> +static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
> >> >> +{
> >> >> +     int alive;
> >> >> +     int waiting;
> >> >> +
> >> >> +     /*
> >> >> +      * Read alive before reading waiting so a booting cpu is not treated as
> >> >> +      * idle
> >> >> +      */
> >> >
> >> > Well, the comment doesn't really explain much.  In particular, why the boot CPU
> >> > could be treated as idle if the reads were in a different order.
> >>
> >> Hm, I think the race condition is on a cpu going down.  What about:
> >> Read alive before reading waiting.  If waiting is read before alive,
> >> this cpu could see another cpu as waiting just before it goes offline,
> >> between when it the other cpu decrements waiting and when it
> >> decrements alive, which could cause alive == waiting when one cpu is
> >> not waiting.
> >
> > Reading them in this particular order doesn't stop the race, though.  I mean,
> > if the hotplug happens just right after you've read alive_count, you still have
> > a wrong value.  waiting_count is set independently, it seems, so there's no
> > ordering between the two on the "store" side and the "load" side ordering
> > doesn't matter.
> 
> As commented in the hotplug path, hotplug relies on the fact that one
> of the cpus in the cluster is involved in the hotplug of the cpu that
> is changing (this may not be true for multiple clusters, but it is
> easy to fix by IPI-ing to a cpu that is in the same cluster when that
> happens).

That's very fragile and potentially sets a trap for people trying to make
the kernel work on systems with multiple clusters.

> That means that waiting count is always guaranteed to be at
> least 1 less than alive count when alive count changes.  All this read
> ordering needs to do is make sure that this cpu doesn't see
> waiting_count == alive_count by reading them in the wrong order.

So, the concern seems to be that if the local CPU reorders the reads
from waiting_count and alive_count and enough time elapses between one
read and the other, the decrementation of waiting_count may happen
between them and then the CPU may use the outdated value for comparison,
right?

Still, though, even if the barrier is there, the modification of
alive_count in the hotplug notifier routine may not happen before
the read from alive_count in cpuidle_coupled_cpus_waiting() is completed.
Isn't that a problem?

> > I would just make the CPU hotplug notifier routine block until
> > cpuidle_enter_state_coupled() is done and the latter return immediately
> > if the CPU hotplug notifier routine is in progress, perhaps falling back
> > to the safe state.  Or I would make the CPU hotplug notifier routine
> > disable the "coupled cpuidle" entirely on DOWN_PREPARE and UP_PREPARE
> > and only re-enable it after the hotplug has been completed.
> 
> I'll take a look at disabling coupled idle completely during hotplug.

Great, thanks!

> >> >> +     alive = atomic_read(&coupled->alive_count);
> >> >> +     smp_rmb();
> >> >> +     waiting = atomic_read(&coupled->waiting_count);
> >> >
> >> > Have you considered using one atomic variable to accommodate both counters
> >> > such that the upper half contains one counter and the lower half contains
> >> > the other?
> >>
> >> There are 3 counters (alive, waiting, and ready).  Do you want me to
> >> squish all of them into a single atomic_t, which would limit to 1023
> >> cpus?
> >
> > No.  I'd make sure that cpuidle_enter_state_coupled() did't race with CPU
> > hotplug, so as to make alive_count stable from its standpoint, and I'd
> > put the two remaining counters into one atomic_t variable.
> 
> I'll take a look at using a single atomic_t.  My initial worry was
> that the increased contention on the shared variable would cause more
> cmpxchg retries, but since waiting_count and ready_count are designed
> to be modified in sequential phases that shouldn't be an issue.
> 
[...]
> >> >> +     while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
> >> >> +             entered_state = cpuidle_enter_state(dev, drv,
> >> >> +                     dev->safe_state_index);
> >> >> +
> >> >> +             local_irq_enable();
> >> >> +             while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
> >> >> +                     cpu_relax();
> >> >
> >> > Hmm.  What exactly is this loop supposed to achieve?
> >> This is to ensure that the outstanding wakeups have been processed so
> >> we don't go to idle with an interrupt pending an immediately wake up.
> >
> > I see.  Is it actually safe to reenable interrupts at this point, though?
> 
> I think so.  The normal idle loop will enable interrupts in a similar
> fashion to what happens here.  There are two things to worry about: a
> processed interrupt causing work to be scheduled that should bring
> this cpu out of idle, or changing the next timer which would
> invalidate the current requested state.  The first is handled by
> checking need_resched() after interrupts are disabled again, the
> second is currently unhandled but does not affect correct operation,
> it just races into a less-than-optimal idle state.

I see.

> >> >> +             local_irq_disable();
> >> >
> >> > Anyway, you seem to be calling it twice along with this enabling/disabling of
> >> > interrupts.  I'd put that into a separate function and explain its role in a
> >> > kerneldoc comment.
> >>
> >> I left it here to be obvious that I was enabling interrupts in the
> >> idle path, but I can refactor it out if you prefer.
> >
> > Well, you can call the function to make it obvious. :-)
> >
> > Anyway, I think that code duplication is a worse thing than a reasonable
> > amount of non-obviousness, so to speak.
> >
> >> >> +     }
> >> >> +
> >> >> +     /* give a chance to process any remaining pokes */
> >> >> +     local_irq_enable();
> >> >> +     while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
> >> >> +             cpu_relax();
> >> >> +     local_irq_disable();
> >> >> +
> >> >> +     if (need_resched()) {
> >> >> +             cpuidle_coupled_set_not_waiting(dev, coupled);
> >> >> +             goto out;
> >> >> +     }
> >> >> +
> >> >> +     /*
> >> >> +      * All coupled cpus are probably idle.  There is a small chance that
> >> >> +      * one of the other cpus just became active.  Increment a counter when
> >> >> +      * ready, and spin until all coupled cpus have incremented the counter.
> >> >> +      * Once a cpu has incremented the counter, it cannot abort idle and must
> >> >> +      * spin until either the count has hit alive_count, or another cpu
> >> >> +      * leaves idle.
> >> >> +      */
> >> >> +
> >> >> +     smp_mb__before_atomic_inc();
> >> >> +     atomic_inc(&coupled->ready_count);
> >> >> +     smp_mb__after_atomic_inc();
> >> >
> >> > It seems that at least one of these barriers is unnecessary ...
> >> The first is to ensure ordering between ready_count and waiting count,
> >
> > Are you afraid that the test against waiting_count from
> > cpuidle_coupled_cpus_waiting() may get reordered after the incrementation
> > of ready_count or is it something else?
> 
> Yes, ready_count must not be incremented before waiting_count == alive_count.

Well, control doesn't reach the atomic_inc() statement if this condition
is not satisfied, so I don't see how it can be possibly reordered before
the while () loop without breaking the control flow guarantees.

> >> the second is for ready_count vs. alive_count and requested_state.
> >
> > This one I can understand, but ...
> >
> >> >> +     /* alive_count can't change while ready_count > 0 */
> >> >> +     alive = atomic_read(&coupled->alive_count);
> >
> > What happens if CPU hotplug happens right here?
> 
> According to the comment above that line that can't happen -
> alive_count can't change while ready_count > 0, because that implies
> that all cpus are waiting and none can be in the hotplug path where
> alive_count is changed.  Looking at it again that is not entirely
> true, alive_count could change on systems with >2 cpus, but I think it
> can't cause an issue because alive_count would be 2 greater than
> waiting_count before alive_count was changed.  Either way, it will be
> fixed by disabling coupled idle during hotplug.

Yup.

> >> >> +     while (atomic_read(&coupled->ready_count) != alive) {
> >> >> +             /* Check if any other cpus bailed out of idle. */
> >> >> +             if (!cpuidle_coupled_cpus_waiting(coupled)) {
> >> >> +                     atomic_dec(&coupled->ready_count);
> >> >> +                     smp_mb__after_atomic_dec();
> >
> > And the barrier here?  Even if the old value of ready_count leaks into
> > the while () loop after retry, that doesn't seem to matter.
> 
> All of these will be academic if ready_count and waiting_count share
> an atomic_t.
> waiting_count must not be decremented by exiting the while loop after
> the retry label until ready_count is decremented here, but that is
> also protected by the barrier in set_not_waiting.  One of them could
> be dropped.
> 
> >> >> +                     goto retry;
> >> >> +             }
> >> >> +
> >> >> +             cpu_relax();
> >> >> +     }
> >> >> +
> >> >> +     /* all cpus have acked the coupled state */
> >> >> +     smp_rmb();
> >> >
> >> > What is the barrier here for?
> >> This protects ready_count vs. requested_state.  It is already
> >> implicitly protected by the atomic_inc_return in set_waiting, but I
> >> thought it would be better to protect it explicitly here.  I think I
> >> added the smp_mb__after_atomic_inc above later, which makes this one
> >> superfluous, so I'll drop it.
> >
> > OK
> >
> >> >> +
> >> >> +     next_state = cpuidle_coupled_get_state(dev, coupled);
> >> >> +
> >> >> +     entered_state = cpuidle_enter_state(dev, drv, next_state);
> >> >> +
> >> >> +     cpuidle_coupled_set_not_waiting(dev, coupled);
> >> >> +     atomic_dec(&coupled->ready_count);
> >> >> +     smp_mb__after_atomic_dec();
> >> >> +
> >> >> +out:
> >> >> +     /*
> >> >> +      * Normal cpuidle states are expected to return with irqs enabled.
> >> >> +      * That leads to an inefficiency where a cpu receiving an interrupt
> >> >> +      * that brings it out of idle will process that interrupt before
> >> >> +      * exiting the idle enter function and decrementing ready_count.  All
> >> >> +      * other cpus will need to spin waiting for the cpu that is processing
> >> >> +      * the interrupt.  If the driver returns with interrupts disabled,
> >> >> +      * all other cpus will loop back into the safe idle state instead of
> >> >> +      * spinning, saving power.
> >> >> +      *
> >> >> +      * Calling local_irq_enable here allows coupled states to return with
> >> >> +      * interrupts disabled, but won't cause problems for drivers that
> >> >> +      * exit with interrupts enabled.
> >> >> +      */
> >> >> +     local_irq_enable();
> >> >> +
> >> >> +     /*
> >> >> +      * Wait until all coupled cpus have exited idle.  There is no risk that
> >> >> +      * a cpu exits and re-enters the ready state because this cpu has
> >> >> +      * already decremented its waiting_count.
> >> >> +      */
> >> >> +     while (atomic_read(&coupled->ready_count) != 0)
> >> >> +             cpu_relax();
> >> >> +
> >> >> +     smp_rmb();
> >> >
> >> > And here?
> >>
> >> This was to protect ready_count vs. looping back in and reading
> >> alive_count.
> >
> > Well, I'm lost. :-)
> >
> > You've not modified anything after the previous smp_mb__after_atomic_dec(),
> > so what exactly is the reordering this is supposed to work against?
> >
> > And while we're at it, I'm not quite sure which accesses the previous
> > smp_mb__after_atomic_dec() is supposed to separate from each other.
> 
> Instead of justifying all of these, let me try the combined atomic_t
> trick and justify the (many fewer) remaining barriers.

OK, cool! :-)
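
In case it is useful, this is the sort of thing I mean by the combined
counter -- a minimal userspace sketch using C11 atomics, so the field
layout, widths and helper names are made up and it is obviously not the
kernel code.  The point it illustrates is that "is everybody still
waiting?" and "bump ready_count" become a single atomic step:

	/* build with: cc -std=c11 sketch.c */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define WAITING_SHIFT	0
	#define READY_SHIFT	16
	#define COUNT_MASK	0xffffu

	/* ready_count in the upper 16 bits, waiting_count in the lower 16 */
	static atomic_uint coupled_count;

	static unsigned int waiting(unsigned int v)
	{
		return (v >> WAITING_SHIFT) & COUNT_MASK;
	}

	static unsigned int ready(unsigned int v)
	{
		return (v >> READY_SHIFT) & COUNT_MASK;
	}

	/*
	 * Move this cpu from "waiting" to "ready", but only while every
	 * alive cpu is still waiting; returns false if some cpu bailed
	 * out of idle in the meantime.
	 */
	static bool set_ready(unsigned int alive)
	{
		unsigned int old = atomic_load(&coupled_count);

		do {
			if (waiting(old) != alive)
				return false;	/* somebody left idle */
		} while (!atomic_compare_exchange_weak(&coupled_count, &old,
						old + (1u << READY_SHIFT)));
		return true;
	}

	int main(void)
	{
		unsigned int alive = 2;

		/* pretend both cpus have announced they are waiting */
		atomic_store(&coupled_count, 2u << WAITING_SHIFT);

		printf("cpu0 ready: %d\n", set_ready(alive));
		printf("cpu1 ready: %d\n", set_ready(alive));
		printf("ready_count is now %u\n",
		       ready(atomic_load(&coupled_count)));
		return 0;
	}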

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 78+ messages in thread

end of thread, other threads:[~2012-05-04 22:27 UTC | newest]

Thread overview: 78+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-04-30 20:09 [PATCHv3 0/5] coupled cpuidle state support Colin Cross
2012-04-30 20:09 ` Colin Cross
2012-04-30 20:09 ` [PATCHv3 1/5] cpuidle: refactor out cpuidle_enter_state Colin Cross
2012-04-30 20:09   ` Colin Cross
2012-05-03 20:50   ` Rafael J. Wysocki
2012-05-03 20:50     ` Rafael J. Wysocki
2012-05-03 20:50     ` Rafael J. Wysocki
2012-04-30 20:09 ` [PATCHv3 2/5] cpuidle: fix error handling in __cpuidle_register_device Colin Cross
2012-04-30 20:09   ` Colin Cross
2012-04-30 20:09   ` Colin Cross
2012-05-03 20:50   ` [linux-pm] " Rafael J. Wysocki
2012-05-03 20:50     ` Rafael J. Wysocki
2012-05-03 20:50     ` Rafael J. Wysocki
2012-04-30 20:09 ` [PATCHv3 3/5] cpuidle: add support for states that affect multiple cpus Colin Cross
2012-04-30 20:09   ` Colin Cross
2012-04-30 20:09   ` Colin Cross
2012-05-03 22:14   ` [linux-pm] " Rafael J. Wysocki
2012-05-03 22:14     ` Rafael J. Wysocki
2012-05-03 22:14     ` Rafael J. Wysocki
2012-05-03 23:09     ` [linux-pm] " Colin Cross
2012-05-03 23:09       ` Colin Cross
2012-05-03 23:09       ` Colin Cross
2012-05-04 11:51       ` [linux-pm] " Rafael J. Wysocki
2012-05-04 11:51         ` Rafael J. Wysocki
2012-05-04 11:51         ` Rafael J. Wysocki
2012-05-04 18:56         ` [linux-pm] " Colin Cross
2012-05-04 18:56           ` Colin Cross
2012-05-04 18:56           ` Colin Cross
2012-05-04 22:27           ` [linux-pm] " Rafael J. Wysocki
2012-05-04 22:27             ` Rafael J. Wysocki
2012-05-04 22:27             ` Rafael J. Wysocki
2012-04-30 20:09 ` [PATCHv3 4/5] cpuidle: coupled: add parallel barrier function Colin Cross
2012-04-30 20:09   ` Colin Cross
2012-04-30 20:09 ` [PATCHv3 5/5] cpuidle: coupled: add trace events Colin Cross
2012-04-30 20:09   ` Colin Cross
2012-04-30 20:09   ` Colin Cross
2012-05-03 21:00   ` Steven Rostedt
2012-05-03 21:00     ` Steven Rostedt
2012-05-03 21:00     ` Steven Rostedt
2012-05-03 21:13     ` Colin Cross
2012-05-03 21:13       ` Colin Cross
2012-05-03 21:13       ` Colin Cross
2012-04-30 21:18 ` [PATCHv3 0/5] coupled cpuidle state support Colin Cross
2012-04-30 21:18   ` Colin Cross
2012-04-30 21:18   ` Colin Cross
2012-04-30 21:25 ` Rafael J. Wysocki
2012-04-30 21:25   ` Rafael J. Wysocki
2012-04-30 21:25   ` Rafael J. Wysocki
2012-04-30 21:37   ` Colin Cross
2012-04-30 21:37     ` Colin Cross
2012-04-30 21:37     ` Colin Cross
2012-04-30 21:54     ` Rafael J. Wysocki
2012-04-30 21:54       ` Rafael J. Wysocki
2012-04-30 21:54       ` Rafael J. Wysocki
2012-04-30 22:01       ` Colin Cross
2012-04-30 22:01         ` Colin Cross
2012-04-30 22:01         ` Colin Cross
2012-05-03 20:00         ` Rafael J. Wysocki
2012-05-03 20:00           ` Rafael J. Wysocki
2012-05-03 20:00           ` Rafael J. Wysocki
2012-05-03 20:18           ` Colin Cross
2012-05-03 20:18             ` Colin Cross
2012-05-03 20:18             ` Colin Cross
2012-05-03 20:43             ` Rafael J. Wysocki
2012-05-03 20:43               ` Rafael J. Wysocki
2012-05-03 20:43               ` Rafael J. Wysocki
2012-05-04 10:04           ` Lorenzo Pieralisi
2012-05-04 10:04             ` Lorenzo Pieralisi
2012-05-04 10:04             ` Lorenzo Pieralisi
2012-05-01 10:43     ` Lorenzo Pieralisi
2012-05-01 10:43       ` Lorenzo Pieralisi
2012-05-01 10:43       ` Lorenzo Pieralisi
2012-05-02  0:11       ` Colin Cross
2012-05-02  0:11         ` Colin Cross
2012-05-02  0:11         ` Colin Cross
2012-05-02  7:22         ` Santosh Shilimkar
2012-05-02  7:22           ` Santosh Shilimkar
2012-05-02  7:22           ` Santosh Shilimkar
