* [PATCH v3 0/3] Forced-wakeup for stop states on Powernv
From: Abhishek Goel @ 2019-07-04  9:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev, linux-pm
  Cc: npiggin, rjw, daniel.lezcano, mpe, ego, dja, Abhishek Goel

Currently, the cpuidle governors determine what idle state an idling CPU
should enter based on heuristics that depend on the idle history of that
CPU. Given that no predictive heuristic is perfect, there are cases where
the governor predicts a shallow idle state, hoping that the CPU will be
busy soon. However, if no new workload is scheduled on that CPU in the
near future, the CPU remains stuck in the shallow state.

Motivation
----------
On POWER, this is problematic when the predicted state in the
aforementioned scenario is a shallow stop state on a tickless system,
since we might stay stuck in a shallow state for hours in the absence of
ticks or interrupts.

To address this, we forcefully wake up the CPU by programming the
decrementer. The decrementer is set to a value that corresponds to the
residency of the next available state, thus arming a timer that will
forcefully wake up the CPU. A few such iterations will essentially train
the governor to select a deeper state for that CPU, since the timer here
corresponds to the residency of the next available cpuidle state. The CPU
will thus eventually end up in the deepest possible state and will not
stay stuck in a shallow state for a long duration.
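
In code terms, the idle entry path ends up doing the following (condensed
from the stop_loop() change in patch 1; the psscr details are elided):

	timeout_tb = forced_wakeup_timeout(dev, drv, index);
	if (timeout_tb)		/* a deeper state exists and is enabled */
		forced_wakeup = set_dec_before_idle(timeout_tb);

	power9_idle_type(stop_psscr_table[index].val,
			 stop_psscr_table[index].mask);

	if (forced_wakeup)	/* restore the next real timer event */
		reset_dec_after_idle();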

Experiment
----------
In earlier versions, when this feature was meant only for shallow lite
states, I performed experiments for three scenarios to collect some data.

case 1 :
Without this patch and without the tick retained, i.e. in an upstream
kernel, the CPU could take more than a second to get out of stop0_lite.

case 2 : With the tick retained in an upstream kernel -

Generally, we have a sched tick every 4ms (CONFIG_HZ = 250, i.e.
1000ms / 250). Ideally, I expected it to take 8 sched ticks to get out of
stop0_lite. Experimentally, the observation was:

=========================================================
sample          min            max           99percentile
=========================================================
20              4ms            12ms          4ms
=========================================================

It would take at least one sched tick to get out of stop0_lite.

case 3 :  With this patch (not stopping the tick, but explicitly queuing
          a timer)

============================================================
sample          min             max             99percentile
============================================================
20              144us           192us           144us
============================================================


Description of current implementation
-------------------------------------

We calculate the timeout for the current idle state as the residency
value of the next available idle state. If the decrementer is set to fire
later than this timeout, we reprogram the decrementer with the residency
of the next available idle state. This essentially trains the governor to
select the next available deeper state until we reach the deepest state,
so we will not get stuck unnecessarily in shallow states for long
durations.
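
For example, with a (hypothetical) next enabled state of
target_residency = 100us and exit_latency = 10us, the value programmed
into the decrementer follows the formula used in patch 1:

	/* from forced_wakeup_timeout() */
	timeout_tb = (s->target_residency + 2 * s->exit_latency) *
			tb_ticks_per_usec;
	/* = (100 + 2 * 10) * 512 = 61440 timebase ticks, i.e. 120us,
	 * taking the usual 512 ticks/us POWER9 timebase */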

--------------------------------
v1 of auto-promotion : https://lkml.org/lkml/2019/3/22/58. This patch was
implemented only for the shallow lite state in the generic cpuidle driver.

v2 : Removed timeout_needed and rebased to current
upstream kernel

Then,
v1 of forced-wakeup : Moved the code to cpuidle powernv driver and started
as forced wakeup instead of auto-promotion

v2 : Extended the forced wakeup logic to all states. Set the decrementer
instead of queuing up an hrtimer to implement the logic.

v3 : 1) Cleanly handle setting the decrementer after exiting out of stop
       states.
     2) Added a disable_callback feature to compute the timeout whenever
        a state is enabled or disabled, instead of computing it every
        time in the fast idle path.
     3) Use the disable callback to recompute the timeout whenever state
        usage is changed for a state. Also, cleaned up the
        get_snooze_timeout function.


Abhishek Goel (3):
  cpuidle-powernv : forced wakeup for stop states
  cpuidle : Add callback whenever a state usage is enabled/disabled
  cpuidle-powernv : Recompute the idle-state timeouts when state usage
    is enabled/disabled

 arch/powerpc/include/asm/time.h   |  2 ++
 arch/powerpc/kernel/time.c        | 40 ++++++++++++++++++++++++++
 drivers/cpuidle/cpuidle-powernv.c | 47 ++++++++++++++++++++++---------
 drivers/cpuidle/sysfs.c           | 15 +++++++++-
 include/linux/cpuidle.h           |  5 ++++
 5 files changed, 95 insertions(+), 14 deletions(-)

-- 
2.17.1


* [PATCH v3 1/3] cpuidle-powernv : forced wakeup for stop states
From: Abhishek Goel @ 2019-07-04  9:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev, linux-pm
  Cc: npiggin, rjw, daniel.lezcano, mpe, ego, dja, Abhishek Goel

Currently, the cpuidle governors determine what idle state an idling CPU
should enter based on heuristics that depend on the idle history of that
CPU. Given that no predictive heuristic is perfect, there are cases where
the governor predicts a shallow idle state, hoping that the CPU will be
busy soon. However, if no new workload is scheduled on that CPU in the
near future, the CPU may end up stuck in the shallow state.

This is problematic when the predicted state in the aforementioned
scenario is a shallow stop state on a tickless system, since we might
stay stuck in shallow states for hours in the absence of ticks or
interrupts.

To address this, we forcefully wake up the CPU by programming the
decrementer. The decrementer is set to a value that corresponds to the
residency of the next available state, thus arming a timer that will
forcefully wake up the CPU. A few such iterations will essentially train
the governor to select a deeper state for that CPU, since the timer here
corresponds to the residency of the next available cpuidle state. The CPU
will thus eventually end up in the deepest possible state.

Signed-off-by: Abhishek Goel <huntbag@linux.vnet.ibm.com>
---

Auto-promotion
 v1 : started as auto-promotion logic for cpuidle states in the generic
driver
 v2 : Removed timeout_needed and rebased the code to the upstream kernel
Forced-wakeup
 v1 : New patch with the name forced wakeup started
 v2 : Extended the forced wakeup logic to all states. Set the
decrementer instead of queuing up an hrtimer to implement the logic.
 v3 : Cleanly handle setting/resetting of the decrementer so as not to
break irq work

 arch/powerpc/include/asm/time.h   |  2 ++
 arch/powerpc/kernel/time.c        | 40 +++++++++++++++++++++++++++++++
 drivers/cpuidle/cpuidle-powernv.c | 32 +++++++++++++++++++++++++
 3 files changed, 74 insertions(+)

diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index 54f4ec1f9..a3bd4f3c0 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -188,6 +188,8 @@ static inline unsigned long tb_ticks_since(unsigned long tstamp)
 extern u64 mulhdu(u64, u64);
 #endif
 
+extern int set_dec_before_idle(u64 timeout);
+extern void reset_dec_after_idle(void);
 extern void div128_by_32(u64 dividend_high, u64 dividend_low,
 			 unsigned divisor, struct div_result *dr);
 
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 694522308..814de3469 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -576,6 +576,46 @@ void arch_irq_work_raise(void)
 
 #endif /* CONFIG_IRQ_WORK */
 
+/*
+ * Returns 1 if we have reprogrammed the decrementer for idle.
+ * Returns 0 if the decrementer is unchanged.
+ */
+int set_dec_before_idle(u64 timeout)
+{
+	u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
+	u64 now = get_tb_or_rtc();
+
+	/*
+	 * Ensure that the timeout expires at least one microsecond
+	 * before the current decrementer value. Else, we will
+	 * unnecessarily wake up again within a microsecond.
+	 */
+	if (now + timeout + 512 > *next_tb)
+		return 0;
+
+	set_dec(timeout);
+
+	return 1;
+}
+
+void reset_dec_after_idle(void)
+{
+	u64 now;
+	u64 *next_tb;
+
+	if (test_irq_work_pending())
+		return;
+
+	now = get_tb_or_rtc();
+	next_tb = this_cpu_ptr(&decrementers_next_tb);
+	if (now >= *next_tb)
+		return;
+
+	set_dec(*next_tb - now);
+	if (test_irq_work_pending())
+		set_dec(1);
+}
+
 /*
  * timer_interrupt - gets called when the decrementer overflows,
  * with interrupts disabled.
diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index 84b1ebe21..f51478460 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -21,6 +21,7 @@
 #include <asm/opal.h>
 #include <asm/runlatch.h>
 #include <asm/cpuidle.h>
+#include <asm/time.h>
 
 /*
  * Expose only those Hardware idle states via the cpuidle framework
@@ -46,6 +47,26 @@ static struct stop_psscr_table stop_psscr_table[CPUIDLE_STATE_MAX] __read_mostly
 static u64 default_snooze_timeout __read_mostly;
 static bool snooze_timeout_en __read_mostly;
 
+static u64 forced_wakeup_timeout(struct cpuidle_device *dev,
+				 struct cpuidle_driver *drv,
+				 int index)
+{
+	int i;
+
+	for (i = index + 1; i < drv->state_count; i++) {
+		struct cpuidle_state *s = &drv->states[i];
+		struct cpuidle_state_usage *su = &dev->states_usage[i];
+
+		if (s->disabled || su->disable)
+			continue;
+
+		return (s->target_residency + 2 * s->exit_latency) *
+			tb_ticks_per_usec;
+	}
+
+	return 0;
+}
+
 static u64 get_snooze_timeout(struct cpuidle_device *dev,
 			      struct cpuidle_driver *drv,
 			      int index)
@@ -144,8 +165,19 @@ static int stop_loop(struct cpuidle_device *dev,
 		     struct cpuidle_driver *drv,
 		     int index)
 {
+	u64 timeout_tb;
+	int forced_wakeup = 0;
+
+	timeout_tb = forced_wakeup_timeout(dev, drv, index);
+	if (timeout_tb)
+		forced_wakeup = set_dec_before_idle(timeout_tb);
+
 	power9_idle_type(stop_psscr_table[index].val,
 			 stop_psscr_table[index].mask);
+
+	if (forced_wakeup)
+		reset_dec_after_idle();
+
 	return index;
 }
 
-- 
2.17.1


* [RFC v3 2/3] cpuidle : Add callback whenever a state usage is enabled/disabled
From: Abhishek Goel @ 2019-07-04  9:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev, linux-pm
  Cc: npiggin, rjw, daniel.lezcano, mpe, ego, dja, Abhishek Goel

To force-wake a CPU, we need to compute the timeout in the fast idle
path, since a state may be enabled or disabled at any time, but there was
no feedback to the driver when a state's availability changed.
This patch adds a callback that is invoked whenever a state_usage records
a store to its disable attribute.
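
A driver opts in by filling the new member at init time and recomputing
whatever it caches. Patch 3 does exactly this for powernv, roughly:

	static void pnv_disable_callback(struct cpuidle_device *dev,
					 struct cpuidle_driver *drv)
	{
		int i;

		/* availability changed: refresh cached per-state timeouts */
		for (i = 0; i < drv->state_count; i++)
			drv->states[i].timeout =
				forced_wakeup_timeout(dev, drv, i);
	}

	/* in powernv_cpuidle_driver_init(): */
	drv->disable_callback = pnv_disable_callback;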

Signed-off-by: Abhishek Goel <huntbag@linux.vnet.ibm.com>
---
 drivers/cpuidle/sysfs.c | 15 ++++++++++++++-
 include/linux/cpuidle.h |  4 ++++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c
index eb20adb5d..141671a53 100644
--- a/drivers/cpuidle/sysfs.c
+++ b/drivers/cpuidle/sysfs.c
@@ -415,8 +415,21 @@ static ssize_t cpuidle_state_store(struct kobject *kobj, struct attribute *attr,
 	struct cpuidle_state_usage *state_usage = kobj_to_state_usage(kobj);
 	struct cpuidle_state_attr *cattr = attr_to_stateattr(attr);
 
-	if (cattr->store)
+	if (cattr->store) {
 		ret = cattr->store(state, state_usage, buf, size);
+		if (ret == size &&
+			!strncmp(cattr->attr.name, "disable",
+						strlen("disable"))) {
+			struct kobject *cpuidle_kobj = kobj->parent;
+			struct cpuidle_device *dev =
+					to_cpuidle_device(cpuidle_kobj);
+			struct cpuidle_driver *drv =
+					cpuidle_get_cpu_driver(dev);
+
+			if (drv->disable_callback)
+				drv->disable_callback(dev, drv);
+		}
+	}
 
 	return ret;
 }
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index bb9a0db89..8a0e54bd0 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -119,6 +119,10 @@ struct cpuidle_driver {
 
 	/* the driver handles the cpus in cpumask */
 	struct cpumask		*cpumask;
+
+	void (*disable_callback)(struct cpuidle_device *dev,
+				struct cpuidle_driver *drv);
+
 };
 
 #ifdef CONFIG_CPU_IDLE
-- 
2.17.1


* [RFC v3 3/3] cpuidle-powernv : Recompute the idle-state timeouts when state usage is enabled/disabled
From: Abhishek Goel @ 2019-07-04  9:18 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev, linux-pm
  Cc: npiggin, rjw, daniel.lezcano, mpe, ego, dja, Abhishek Goel

The disable callback can be used to compute the timeout for other states
whenever a state is enabled or disabled. We store the computed timeout
in the "timeout" field of the cpuidle state structure. Thus, we compute
the timeout only when some state is enabled or disabled, and not every
time in the fast idle path.
We also use the computed timeout to get the timeout for snooze, thus
getting rid of get_snooze_timeout in the snooze loop.
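
As an illustration (the state layout is the common powernv one, but the
indices are an assumption), after a user disables stop1 via sysfs:

	echo 1 > /sys/devices/system/cpu/cpu0/cpuidle/state2/disable

the callback walks the table once, and each state's cached timeout is
derived from the next deeper state that is still enabled:

	state        enabled   timeout derived from
	snooze       yes       stop0_lite
	stop0_lite   yes       stop2 (stop1 is skipped)
	stop1        no        stop2
	stop2        yes       0 (deepest state, no forced wakeup)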

Signed-off-by: Abhishek Goel <huntbag@linux.vnet.ibm.com>
---
 drivers/cpuidle/cpuidle-powernv.c | 35 +++++++++++--------------------
 include/linux/cpuidle.h           |  1 +
 2 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index f51478460..7350f404a 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -45,7 +45,6 @@ struct stop_psscr_table {
 static struct stop_psscr_table stop_psscr_table[CPUIDLE_STATE_MAX] __read_mostly;
 
 static u64 default_snooze_timeout __read_mostly;
-static bool snooze_timeout_en __read_mostly;
 
 static u64 forced_wakeup_timeout(struct cpuidle_device *dev,
 				 struct cpuidle_driver *drv,
@@ -67,26 +66,13 @@ static u64 forced_wakeup_timeout(struct cpuidle_device *dev,
 	return 0;
 }
 
-static u64 get_snooze_timeout(struct cpuidle_device *dev,
-			      struct cpuidle_driver *drv,
-			      int index)
+static void pnv_disable_callback(struct cpuidle_device *dev,
+				 struct cpuidle_driver *drv)
 {
 	int i;
 
-	if (unlikely(!snooze_timeout_en))
-		return default_snooze_timeout;
-
-	for (i = index + 1; i < drv->state_count; i++) {
-		struct cpuidle_state *s = &drv->states[i];
-		struct cpuidle_state_usage *su = &dev->states_usage[i];
-
-		if (s->disabled || su->disable)
-			continue;
-
-		return s->target_residency * tb_ticks_per_usec;
-	}
-
-	return default_snooze_timeout;
+	for (i = 0; i < drv->state_count; i++)
+		drv->states[i].timeout = forced_wakeup_timeout(dev, drv, i);
 }
 
 static int snooze_loop(struct cpuidle_device *dev,
@@ -94,16 +80,20 @@ static int snooze_loop(struct cpuidle_device *dev,
 			int index)
 {
 	u64 snooze_exit_time;
+	u64 snooze_timeout = drv->states[index].timeout;
+
+	if (!snooze_timeout)
+		snooze_timeout = default_snooze_timeout;
 
 	set_thread_flag(TIF_POLLING_NRFLAG);
 
 	local_irq_enable();
 
-	snooze_exit_time = get_tb() + get_snooze_timeout(dev, drv, index);
+	snooze_exit_time = get_tb() + snooze_timeout;
 	ppc64_runlatch_off();
 	HMT_very_low();
 	while (!need_resched()) {
-		if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
+		if (get_tb() > snooze_exit_time) {
 			/*
 			 * Task has not woken up but we are exiting the polling
 			 * loop anyway. Require a barrier after polling is
@@ -168,7 +158,7 @@ static int stop_loop(struct cpuidle_device *dev,
 	u64 timeout_tb;
 	int forced_wakeup = 0;
 
-	timeout_tb = forced_wakeup_timeout(dev, drv, index);
+	timeout_tb = drv->states[index].timeout;
 	if (timeout_tb)
 		forced_wakeup = set_dec_before_idle(timeout_tb);
 
@@ -255,6 +245,7 @@ static int powernv_cpuidle_driver_init(void)
 	 */
 
 	drv->cpumask = (struct cpumask *)cpu_present_mask;
+	drv->disable_callback = pnv_disable_callback;
 
 	return 0;
 }
@@ -414,8 +405,6 @@ static int powernv_idle_probe(void)
 		/* Device tree can indicate more idle states */
 		max_idle_state = powernv_add_idle_states();
 		default_snooze_timeout = TICK_USEC * tb_ticks_per_usec;
-		if (max_idle_state > 1)
-			snooze_timeout_en = true;
  	} else
  		return -ENODEV;
 
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index 8a0e54bd0..31662b657 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -50,6 +50,7 @@ struct cpuidle_state {
 	int		power_usage; /* in mW */
 	unsigned int	target_residency; /* in US */
 	bool		disabled; /* disabled on all CPUs */
+	unsigned long long timeout; /* timeout for exiting out of a state */
 
 	int (*enter)	(struct cpuidle_device *dev,
 			struct cpuidle_driver *drv,
-- 
2.17.1


* Re: [PATCH v3 1/3] cpuidle-powernv : forced wakeup for stop states
From: Nicholas Piggin @ 2019-07-07 10:13 UTC (permalink / raw)
  To: Abhishek Goel, linux-kernel, linux-pm, linuxppc-dev
  Cc: daniel.lezcano, dja, ego, mpe, rjw

Abhishek Goel's on July 4, 2019 7:18 pm:
> Currently, the cpuidle governors determine what idle state a idling CPU
> should enter into based on heuristics that depend on the idle history on
> that CPU. Given that no predictive heuristic is perfect, there are cases
> where the governor predicts a shallow idle state, hoping that the CPU will
> be busy soon. However, if no new workload is scheduled on that CPU in the
> near future, the CPU may end up in the shallow state.
> 
> This is problematic, when the predicted state in the aforementioned
> scenario is a shallow stop state on a tickless system. As we might get
> stuck into shallow states for hours, in absence of ticks or interrupts.
> 
> To address this, We forcefully wakeup the cpu by setting the
> decrementer. The decrementer is set to a value that corresponds with the
> residency of the next available state. Thus firing up a timer that will
> forcefully wakeup the cpu. Few such iterations will essentially train the
> governor to select a deeper state for that cpu, as the timer here
> corresponds to the next available cpuidle state residency. Thus, cpu will
> eventually end up in the deepest possible state.
> 
> Signed-off-by: Abhishek Goel <huntbag@linux.vnet.ibm.com>
> ---
> 
> Auto-promotion
>  v1 : started as auto promotion logic for cpuidle states in generic
> driver
>  v2 : Removed timeout_needed and rebased the code to upstream kernel
> Forced-wakeup
>  v1 : New patch with name of forced wakeup started
>  v2 : Extending the forced wakeup logic for all states. Setting the
> decrementer instead of queuing up a hrtimer to implement the logic.
>  v3 : Cleanly handle setting/resetting of decrementer so as to not break
> irq work 
> 
>  arch/powerpc/include/asm/time.h   |  2 ++
>  arch/powerpc/kernel/time.c        | 40 +++++++++++++++++++++++++++++++
>  drivers/cpuidle/cpuidle-powernv.c | 32 +++++++++++++++++++++++++
>  3 files changed, 74 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
> index 54f4ec1f9..a3bd4f3c0 100644
> --- a/arch/powerpc/include/asm/time.h
> +++ b/arch/powerpc/include/asm/time.h
> @@ -188,6 +188,8 @@ static inline unsigned long tb_ticks_since(unsigned long tstamp)
>  extern u64 mulhdu(u64, u64);
>  #endif
>  
> +extern int set_dec_before_idle(u64 timeout);
> +extern void reset_dec_after_idle(void);
>  extern void div128_by_32(u64 dividend_high, u64 dividend_low,
>  			 unsigned divisor, struct div_result *dr);
>  
> diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
> index 694522308..814de3469 100644
> --- a/arch/powerpc/kernel/time.c
> +++ b/arch/powerpc/kernel/time.c
> @@ -576,6 +576,46 @@ void arch_irq_work_raise(void)
>  
>  #endif /* CONFIG_IRQ_WORK */
>  
> +/*
> + * Returns 1 if we have reprogrammed the decrementer for idle.
> + * Returns 0 if the decrementer is unchanged.
> + */
> +int set_dec_before_idle(u64 timeout)
> +{
> +	u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
> +	u64 now = get_tb_or_rtc();
> +
> +	/*
> +	 * Ensure that the timeout expires at least one microsecond
> +	 * before the current decrementer value. Else, we will
> +	 * unnecessarily wake up again within a microsecond.
> +	 */
> +	if (now + timeout + 512 > *next_tb)

I would pass this 512 in as a parameter and put the comment in the
idle code. Timer code does not know/care.

Maybe return bool and call it try_set_dec_before_idle.

> +		return 0;
> +
> +	set_dec(timeout);

This needs to have

  if (test_irq_work_pending())
      set_dec(1);

here AFAIKS

> +
> +	return 1;
> +}
> +
> +void reset_dec_after_idle(void)
> +{
> +	u64 now;
> +	u64 *next_tb;
> +
> +	if (test_irq_work_pending())
> +		return;
> +
> +	now = get_tb_or_rtc();
> +	next_tb = this_cpu_ptr(&decrementers_next_tb);
> +	if (now >= *next_tb)
> +		return;

Are you sure it's okay to escape early in this case?

Thanks,
Nick
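
A sketch of the interface the above comments point at (bool return,
caller-supplied slack, irq-work check after set_dec()); the names and
the call site are illustrative, not final code:

	bool try_set_dec_before_idle(u64 timeout, u64 slack)
	{
		u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
		u64 now = get_tb_or_rtc();

		if (now + timeout + slack > *next_tb)
			return false;

		set_dec(timeout);
		/* don't lose irq work raised before we reprogrammed */
		if (test_irq_work_pending())
			set_dec(1);

		return true;
	}

	/* caller in cpuidle-powernv supplies the slack and the rationale:
	 * 512 timebase ticks == 1us, to avoid an immediate second wakeup */
	forced_wakeup = try_set_dec_before_idle(timeout_tb, 512);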

* Re: [PATCH v3 1/3] cpuidle-powernv : forced wakeup for stop states
From: Abhishek @ 2019-07-09  5:32 UTC (permalink / raw)
  To: Nicholas Piggin, linux-kernel, linux-pm, linuxppc-dev
  Cc: daniel.lezcano, dja, ego, mpe, rjw

Hi Nick,

Will post next version with the changes you have suggested.
There is a comment below.

On 07/07/2019 03:43 PM, Nicholas Piggin wrote:
> Abhishek Goel's on July 4, 2019 7:18 pm:
>> Currently, the cpuidle governors determine what idle state a idling CPU
>> should enter into based on heuristics that depend on the idle history on
>> that CPU. Given that no predictive heuristic is perfect, there are cases
>> where the governor predicts a shallow idle state, hoping that the CPU will
>> be busy soon. However, if no new workload is scheduled on that CPU in the
>> near future, the CPU may end up in the shallow state.
>>
>> This is problematic, when the predicted state in the aforementioned
>> scenario is a shallow stop state on a tickless system. As we might get
>> stuck into shallow states for hours, in absence of ticks or interrupts.
>>
>> To address this, We forcefully wakeup the cpu by setting the
>> decrementer. The decrementer is set to a value that corresponds with the
>> residency of the next available state. Thus firing up a timer that will
>> forcefully wakeup the cpu. Few such iterations will essentially train the
>> governor to select a deeper state for that cpu, as the timer here
>> corresponds to the next available cpuidle state residency. Thus, cpu will
>> eventually end up in the deepest possible state.
>>
>> Signed-off-by: Abhishek Goel <huntbag@linux.vnet.ibm.com>
>> ---
>>
>> Auto-promotion
>>   v1 : started as auto promotion logic for cpuidle states in generic
>> driver
>>   v2 : Removed timeout_needed and rebased the code to upstream kernel
>> Forced-wakeup
>>   v1 : New patch with name of forced wakeup started
>>   v2 : Extending the forced wakeup logic for all states. Setting the
>> decrementer instead of queuing up a hrtimer to implement the logic.
>>   v3 : Cleanly handle setting/resetting of decrementer so as to not break
>> irq work
>>
>>   arch/powerpc/include/asm/time.h   |  2 ++
>>   arch/powerpc/kernel/time.c        | 40 +++++++++++++++++++++++++++++++
>>   drivers/cpuidle/cpuidle-powernv.c | 32 +++++++++++++++++++++++++
>>   3 files changed, 74 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
>> index 54f4ec1f9..a3bd4f3c0 100644
>> --- a/arch/powerpc/include/asm/time.h
>> +++ b/arch/powerpc/include/asm/time.h
>> @@ -188,6 +188,8 @@ static inline unsigned long tb_ticks_since(unsigned long tstamp)
>>   extern u64 mulhdu(u64, u64);
>>   #endif
>>   
>> +extern int set_dec_before_idle(u64 timeout);
>> +extern void reset_dec_after_idle(void);
>>   extern void div128_by_32(u64 dividend_high, u64 dividend_low,
>>   			 unsigned divisor, struct div_result *dr);
>>   
>> diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
>> index 694522308..814de3469 100644
>> --- a/arch/powerpc/kernel/time.c
>> +++ b/arch/powerpc/kernel/time.c
>> @@ -576,6 +576,46 @@ void arch_irq_work_raise(void)
>>   
>>   #endif /* CONFIG_IRQ_WORK */
>>   
>> +/*
>> + * Returns 1 if we have reprogrammed the decrementer for idle.
>> + * Returns 0 if the decrementer is unchanged.
>> + */
>> +int set_dec_before_idle(u64 timeout)
>> +{
>> +	u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
>> +	u64 now = get_tb_or_rtc();
>> +
>> +	/*
>> +	 * Ensure that the timeout expires at least one microsecond
>> +	 * before the current decrementer value. Else, we will
>> +	 * unnecessarily wake up again within a microsecond.
>> +	 */
>> +	if (now + timeout + 512 > *next_tb)
> I would pass this 512 in as a parameter and put the comment in the
> idle code. Timer code does not know/care.
>
> Maybe return bool and call it try_set_dec_before_idle.
>> +		return 0;
>> +
>> +	set_dec(timeout);
> This needs to have
>
>    if (test_irq_work_pending())
>        set_dec(1);
>
> here AFAIKS
>
>> +
>> +	return 1;
>> +}
>> +
>> +void reset_dec_after_idle(void)
>> +{
>> +	u64 now;
>> +	u64 *next_tb;
>> +
>> +	if (test_irq_work_pending())
>> +		return;
>> +
>> +	now = get_tb_or_rtc();
>> +	next_tb = this_cpu_ptr(&decrementers_next_tb);
>> +	if (now >= *next_tb)
>> +		return;
> Are you sure it's okay to escape early in this case?

Yeah, it looks safe. In power9_idle_type, we call irq_set_pending_from_srr1,
which sets irq_happened. If the reason is IRQ_DEC, decrementer_check_overflow
will be called from __check_irq_replay, which will set the decrementer to a
positive valid value.
Also, we typically disable MSR[EE] before entering stop. And if a decrementer
wakes us up, the check for pending interrupts is done before we re-enable EE.
We finally reset the decrementer to a positive value before we set EE=1.
> Thanks,
> Nick
>

Thanks,
Abhishek


Thread overview:
2019-07-04  9:18 [PATCH v3 0/3] Forced-wakeup for stop states on Powernv Abhishek Goel
2019-07-04  9:18 ` [PATCH v3 1/3] cpuidle-powernv : forced wakeup for stop states Abhishek Goel
2019-07-07 10:13   ` Nicholas Piggin
2019-07-09  5:32     ` Abhishek
2019-07-04  9:18 ` [RFC v3 2/3] cpuidle : Add callback whenever a state usage is enabled/disabled Abhishek Goel
2019-07-04  9:18 ` [RFC v3 3/3] cpuidle-powernv : Recompute the idle-state timeouts when state usage is enabled/disabled Abhishek Goel