From: Abhishek Goel <huntbag@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-pm@vger.kernel.org
Cc: npiggin@gmail.com, rjw@rjwysocki.net, daniel.lezcano@linaro.org,
	mpe@ellerman.id.au, ego@linux.vnet.ibm.com, dja@axtens.net,
	Abhishek Goel <huntbag@linux.vnet.ibm.com>
Subject: [PATCH v3 1/3] cpuidle-powernv : forced wakeup for stop states
Date: Thu,  4 Jul 2019 04:18:25 -0500	[thread overview]
Message-ID: <20190704091827.19555-2-huntbag@linux.vnet.ibm.com> (raw)
In-Reply-To: <20190704091827.19555-1-huntbag@linux.vnet.ibm.com>

Currently, the cpuidle governors determine which idle state an idling CPU
should enter based on heuristics that depend on the idle history of that
CPU. Given that no predictive heuristic is perfect, there are cases where
the governor predicts a shallow idle state, hoping that the CPU will be
busy soon. However, if no new workload is scheduled on that CPU in the
near future, the CPU may remain stuck in that shallow state.

This is problematic when the predicted state in the aforementioned
scenario is a shallow stop state on a tickless system: in the absence of
ticks or interrupts, the CPU might stay stuck in a shallow state for
hours.

To address this, we force a wakeup of the CPU by programming the
decrementer. The decrementer is set to a value that corresponds to the
residency of the next available state, which effectively arms a timer
that forcefully wakes up the CPU. A few such iterations essentially train
the governor to select a deeper state for that CPU, since the timeout
corresponds to the residency of the next available cpuidle state. The CPU
thus eventually ends up in the deepest possible state.
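
As a worked example (hypothetical numbers, and assuming the POWER9
timebase of 512 ticks per microsecond): for a next enabled state with a
target residency of 100 us and an exit latency of 10 us, the timeout
programmed into the decrementer would be

	timeout_tb = (target_residency + 2 * exit_latency) * tb_ticks_per_usec
	           = (100 + 2 * 10) * 512
	           = 61440 timebase ticks (~120 us)

which is the computation performed by forced_wakeup_timeout() below.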

Signed-off-by: Abhishek Goel <huntbag@linux.vnet.ibm.com>
---

Auto-promotion
 v1 : Started as auto-promotion logic for cpuidle states in the generic
driver
 v2 : Removed timeout_needed and rebased the code to the upstream kernel
Forced-wakeup
 v1 : Started a new patch under the name forced wakeup
 v2 : Extended the forced-wakeup logic to all states. Set the
decrementer instead of queuing up an hrtimer to implement the logic.
 v3 : Cleanly handle setting/resetting of the decrementer so as to not
break irq work

 arch/powerpc/include/asm/time.h   |  2 ++
 arch/powerpc/kernel/time.c        | 40 +++++++++++++++++++++++++++++++
 drivers/cpuidle/cpuidle-powernv.c | 32 +++++++++++++++++++++++++
 3 files changed, 74 insertions(+)

diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index 54f4ec1f9..a3bd4f3c0 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -188,6 +188,8 @@ static inline unsigned long tb_ticks_since(unsigned long tstamp)
 extern u64 mulhdu(u64, u64);
 #endif
 
+extern int set_dec_before_idle(u64 timeout);
+extern void reset_dec_after_idle(void);
 extern void div128_by_32(u64 dividend_high, u64 dividend_low,
 			 unsigned divisor, struct div_result *dr);
 
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 694522308..814de3469 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -576,6 +576,46 @@ void arch_irq_work_raise(void)
 
 #endif /* CONFIG_IRQ_WORK */
 
+/*
+ * Returns 1 if we have reprogrammed the decrementer for idle.
+ * Returns 0 if the decrementer is unchanged.
+ */
+int set_dec_before_idle(u64 timeout)
+{
+	u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
+	u64 now = get_tb_or_rtc();
+
+	/*
+	 * Ensure that the timeout expires at least one microsecond
+	 * before the next scheduled decrementer event. Otherwise, we
+	 * would unnecessarily wake up again within a microsecond.
+	 */
+	if (now + timeout + 512 > *next_tb)
+		return 0;
+
+	set_dec(timeout);
+
+	return 1;
+}
+
+void reset_dec_after_idle(void)
+{
+	u64 now;
+	u64 *next_tb;
+
+	if (test_irq_work_pending())
+		return;
+
+	now = get_tb_or_rtc();
+	next_tb = this_cpu_ptr(&decrementers_next_tb);
+	if (now >= *next_tb)
+		return;
+
+	set_dec(*next_tb - now);
+	if (test_irq_work_pending())
+		set_dec(1);
+}
+
 /*
  * timer_interrupt - gets called when the decrementer overflows,
  * with interrupts disabled.
diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index 84b1ebe21..f51478460 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -21,6 +21,7 @@
 #include <asm/opal.h>
 #include <asm/runlatch.h>
 #include <asm/cpuidle.h>
+#include <asm/time.h>
 
 /*
  * Expose only those Hardware idle states via the cpuidle framework
@@ -46,6 +47,26 @@ static struct stop_psscr_table stop_psscr_table[CPUIDLE_STATE_MAX] __read_mostly
 static u64 default_snooze_timeout __read_mostly;
 static bool snooze_timeout_en __read_mostly;
 
+static u64 forced_wakeup_timeout(struct cpuidle_device *dev,
+				 struct cpuidle_driver *drv,
+				 int index)
+{
+	int i;
+
+	for (i = index + 1; i < drv->state_count; i++) {
+		struct cpuidle_state *s = &drv->states[i];
+		struct cpuidle_state_usage *su = &dev->states_usage[i];
+
+		if (s->disabled || su->disable)
+			continue;
+
+		return (s->target_residency + 2 * s->exit_latency) *
+			tb_ticks_per_usec;
+	}
+
+	return 0;
+}
+
 static u64 get_snooze_timeout(struct cpuidle_device *dev,
 			      struct cpuidle_driver *drv,
 			      int index)
@@ -144,8 +165,19 @@ static int stop_loop(struct cpuidle_device *dev,
 		     struct cpuidle_driver *drv,
 		     int index)
 {
+	u64 timeout_tb;
+	int forced_wakeup = 0;
+
+	timeout_tb = forced_wakeup_timeout(dev, drv, index);
+	if (timeout_tb)
+		forced_wakeup = set_dec_before_idle(timeout_tb);
+
 	power9_idle_type(stop_psscr_table[index].val,
 			 stop_psscr_table[index].mask);
+
+	if (forced_wakeup)
+		reset_dec_after_idle();
+
 	return index;
 }
 
-- 
2.17.1


Thread overview: 12+ messages

2019-07-04  9:18 [PATCH v3 0/3] Forced-wakeup for stop states on Powernv Abhishek Goel
2019-07-04  9:18 ` [PATCH v3 1/3] cpuidle-powernv : forced wakeup for stop states Abhishek Goel [this message]
2019-07-07 10:13   ` Nicholas Piggin
2019-07-09  5:32     ` Abhishek
2019-07-04  9:18 ` [RFC v3 2/3] cpuidle : Add callback whenever a state usage is enabled/disabled Abhishek Goel
2019-07-04  9:18 ` [RFC v3 3/3] cpuidle-powernv : Recompute the idle-state timeouts when " Abhishek Goel
