Linux-PM Archive on lore.kernel.org
* [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems
@ 2019-06-20 11:58 Daniel Lezcano
  2019-06-22  3:52 ` kbuild test robot
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Daniel Lezcano @ 2019-06-20 11:58 UTC (permalink / raw)
  To: rafael
  Cc: linux-kernel, Rafael J. Wysocki, Thomas Gleixner,
	Greg Kroah-Hartman, linux-pm

The objective is the same for all governors: save energy. In the end,
however, the menu, ladder and teo governors aim to improve performance
with an acceptable energy drop for some workloads identified on
servers and desktops (with the help of firmware).

The ladder governor is designed for servers with a periodic tick
configuration.

The menu governor does not behave well on mobile platforms; its energy
saving for multimedia workloads is worse than picking an idle state at
random.

The teo governor acts efficiently: it promotes shallower states for
performance, which is perfect for servers and desktops but inadequate
for mobile, where the energy consumed is too high.

It is very difficult to change these governors for embedded systems
without impacting performance on servers/desktops or ruining the
optimizations for the workloads on those platforms.

The mobile governor is a new governor targeting battery-powered
embedded systems, where energy saving has a higher priority than on
servers or desktops. This governor aims to save as much energy as
possible within a performance degradation tolerance.

In this way, we can optimize the governor for specific mobile workloads
and, more generally, embedded systems, without impacting other platforms.

The mobile governor is built on top of the paradigm 'separate the wake
up source signals and analyze them'. Three categories of wake up
signals are identified:
 - deterministic : timers
 - predictable : most device interrupts
 - unpredictable : IPI rescheduling, random signals

The last category needs an iterative approach and help from the
scheduler to give more input to the governor.

The governor uses the irq timings framework to predict the next
interrupt occurrence on the current CPU, combined with the next timer.
It is well suited to mobile and, more generally, embedded systems
where interrupts are usually pinned to one CPU and where power matters
more than performance.

Multimedia applications on embedded systems spawn multiple threads
which are migrated across the different CPUs and wake each other up.
In order to catch this situation, we also have to track the idle task
rescheduling duration, with a relative degree of confidence, as the
scheduler is involved in the task migrations. The resched information
is made available to the governor via the reflect callback.

The governor begins with a clean foundation, basing the prediction on
the irq behavior returned by the irq timings, the timers and the idle
task rescheduling. The advantage of this approach is that we have a
full view of the wakeup sources, as we identify them separately, and
can then control the situation without relying on biased heuristics.

This first iteration provides a basic prediction but already achieves,
on some mobile platforms, better energy consumption with better
performance for multimedia workloads.

The scheduling aspect will be optimized iteratively, with
non-regression testing of the previously identified workloads on an
Android reference platform.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
---
 drivers/cpuidle/Kconfig            |  11 ++-
 drivers/cpuidle/governors/Makefile |   1 +
 drivers/cpuidle/governors/mobile.c | 151 +++++++++++++++++++++++++++++
 3 files changed, 162 insertions(+), 1 deletion(-)
 create mode 100644 drivers/cpuidle/governors/mobile.c

diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
index a4ac31e4a58c..e2376d85e288 100644
--- a/drivers/cpuidle/Kconfig
+++ b/drivers/cpuidle/Kconfig
@@ -5,7 +5,7 @@ config CPU_IDLE
 	bool "CPU idle PM support"
 	default y if ACPI || PPC_PSERIES
 	select CPU_IDLE_GOV_LADDER if (!NO_HZ && !NO_HZ_IDLE)
-	select CPU_IDLE_GOV_MENU if (NO_HZ || NO_HZ_IDLE) && !CPU_IDLE_GOV_TEO
+	select CPU_IDLE_GOV_MENU if (NO_HZ || NO_HZ_IDLE) && !CPU_IDLE_GOV_TEO && !CPU_IDLE_GOV_MOBILE
 	help
 	  CPU idle is a generic framework for supporting software-controlled
 	  idle processor power management.  It includes modular cross-platform
@@ -33,6 +33,15 @@ config CPU_IDLE_GOV_TEO
 	  Some workloads benefit from using it and it generally should be safe
 	  to use.  Say Y here if you are not happy with the alternatives.
 
+config CPU_IDLE_GOV_MOBILE
+	bool "Mobile governor"
+	select IRQ_TIMINGS
+	help
+	  The mobile governor is based on irq timings measurements and
+	  pattern search combined with the next timer. This governor
+	  is well suited to embedded systems where the interrupts are
+	  grouped on a single core and power is the priority.
+
 config DT_IDLE_STATES
 	bool
 
diff --git a/drivers/cpuidle/governors/Makefile b/drivers/cpuidle/governors/Makefile
index 42f44cc610dd..f09da7178670 100644
--- a/drivers/cpuidle/governors/Makefile
+++ b/drivers/cpuidle/governors/Makefile
@@ -6,3 +6,4 @@
 obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
 obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
 obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o
+obj-$(CONFIG_CPU_IDLE_GOV_MOBILE) += mobile.o
diff --git a/drivers/cpuidle/governors/mobile.c b/drivers/cpuidle/governors/mobile.c
new file mode 100644
index 000000000000..8fda0f9b960b
--- /dev/null
+++ b/drivers/cpuidle/governors/mobile.c
@@ -0,0 +1,151 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019, Linaro Ltd
+ * Author: Daniel Lezcano <daniel.lezcano@linaro.org>
+ */
+#include <linux/cpuidle.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/tick.h>
+#include <linux/interrupt.h>
+#include <linux/sched/clock.h>
+
+struct mobile_device {
+	u64 idle_ema_avg;
+	u64 idle_total;
+	unsigned long last_jiffies;
+};
+
+#define EMA_ALPHA_VAL		64
+#define EMA_ALPHA_SHIFT		7
+#define MAX_RESCHED_INTERVAL_MS	100
+
+static DEFINE_PER_CPU(struct mobile_device, mobile_devices);
+
+static int mobile_ema_new(s64 value, s64 ema_old)
+{
+	if (likely(ema_old))
+		return ema_old + (((value - ema_old) * EMA_ALPHA_VAL) >>
+				  EMA_ALPHA_SHIFT);
+	return value;
+}
+
+static void mobile_reflect(struct cpuidle_device *dev, int index)
+{
+	struct mobile_device *mobile_dev = this_cpu_ptr(&mobile_devices);
+	struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
+	struct cpuidle_state *s = &drv->states[index];
+	int residency;
+
+	/*
+	 * The idle task was not rescheduled since
+	 * MAX_RESCHED_INTERVAL_MS, let's consider the duration is
+	 * long enough to clear our stats.
+	 */
+	if (time_after(jiffies, mobile_dev->last_jiffies +
+		       msecs_to_jiffies(MAX_RESCHED_INTERVAL_MS)))
+		mobile_dev->idle_ema_avg = 0;
+
+	/*
+	 * Sum all the residencies in order to compute the total
+	 * duration of the idle task.
+	 */
+	residency = dev->last_residency - s->exit_latency;
+	if (residency > 0)
+		mobile_dev->idle_total += residency;
+
+	/*
+	 * We exited the idle state with the need_resched() flag, the
+	 * idle task will be rescheduled, so store the duration the
+	 * idle task was scheduled in an exponential moving average and
+	 * reset the total of the idle duration.
+	 */
+	if (need_resched()) {
+		mobile_dev->idle_ema_avg = mobile_ema_new(mobile_dev->idle_total,
+						      mobile_dev->idle_ema_avg);
+		mobile_dev->idle_total = 0;
+		mobile_dev->last_jiffies = jiffies;
+	}
+}
+
+static int mobile_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+		       bool *stop_tick)
+{
+	struct mobile_device *mobile_dev = this_cpu_ptr(&mobile_devices);
+	int latency_req = cpuidle_governor_latency_req(dev->cpu);
+	int i, index = 0;
+	ktime_t delta_next;
+	u64 now, irq_length, timer_length;
+	u64 idle_duration_us;
+
+	/*
+	 * Get the present time as reference for the next steps
+	 */
+	now = local_clock();
+
+	/*
+	 * Get the next interrupt event giving the 'now' as a
+	 * reference, if the next event appears to have already
+	 * expired then we get the 'now' returned which ends up with a
+	 * zero duration.
+	 */
+	irq_length = irq_timings_next_event(now) - now;
+
+	/*
+	 * Get the timer duration before expiration.
+	 */
+	timer_length = ktime_to_ns(tick_nohz_get_sleep_length(&delta_next));
+
+	/*
+	 * Get the smallest duration between the timer and the irq next event.
+	 */
+	idle_duration_us = min_t(u64, irq_length, timer_length) / NSEC_PER_USEC;
+
+	/*
+	 * Get the idle task duration average if the information is
+	 * available.
+	 */
+	if (mobile_dev->idle_ema_avg)
+		idle_duration_us = min_t(u64, idle_duration_us,
+					 mobile_dev->idle_ema_avg);
+
+	for (i = 0; i < drv->state_count; i++) {
+		struct cpuidle_state *s = &drv->states[i];
+		struct cpuidle_state_usage *su = &dev->states_usage[i];
+
+		if (s->disabled || su->disable)
+			continue;
+
+		if (s->exit_latency > latency_req)
+			break;
+
+		if (idle_duration_us > s->exit_latency)
+			idle_duration_us = idle_duration_us - s->exit_latency;
+
+		if (s->target_residency > idle_duration_us)
+			break;
+
+		index = i;
+	}
+
+	if (!index)
+		*stop_tick = false;
+
+	return index;
+}
+
+static struct cpuidle_governor mobile_governor = {
+	.name =		"mobile",
+	.rating =	20,
+	.select =	mobile_select,
+	.reflect =	mobile_reflect,
+};
+
+static int __init init_governor(void)
+{
+	irq_timings_enable();
+	return cpuidle_register_governor(&mobile_governor);
+}
+
+postcore_initcall(init_governor);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems
  2019-06-20 11:58 [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems Daniel Lezcano
@ 2019-06-22  3:52 ` kbuild test robot
  2019-06-22 11:11 ` kbuild test robot
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: kbuild test robot @ 2019-06-22  3:52 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: kbuild-all, rafael, linux-kernel, Rafael J. Wysocki,
	Thomas Gleixner, Greg Kroah-Hartman,
	open list:CPU IDLE TIME MANAGEMENT FRAMEWORK

Hi Daniel,

I love your patch! Perhaps something to improve:

[auto build test WARNING on pm/linux-next]
[also build test WARNING on v5.2-rc5 next-20190621]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Daniel-Lezcano/cpuidle-drivers-mobile-Add-new-governor-for-mobile-embedded-systems/20190622-064303
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.1-rc1-7-g2b96cd8-dirty
        make ARCH=x86_64 allmodconfig
        make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)


vim +502 kernel/irq/timings.c

bbba0e7c Daniel Lezcano      2019-03-28  437  
e1c92149 Daniel Lezcano      2017-06-23  438  /**
e1c92149 Daniel Lezcano      2017-06-23  439   * irq_timings_next_event - Return when the next event is supposed to arrive
e1c92149 Daniel Lezcano      2017-06-23  440   *
e1c92149 Daniel Lezcano      2017-06-23  441   * During the last busy cycle, the number of interrupts is incremented
e1c92149 Daniel Lezcano      2017-06-23  442   * and stored in the irq_timings structure. This information is
e1c92149 Daniel Lezcano      2017-06-23  443   * necessary to:
e1c92149 Daniel Lezcano      2017-06-23  444   *
e1c92149 Daniel Lezcano      2017-06-23  445   * - know if the index in the table wrapped up:
e1c92149 Daniel Lezcano      2017-06-23  446   *
e1c92149 Daniel Lezcano      2017-06-23  447   *      If more than the array size interrupts happened during the
e1c92149 Daniel Lezcano      2017-06-23  448   *      last busy/idle cycle, the index wrapped up and we have to
e1c92149 Daniel Lezcano      2017-06-23  449   *      begin with the next element in the array which is the last one
e1c92149 Daniel Lezcano      2017-06-23  450   *      in the sequence, otherwise it is a the index 0.
e1c92149 Daniel Lezcano      2017-06-23  451   *
e1c92149 Daniel Lezcano      2017-06-23  452   * - have an indication of the interrupts activity on this CPU
e1c92149 Daniel Lezcano      2017-06-23  453   *   (eg. irq/sec)
e1c92149 Daniel Lezcano      2017-06-23  454   *
e1c92149 Daniel Lezcano      2017-06-23  455   * The values are 'consumed' after inserting in the statistical model,
e1c92149 Daniel Lezcano      2017-06-23  456   * thus the count is reinitialized.
e1c92149 Daniel Lezcano      2017-06-23  457   *
e1c92149 Daniel Lezcano      2017-06-23  458   * The array of values **must** be browsed in the time direction, the
e1c92149 Daniel Lezcano      2017-06-23  459   * timestamp must increase between an element and the next one.
e1c92149 Daniel Lezcano      2017-06-23  460   *
e1c92149 Daniel Lezcano      2017-06-23  461   * Returns a nanosec time based estimation of the earliest interrupt,
e1c92149 Daniel Lezcano      2017-06-23  462   * U64_MAX otherwise.
e1c92149 Daniel Lezcano      2017-06-23  463   */
e1c92149 Daniel Lezcano      2017-06-23  464  u64 irq_timings_next_event(u64 now)
e1c92149 Daniel Lezcano      2017-06-23  465  {
bbba0e7c Daniel Lezcano      2019-03-28  466  	struct irq_timings *irqts = this_cpu_ptr(&irq_timings);
bbba0e7c Daniel Lezcano      2019-03-28  467  	struct irqt_stat *irqs;
bbba0e7c Daniel Lezcano      2019-03-28  468  	struct irqt_stat __percpu *s;
bbba0e7c Daniel Lezcano      2019-03-28  469  	u64 ts, next_evt = U64_MAX;
bbba0e7c Daniel Lezcano      2019-03-28  470  	int i, irq = 0;
bbba0e7c Daniel Lezcano      2019-03-28  471  
e1c92149 Daniel Lezcano      2017-06-23  472  	/*
e1c92149 Daniel Lezcano      2017-06-23  473  	 * This function must be called with the local irq disabled in
e1c92149 Daniel Lezcano      2017-06-23  474  	 * order to prevent the timings circular buffer to be updated
e1c92149 Daniel Lezcano      2017-06-23  475  	 * while we are reading it.
e1c92149 Daniel Lezcano      2017-06-23  476  	 */
a934d4d1 Frederic Weisbecker 2017-11-06  477  	lockdep_assert_irqs_disabled();
e1c92149 Daniel Lezcano      2017-06-23  478  
bbba0e7c Daniel Lezcano      2019-03-28  479  	if (!irqts->count)
bbba0e7c Daniel Lezcano      2019-03-28  480  		return next_evt;
bbba0e7c Daniel Lezcano      2019-03-28  481  
bbba0e7c Daniel Lezcano      2019-03-28  482  	/*
bbba0e7c Daniel Lezcano      2019-03-28  483  	 * Number of elements in the circular buffer: If it happens it
bbba0e7c Daniel Lezcano      2019-03-28  484  	 * was flushed before, then the number of elements could be
bbba0e7c Daniel Lezcano      2019-03-28  485  	 * smaller than IRQ_TIMINGS_SIZE, so the count is used,
bbba0e7c Daniel Lezcano      2019-03-28  486  	 * otherwise the array size is used as we wrapped. The index
bbba0e7c Daniel Lezcano      2019-03-28  487  	 * begins from zero when we did not wrap. That could be done
bbba0e7c Daniel Lezcano      2019-03-28  488  	 * in a nicer way with the proper circular array structure
bbba0e7c Daniel Lezcano      2019-03-28  489  	 * type but with the cost of extra computation in the
bbba0e7c Daniel Lezcano      2019-03-28  490  	 * interrupt handler hot path. We choose efficiency.
bbba0e7c Daniel Lezcano      2019-03-28  491  	 *
bbba0e7c Daniel Lezcano      2019-03-28  492  	 * Inject measured irq/timestamp to the pattern prediction
bbba0e7c Daniel Lezcano      2019-03-28  493  	 * model while decrementing the counter because we consume the
bbba0e7c Daniel Lezcano      2019-03-28  494  	 * data from our circular buffer.
bbba0e7c Daniel Lezcano      2019-03-28  495  	 */
bbba0e7c Daniel Lezcano      2019-03-28  496  
bbba0e7c Daniel Lezcano      2019-03-28  497  	i = (irqts->count & IRQ_TIMINGS_MASK) - 1;
bbba0e7c Daniel Lezcano      2019-03-28  498  	irqts->count = min(IRQ_TIMINGS_SIZE, irqts->count);
bbba0e7c Daniel Lezcano      2019-03-28  499  
bbba0e7c Daniel Lezcano      2019-03-28  500  	for (; irqts->count > 0; irqts->count--, i = (i + 1) & IRQ_TIMINGS_MASK) {
bbba0e7c Daniel Lezcano      2019-03-28  501  		irq = irq_timing_decode(irqts->values[i], &ts);
bbba0e7c Daniel Lezcano      2019-03-28 @502  		s = idr_find(&irqt_stats, irq);
bbba0e7c Daniel Lezcano      2019-03-28  503  		if (s)
bbba0e7c Daniel Lezcano      2019-03-28  504  			irq_timings_store(irq, this_cpu_ptr(s), ts);
bbba0e7c Daniel Lezcano      2019-03-28  505  	}
bbba0e7c Daniel Lezcano      2019-03-28  506  
bbba0e7c Daniel Lezcano      2019-03-28  507  	/*
bbba0e7c Daniel Lezcano      2019-03-28  508  	 * Look in the list of interrupts' statistics, the earliest
bbba0e7c Daniel Lezcano      2019-03-28  509  	 * next event.
bbba0e7c Daniel Lezcano      2019-03-28  510  	 */
bbba0e7c Daniel Lezcano      2019-03-28 @511  	idr_for_each_entry(&irqt_stats, s, i) {
bbba0e7c Daniel Lezcano      2019-03-28  512  
bbba0e7c Daniel Lezcano      2019-03-28  513  		irqs = this_cpu_ptr(s);
bbba0e7c Daniel Lezcano      2019-03-28  514  
bbba0e7c Daniel Lezcano      2019-03-28  515  		ts = __irq_timings_next_event(irqs, i, now);
bbba0e7c Daniel Lezcano      2019-03-28  516  		if (ts <= now)
bbba0e7c Daniel Lezcano      2019-03-28  517  			return now;
bbba0e7c Daniel Lezcano      2019-03-28  518  
bbba0e7c Daniel Lezcano      2019-03-28  519  		if (ts < next_evt)
bbba0e7c Daniel Lezcano      2019-03-28  520  			next_evt = ts;
bbba0e7c Daniel Lezcano      2019-03-28  521  	}
bbba0e7c Daniel Lezcano      2019-03-28  522  
bbba0e7c Daniel Lezcano      2019-03-28  523  	return next_evt;
e1c92149 Daniel Lezcano      2017-06-23  524  }
e1c92149 Daniel Lezcano      2017-06-23  525  
e1c92149 Daniel Lezcano      2017-06-23  526  void irq_timings_free(int irq)
e1c92149 Daniel Lezcano      2017-06-23  527  {
e1c92149 Daniel Lezcano      2017-06-23  528  	struct irqt_stat __percpu *s;
e1c92149 Daniel Lezcano      2017-06-23  529  
e1c92149 Daniel Lezcano      2017-06-23  530  	s = idr_find(&irqt_stats, irq);
e1c92149 Daniel Lezcano      2017-06-23  531  	if (s) {
e1c92149 Daniel Lezcano      2017-06-23  532  		free_percpu(s);
e1c92149 Daniel Lezcano      2017-06-23  533  		idr_remove(&irqt_stats, irq);
e1c92149 Daniel Lezcano      2017-06-23  534  	}
e1c92149 Daniel Lezcano      2017-06-23  535  }
e1c92149 Daniel Lezcano      2017-06-23  536  
e1c92149 Daniel Lezcano      2017-06-23  537  int irq_timings_alloc(int irq)
e1c92149 Daniel Lezcano      2017-06-23  538  {
e1c92149 Daniel Lezcano      2017-06-23  539  	struct irqt_stat __percpu *s;
e1c92149 Daniel Lezcano      2017-06-23  540  	int id;
e1c92149 Daniel Lezcano      2017-06-23  541  
e1c92149 Daniel Lezcano      2017-06-23  542  	/*
e1c92149 Daniel Lezcano      2017-06-23  543  	 * Some platforms can have the same private interrupt per cpu,
e1c92149 Daniel Lezcano      2017-06-23  544  	 * so this function may be be called several times with the
e1c92149 Daniel Lezcano      2017-06-23  545  	 * same interrupt number. Just bail out in case the per cpu
e1c92149 Daniel Lezcano      2017-06-23  546  	 * stat structure is already allocated.
e1c92149 Daniel Lezcano      2017-06-23  547  	 */
e1c92149 Daniel Lezcano      2017-06-23  548  	s = idr_find(&irqt_stats, irq);
e1c92149 Daniel Lezcano      2017-06-23  549  	if (s)
e1c92149 Daniel Lezcano      2017-06-23  550  		return 0;
e1c92149 Daniel Lezcano      2017-06-23  551  
e1c92149 Daniel Lezcano      2017-06-23  552  	s = alloc_percpu(*s);
e1c92149 Daniel Lezcano      2017-06-23  553  	if (!s)
e1c92149 Daniel Lezcano      2017-06-23  554  		return -ENOMEM;
e1c92149 Daniel Lezcano      2017-06-23  555  
e1c92149 Daniel Lezcano      2017-06-23  556  	idr_preload(GFP_KERNEL);
e1c92149 Daniel Lezcano      2017-06-23 @557  	id = idr_alloc(&irqt_stats, s, irq, irq + 1, GFP_NOWAIT);

:::::: The code at line 502 was first introduced by commit
:::::: bbba0e7c5cdadb47a91edea1d5cd0caadbbb016f genirq/timings: Add array suffix computation code

:::::: TO: Daniel Lezcano <daniel.lezcano@linaro.org>
:::::: CC: Thomas Gleixner <tglx@linutronix.de>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation


* Re: [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems
  2019-06-20 11:58 [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems Daniel Lezcano
  2019-06-22  3:52 ` kbuild test robot
@ 2019-06-22 11:11 ` kbuild test robot
  2019-06-22 11:45 ` kbuild test robot
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: kbuild test robot @ 2019-06-22 11:11 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: kbuild-all, rafael, linux-kernel, Rafael J. Wysocki,
	Thomas Gleixner, Greg Kroah-Hartman,
	open list:CPU IDLE TIME MANAGEMENT FRAMEWORK

[-- Attachment #1: Type: text/plain, Size: 1249 bytes --]

Hi Daniel,

I love your patch! Yet something to improve:

[auto build test ERROR on pm/linux-next]
[also build test ERROR on v5.2-rc5]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Daniel-Lezcano/cpuidle-drivers-mobile-Add-new-governor-for-mobile-embedded-systems/20190622-064303
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
config: mips-allmodconfig (attached as .config)
compiler: mips-linux-gcc (GCC) 7.4.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.4.0 make.cross ARCH=mips 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   drivers/cpuidle/governors/mobile.o: In function `mobile_select':
>> mobile.c:(.text.mobile_select+0xe0): undefined reference to `__udivdi3'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 60704 bytes --]


* Re: [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems
  2019-06-20 11:58 [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems Daniel Lezcano
  2019-06-22  3:52 ` kbuild test robot
  2019-06-22 11:11 ` kbuild test robot
@ 2019-06-22 11:45 ` kbuild test robot
  2019-07-03 14:23 ` Doug Smythies
  2019-07-04 10:14 ` Rafael J. Wysocki
  4 siblings, 0 replies; 10+ messages in thread
From: kbuild test robot @ 2019-06-22 11:45 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: kbuild-all, rafael, linux-kernel, Rafael J. Wysocki,
	Thomas Gleixner, Greg Kroah-Hartman,
	open list:CPU IDLE TIME MANAGEMENT FRAMEWORK

[-- Attachment #1: Type: text/plain, Size: 1124 bytes --]

Hi Daniel,

I love your patch! Yet something to improve:

[auto build test ERROR on pm/linux-next]
[also build test ERROR on v5.2-rc5 next-20190621]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Daniel-Lezcano/cpuidle-drivers-mobile-Add-new-governor-for-mobile-embedded-systems/20190622-064303
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
config: arm-allmodconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (GCC) 7.4.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.4.0 make.cross ARCH=arm 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 70678 bytes --]


* RE: [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems
  2019-06-20 11:58 [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems Daniel Lezcano
                   ` (2 preceding siblings ...)
  2019-06-22 11:45 ` kbuild test robot
@ 2019-07-03 14:23 ` Doug Smythies
  2019-07-03 15:16   ` Daniel Lezcano
  2019-07-04 10:14 ` Rafael J. Wysocki
  4 siblings, 1 reply; 10+ messages in thread
From: Doug Smythies @ 2019-07-03 14:23 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: linux-kernel, Rafael J. Wysocki, Thomas Gleixner,
	Greg Kroah-Hartman, open list:CPU IDLE TIME MANAGEMENT FRAMEWORK,
	rafael

Hi Daniel,

I tried your "mobile" governor, albeit not on a mobile device.

On 2019.06.20 04:58 Daniel Lezcano wrote:

...

> The mobile governor is a new governor targeting battery-powered
> embedded systems, where energy saving has a higher priority than on
> servers or desktops. This governor aims to save as much energy as
> possible within a performance degradation tolerance.
>
> In this way, we can optimize the governor for specific mobile workloads
> and, more generally, embedded systems, without impacting other platforms.

I just wanted to observe the lower energy, accepting performance
degradation. My workloads may have been inappropriate.

...

> +
> +#define EMA_ALPHA_VAL		64
> +#define EMA_ALPHA_SHIFT		7
> +#define MAX_RESCHED_INTERVAL_MS	100
> +
> +static DEFINE_PER_CPU(struct mobile_device, mobile_devices);
> +
> +static int mobile_ema_new(s64 value, s64 ema_old)
> +{
> +	if (likely(ema_old))
> +		return ema_old + (((value - ema_old) * EMA_ALPHA_VAL) >>
> +				  EMA_ALPHA_SHIFT);
> +	return value;
> +}

Do you have any information as to why these numbers were chosen?
Without any background, the filter seems overly weighted toward the
new value to me. It is an infinite impulse response type filter,
currently at:

output = 0.5 * old + 0.5 * new.

I tried, but didn't get anything conclusive:

output = 0.875 * old + 0.125 * new.

I did it this way:

#define EMA_ALPHA_VAL           7
#define EMA_ALPHA_SHIFT         3
#define MAX_RESCHED_INTERVAL_MS 100

static DEFINE_PER_CPU(struct mobile_device, mobile_devices);

static int mobile_ema_new(s64 value, s64 ema_old)
{
        if (likely(ema_old))
                return ((ema_old * EMA_ALPHA_VAL) + value) >>
                                  EMA_ALPHA_SHIFT;
        return value;
}

...

> +	/*
> +	 * Sum all the residencies in order to compute the total
> +	 * duration of the idle task.
> +	 */
> +	residency = dev->last_residency - s->exit_latency;

What about when the CPU comes out of the idle state before it
even gets fully into it? Under such conditions it seems to hold
much too hard at idle states that are too deep, to the point
where energy goes up while performance goes down.

Anyway, I did a bunch of tests and such, but have deleted
most from this e-mail because it's just noise. I'll
include just one set:

For a workload that would normally result in heavy use
of shallow idle states (single-core pipe-test * 2 cores),
I got (all kernel 5.2-rc5 + this patch):

Idle governor, teo; CPU frequency scaling: intel-cpufreq/ondemand;
Processor package power: 40.4 watts; 4.9 uSec/loop

Idle governor, teo; CPU frequency scaling: intel-cpufreq/ondemand;
Processor package power: 34 watts; 5.2 uSec/loop

Idle governor, mobile; CPU frequency scaling: intel-cpufreq/ondemand;
Processor package power: 25.9 watts; 11.1 uSec/loop

Idle governor, menu; CPU frequency scaling: intel-cpufreq/ondemand;
Processor package power: 34.2 watts; 5.23 uSec/loop

Idle governor, teo; CPU frequency scaling: intel-cpufreq/ondemand;
Maximum CPU frequency limited to 73% to match mobile energy.
Processor package power: 25.4 watts; 6.4 uSec/loop

... Doug




* Re: [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems
  2019-07-03 14:23 ` Doug Smythies
@ 2019-07-03 15:16   ` Daniel Lezcano
  2019-07-03 19:12     ` Doug Smythies
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Lezcano @ 2019-07-03 15:16 UTC (permalink / raw)
  To: Doug Smythies
  Cc: linux-kernel, Rafael J. Wysocki, Thomas Gleixner,
	Greg Kroah-Hartman, open list:CPU IDLE TIME MANAGEMENT FRAMEWORK,
	rafael


Hi Doug,

On 03/07/2019 16:23, Doug Smythies wrote:
> Hi Daniel,
> 
> I tried your "mobile" governor, albeit not on a mobile device.
> 
> On 2019.06.20 04:58 Daniel Lezcano wrote:
> 
> ...
> 
>> The mobile governor is a new governor targeting battery-powered
>> embedded systems, where energy saving has a higher priority than on
>> servers or desktops. This governor aims to save as much energy as
>> possible within a performance degradation tolerance.
>>
>> In this way, we can optimize the governor for specific mobile workloads
>> and, more generally, embedded systems, without impacting other platforms.
> 
> I just wanted to observe the lower energy, accepting performance
> degradation. My workloads may have been inappropriate.

Thanks for trying the governor. It is still basic but will be improved
step by step against clearly identified workloads and with more help
from the scheduler. This is the first phase of the governor, providing
the base bricks.

>> +
>> +#define EMA_ALPHA_VAL		64
>> +#define EMA_ALPHA_SHIFT		7
>> +#define MAX_RESCHED_INTERVAL_MS	100
>> +
>> +static DEFINE_PER_CPU(struct mobile_device, mobile_devices);
>> +
>> +static int mobile_ema_new(s64 value, s64 ema_old)
>> +{
>> +	if (likely(ema_old))
>> +		return ema_old + (((value - ema_old) * EMA_ALPHA_VAL) >>
>> +				  EMA_ALPHA_SHIFT);
>> +	return value;
>> +}
> 
> Do you have any information as to why these numbers were chosen?
>
> Without any background, the filter seems overly weighted toward the
> new value to me. It is an infinite impulse response type filter,
> currently at:
> 
> output = 0.5 * old + 0.5 * new.
> 
> I tried, but didn't get anything conclusive:
> 
> output = 0.875 * old + 0.125 * new.
> 
> I did it this way:
> 
> #define EMA_ALPHA_VAL           7
> #define EMA_ALPHA_SHIFT         3
> #define MAX_RESCHED_INTERVAL_MS 100

Ok, I will have a look at these values.

> static DEFINE_PER_CPU(struct mobile_device, mobile_devices);
> 
> static int mobile_ema_new(s64 value, s64 ema_old)
> {
>         if (likely(ema_old))
>                 return ((ema_old * EMA_ALPHA_VAL) + value) >>
>                                   EMA_ALPHA_SHIFT;
>         return value;
> }
> 
> ...
> 
>> +	/*
>> +	 * Sum all the residencies in order to compute the total
>> +	 * duration of the idle task.
>> +	 */
>> +	residency = dev->last_residency - s->exit_latency;
> 
> What about when the CPU comes out of the idle state before it
> even gets fully into it? Under such conditions it seems to hold
> much too hard at idle states that are too deep, to the point
> where energy goes up while performance goes down.

I'm not sure there is something we can do here :/


> Anyway, I did a bunch of tests and such, but have deleted
> most from this e-mail, because it's just noise. I'll
> include just one set:
> 
> For a work load that would normally result in a lot of use
> of shallow idle states (single core pipe-test * 2 cores).

Can you share the tests and the command lines?


> I got (all kernel 5.2-rc5 + this patch):
> 
> Idle governor, teo; CPU frequency scaling: intel-cpufreq/ondemand;
> Processor package power: 40.4 watts; 4.9 uSec/loop
> 
> Idle governor, teo; CPU frequency scaling: intel-cpufreq/ondemand;
> Processor package power: 34 watts; 5.2 uSec/loop
> 
> Idle governor, mobile; CPU frequency scaling: intel-cpufreq/ondemand;
> Processor package power: 25.9 watts; 11.1 uSec/loop
> 
> Idle governor, menu; CPU frequency scaling: intel-cpufreq/ondemand;
> Processor package power: 34.2 watts; 5.23 uSec/loop
> 
> Idle governor, teo; CPU frequency scaling: intel-cpufreq/ondemand;
> Maximum CPU frequency limited to 73% to match mobile energy.
> Processor package power: 25.4 watts; 6.4 uSec/loop

Ok that's interesting. Thanks for the values.

The governor can do better at selecting the shallow states; the
scheduler has to interact with the governor to give it clues about the
load. That is identified as the next step.

Is it possible to check with the schedutil governor instead?



-- 
 <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

Follow Linaro:  <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog



* RE: [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems
  2019-07-03 15:16   ` Daniel Lezcano
@ 2019-07-03 19:12     ` Doug Smythies
  2019-07-07 17:02       ` Doug Smythies
  0 siblings, 1 reply; 10+ messages in thread
From: Doug Smythies @ 2019-07-03 19:12 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: linux-kernel, Rafael J. Wysocki, Thomas Gleixner,
	Greg Kroah-Hartman, open list:CPU IDLE TIME MANAGEMENT FRAMEWORK,
	rafael

On 2019.07.03 08:16 Daniel Lezcano wrote:
> On 03/07/2019 16:23, Doug Smythies wrote:
>> On 2019.06.20 04:58 Daniel Lezcano wrote:

...
>> Anyway, I did a bunch of tests and such, but have deleted
>> most from this e-mail, because it's just noise. I'll
>> include just one set:
>> 
>> For a work load that would normally result in a lot of use
>> of shallow idle states (single core pipe-test * 2 cores).
>
> Can you share the tests and the command lines?

Yes, give me a few days to repeat the tests and write
it up properly. I am leaving town in an hour and for a day.

It'll be similar to this:
http://www.smythies.com/~doug/linux/idle/teo8/pipe/index.html
parent page (of which I will do a better version):
http://www.smythies.com/~doug/linux/idle/teo8/index.html
...

>> I got (all kernel 5.2-rc5 + this patch):
>> 
>> Idle governor, teo; CPU frequency scaling: intel-cpufreq/ondemand;
>> Processor package power: 40.4 watts; 4.9 uSec/loop
>> 
>> Idle governor, teo; CPU frequency scaling: intel-cpufreq/ondemand;
>> Processor package power: 34 watts; 5.2 uSec/loop
>> 
>> Idle governor, mobile; CPU frequency scaling: intel-cpufreq/ondemand;
>> Processor package power: 25.9 watts; 11.1 uSec/loop
>> 
>> Idle governor, menu; CPU frequency scaling: intel-cpufreq/ondemand;
>> Processor package power: 34.2 watts; 5.23 uSec/loop
>> 
>> Idle governor, teo; CPU frequency scaling: intel-cpufreq/ondemand;
>> Maximum CPU frequency limited to 73% to match mobile energy.
>> Processor package power: 25.4 watts; 6.4 uSec/loop
>
> Ok that's interesting. Thanks for the values.
>
> The governor can be better by selecting the shallow states, the
> scheduler has to interact with the governor to give clues about the
> load, that is identified and will be the next step.
>
> Is it possible to check with the schedutil governor instead?

Oh, I already have some data, just didn't include it before:

Idle governor, teo; CPU frequency scaling: intel-cpufreq/schedutil;
Processor package power: 40.4 watts; 4.9 uSec/loop

Idle governor, mobile; CPU frequency scaling: intel-cpufreq/schedutil;
Processor package power: 12.7 watts; 19.7 uSec/loop

Idle governor, teo; CPU frequency scaling: intel-cpufreq/schedutil;
Idle states 0-3 disabled (note: Idle state 4 is the deepest on my system)
Processor package power: 36.9 watts; 8.3 uSec/loop
In my notes I wrote: "Huh?? I do not understand this result, as I had
expected more similar to the mobile governor". But I did not investigate.

Anyway, the schedutil test is the one I'll repeat and write up better.

... Doug




* Re: [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems
  2019-06-20 11:58 [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems Daniel Lezcano
                   ` (3 preceding siblings ...)
  2019-07-03 14:23 ` Doug Smythies
@ 2019-07-04 10:14 ` Rafael J. Wysocki
  2019-07-08  9:57   ` Daniel Lezcano
  4 siblings, 1 reply; 10+ messages in thread
From: Rafael J. Wysocki @ 2019-07-04 10:14 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: rafael, linux-kernel, Thomas Gleixner, Greg Kroah-Hartman,
	open list:CPU IDLE TIME MANAGEMENT FRAMEWORK

On Thursday, June 20, 2019 1:58:08 PM CEST Daniel Lezcano wrote:
> The objective is the same for all the governors: save energy, but at
> the end the governors menu, ladder and teo aim to improve the
> performances with an acceptable energy drop for some workloads which
> are identified for servers and desktops (with the help of a firmware).
> 
> The ladder governor is designed for server with a periodic tick
> configuration.
> 
> The menu governor does not behave nicely with the mobile platform and
> the energy saving for the multimedia workloads is worst than picking
> up randomly an idle state.
> 
> The teo governor acts efficiently, it promotes shallower state for
> performances which is perfect for the servers / desktop but inadequate
> for mobile because the energy consumed is too high.
> 
> It is very difficult to do changes in these governors for embedded
> systems without impacting performances on servers/desktops or ruin the
> optimizations for the workloads on these platforms.
> 
> The mobile governor is a new governor targeting embedded systems
> running on battery where the energy saving has a higher priority than
> servers or desktops. This governor aims to save energy as much as
> possible but with a performance degradation tolerance.
> 
> In this way, we can optimize the governor for specific mobile workload
> and more generally embedded systems without impacting other platforms.
> 
> The mobile governor is built on top of the paradigm 'separate the wake
> up sources signals and analyze them'. Three categories of wake up
> signals are identified:
>  - deterministic : timers
>  - predictable : most of the devices interrupt
>  - unpredictable : IPI rescheduling, random signals
> 
> The latter needs an iterative approach and the help of the scheduler
> to give more input to the governor.
> 
> The governor uses the irq timings where we predict the next interrupt
> occurrences on the current CPU and the next timer. It is well suited
> for mobile and more generally embedded systems where the interrupts
> are usually pinned on one CPU and where the power is more important
> than the performances.
> 
> The multimedia applications on the embedded system spawn multiple
> threads which are migrated across the different CPUs and waking
> between them up. In order to catch this situation we have also to
> track the idle task rescheduling duration with a relative degree of
> confidence as the scheduler is involved in the task migrations. The
> resched information is in the scope of the governor via the reflect
> callback.
> 
> The governor begins with a clean foundation basing the prediction on
> the irq behavior returned by the irq timings, the timers and the idle
> task rescheduling. The advantage of the approach is we have a full
> view of the wakeup sources as we identify them separately and then we
> can control the situation without relying on biased heuristics.
> 
> This first iteration provides a basic prediction but improves on some
> mobile platforms better energy for better performance for multimedia
> workloads.
> 
> The scheduling aspect will be optimized iteratively with non
> regression testing for previous identified workloads on an Android
> reference platform.
> 
> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>

Note that there are build issues reported by 0-day that need to be fixed.

Also, IMO this really should be documented better in the tree, not just in the changelog.
At least the use case to be covered by this governor should be clearly documented and
it would be good to describe the algorithm.

> ---
>  drivers/cpuidle/Kconfig            |  11 ++-
>  drivers/cpuidle/governors/Makefile |   1 +
>  drivers/cpuidle/governors/mobile.c | 151 +++++++++++++++++++++++++++++
>  3 files changed, 162 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/cpuidle/governors/mobile.c
> 
> diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
> index a4ac31e4a58c..e2376d85e288 100644
> --- a/drivers/cpuidle/Kconfig
> +++ b/drivers/cpuidle/Kconfig
> @@ -5,7 +5,7 @@ config CPU_IDLE
>  	bool "CPU idle PM support"
>  	default y if ACPI || PPC_PSERIES
>  	select CPU_IDLE_GOV_LADDER if (!NO_HZ && !NO_HZ_IDLE)
> -	select CPU_IDLE_GOV_MENU if (NO_HZ || NO_HZ_IDLE) && !CPU_IDLE_GOV_TEO
> +	select CPU_IDLE_GOV_MENU if (NO_HZ || NO_HZ_IDLE) && !CPU_IDLE_GOV_TEO && !CPU_IDLE_GOV_MOBILE
>  	help
>  	  CPU idle is a generic framework for supporting software-controlled
>  	  idle processor power management.  It includes modular cross-platform
> @@ -33,6 +33,15 @@ config CPU_IDLE_GOV_TEO
>  	  Some workloads benefit from using it and it generally should be safe
>  	  to use.  Say Y here if you are not happy with the alternatives.
>  
> +config CPU_IDLE_GOV_MOBILE
> +	bool "Mobile governor"
> +	select IRQ_TIMINGS
> +	help
> +	  The mobile governor is based on irq timings measurements and
> +	  pattern research combined with the next timer. This governor
> +	  suits very well on embedded systems where the interrupts are
> +	  grouped on a single core and the power is the priority.
> +
>  config DT_IDLE_STATES
>  	bool
>  
> diff --git a/drivers/cpuidle/governors/Makefile b/drivers/cpuidle/governors/Makefile
> index 42f44cc610dd..f09da7178670 100644
> --- a/drivers/cpuidle/governors/Makefile
> +++ b/drivers/cpuidle/governors/Makefile
> @@ -6,3 +6,4 @@
>  obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
>  obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
>  obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o
> +obj-$(CONFIG_CPU_IDLE_GOV_MOBILE) += mobile.o
> diff --git a/drivers/cpuidle/governors/mobile.c b/drivers/cpuidle/governors/mobile.c
> new file mode 100644
> index 000000000000..8fda0f9b960b
> --- /dev/null
> +++ b/drivers/cpuidle/governors/mobile.c
> @@ -0,0 +1,151 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019, Linaro Ltd
> + * Author: Daniel Lezcano <daniel.lezcano@linaro.org>
> + */
> +#include <linux/cpuidle.h>
> +#include <linux/kernel.h>
> +#include <linux/sched.h>
> +#include <linux/slab.h>
> +#include <linux/tick.h>
> +#include <linux/interrupt.h>
> +#include <linux/sched/clock.h>
> +
> +struct mobile_device {
> +	u64 idle_ema_avg;
> +	u64 idle_total;
> +	unsigned long last_jiffies;
> +};
> +
> +#define EMA_ALPHA_VAL		64
> +#define EMA_ALPHA_SHIFT		7
> +#define MAX_RESCHED_INTERVAL_MS	100
> +
> +static DEFINE_PER_CPU(struct mobile_device, mobile_devices);
> +
> +static int mobile_ema_new(s64 value, s64 ema_old)
> +{
> +	if (likely(ema_old))
> +		return ema_old + (((value - ema_old) * EMA_ALPHA_VAL) >>
> +				  EMA_ALPHA_SHIFT);
> +	return value;
> +}
> +
> +static void mobile_reflect(struct cpuidle_device *dev, int index)
> +{
> +        struct mobile_device *mobile_dev = this_cpu_ptr(&mobile_devices);
> +	struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
> +	struct cpuidle_state *s = &drv->states[index];
> +	int residency;
> +
> +	/*
> +	 * The idle task was not rescheduled since
> +	 * MAX_RESCHED_INTERVAL_MS, let's consider the duration is
> +	 * long enough to clear our stats.
> +	 */
> +	if (time_after(jiffies, mobile_dev->last_jiffies +
> +		       msecs_to_jiffies(MAX_RESCHED_INTERVAL_MS)))
> +		mobile_dev->idle_ema_avg = 0;

Why jiffies?  Any particular reason?

> +
> +	/*
> +	 * Sum all the residencies in order to compute the total
> +	 * duration of the idle task.
> +	 */
> +	residency = dev->last_residency - s->exit_latency;
> +	if (residency > 0)
> +		mobile_dev->idle_total += residency;
> +
> +	/*
> +	 * We exited the idle state with the need_resched() flag, the
> +	 * idle task will be rescheduled, so store the duration the
> +	 * idle task was scheduled in an exponential moving average and
> +	 * reset the total of the idle duration.
> +	 */
> +	if (need_resched()) {
> +		mobile_dev->idle_ema_avg = mobile_ema_new(mobile_dev->idle_total,
> +						      mobile_dev->idle_ema_avg);
> +		mobile_dev->idle_total = 0;
> +		mobile_dev->last_jiffies = jiffies;
> +	}
> +}
> +
> +static int mobile_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
> +		       bool *stop_tick)
> +{
> +	struct mobile_device *mobile_dev = this_cpu_ptr(&mobile_devices);
> +	int latency_req = cpuidle_governor_latency_req(dev->cpu);
> +	int i, index = 0;
> +	ktime_t delta_next;
> +	u64 now, irq_length, timer_length;
> +	u64 idle_duration_us;
> +
> +	/*
> +	 * Get the present time as reference for the next steps
> +	 */
> +	now = local_clock();
> +
> +	/*
> +	 * Get the next interrupt event giving the 'now' as a
> +	 * reference, if the next event appears to have already
> +	 * expired then we get the 'now' returned which ends up with a
> +	 * zero duration.
> +	 */
> +	irq_length = irq_timings_next_event(now) - now;
> +
> +	/*
> +	 * Get the timer duration before expiration.
> +	 */

This comment is rather redundant and the one below too. :-)

> +	timer_length = ktime_to_ns(tick_nohz_get_sleep_length(&delta_next));
> +
> +	/*
> +	 * Get the smallest duration between the timer and the irq next event.
> +	 */
> +	idle_duration_us = min_t(u64, irq_length, timer_length) / NSEC_PER_USEC;
> +
> +	/*
> +	 * Get the idle task duration average if the information is
> +	 * available.

IMO it would be good to explain this step in more detail, especially the purpose of it.

> +	 */
> +	if (mobile_dev->idle_ema_avg)
> +		idle_duration_us = min_t(u64, idle_duration_us,
> +					 mobile_dev->idle_ema_avg);
> +
> +	for (i = 0; i < drv->state_count; i++) {
> +		struct cpuidle_state *s = &drv->states[i];
> +		struct cpuidle_state_usage *su = &dev->states_usage[i];
> +
> +		if (s->disabled || su->disable)
> +			continue;
> +
> +		if (s->exit_latency > latency_req)
> +			break;
> +
> +		if (idle_duration_us > s->exit_latency)
> +			idle_duration_us = idle_duration_us - s->exit_latency;

Why do you want this?

It only causes you to miss an opportunity to select a deeper state sometimes,
so what's the reason?

Moreover, I don't think you should update idle_duration_us here, as the updated
value will go to the next step if the check below doesn't trigger.

> +
> +		if (s->target_residency > idle_duration_us)
> +			break;
> +
> +		index = i;
> +	}
> +
> +	if (!index)
> +		*stop_tick = false;

Well, this means that the tick is stopped for all idle states deeper than state 0.

If there are any states between state 0 and the deepest one and they are below
the tick boundary, you may very well suffer the "powernightmares" problem
because of this.

> +
> +	return index;
> +}
> +
> +static struct cpuidle_governor mobile_governor = {
> +	.name =		"mobile",
> +	.rating =	20,
> +	.select =	mobile_select,
> +	.reflect =	mobile_reflect,
> +};
> +
> +static int __init init_governor(void)
> +{
> +	irq_timings_enable();
> +	return cpuidle_register_governor(&mobile_governor);
> +}
> +
> +postcore_initcall(init_governor);
> 






* RE: [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems
  2019-07-03 19:12     ` Doug Smythies
@ 2019-07-07 17:02       ` Doug Smythies
  0 siblings, 0 replies; 10+ messages in thread
From: Doug Smythies @ 2019-07-07 17:02 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: linux-kernel, Rafael J. Wysocki, Thomas Gleixner,
	Greg Kroah-Hartman, open list:CPU IDLE TIME MANAGEMENT FRAMEWORK,
	rafael

On 2019.07.03 12:12 Doug Smythies wrote:
> On 2019.07.03 08:16 Daniel Lezcano wrote:
>> On 03/07/2019 16:23, Doug Smythies wrote:
>>> On 2019.06.20 04:58 Daniel Lezcano wrote:

> ...
>>> Anyway, I did a bunch of tests and such, but have deleted
>>> most from this e-mail, because it's just noise. I'll
>>> include just one set:
>>> 
>>> For a work load that would normally result in a lot of use
>>> of shallow idle states (single core pipe-test * 2 cores).
>>
>> Can you share the tests and the command lines?
>
> Yes, give me a few days to repeat the tests and write
> it up properly. I am leaving town in an hour and for a day.

O.K. I re-did the tests and made a new web page with, I think,
everything I used.

...

>> The governor can be better by selecting the shallow states, the
>> scheduler has to interact with the governor to give clues about the
>> load, that is identified and will be the next step.
>>
>> Is it possible to check with the schedutil governor instead?
>
> Oh, I already have some data, just didn't include it before:
>
> Idle governor, teo; CPU frequency scaling: intel-cpufreq/schedutil;
> Processor package power: 40.4 watts; 4.9 uSec/loop
>
> Idle governor, mobile; CPU frequency scaling: intel-cpufreq/schedutil;
> Processor package power: 12.7 watts; 19.7 uSec/loop
>
> Idle governor, teo; CPU frequency scaling: intel-cpufreq/schedutil;
> Idle states 0-3 disabled (note: Idle state 4 is the deepest on my system)
> Processor package power: 36.9 watts; 8.3 uSec/loop
> In my notes I wrote: "Huh?? I do not understand this result, as I had
> expected more similar to the mobile governor". But I did not investigate.

The reason for the big difference was/is that with the "mobile"
governor the CPU frequency never scales up for this test. It can be
sluggish to scale up with the teo and the menu governors, but always
eventually does. I also tried the acpi-cpufreq driver with similar results.

> Anyway, the schedutil test is the one I'll repeat and write up better.

New summary (similar to old):

governor          usec/loop  watts
mobile            19.8       12.67
teo               4.87       40.28
menu              4.85       40.25
teo-idle-0-3-dis  8.30       36.85

Graphs, details and source codes:
http://www.smythies.com/~doug/linux/idle/mobile/index.html

... Doug




* Re: [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems
  2019-07-04 10:14 ` Rafael J. Wysocki
@ 2019-07-08  9:57   ` Daniel Lezcano
  0 siblings, 0 replies; 10+ messages in thread
From: Daniel Lezcano @ 2019-07-08  9:57 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: rafael, linux-kernel, Thomas Gleixner, Greg Kroah-Hartman,
	open list:CPU IDLE TIME MANAGEMENT FRAMEWORK


Hi Rafael,

On 04/07/2019 12:14, Rafael J. Wysocki wrote:
> On Thursday, June 20, 2019 1:58:08 PM CEST Daniel Lezcano wrote:
>> The objective is the same for all the governors: save energy, but at
>> the end the governors menu, ladder and teo aim to improve the
>> performances with an acceptable energy drop for some workloads which
>> are identified for servers and desktops (with the help of a firmware).
>>
>> The ladder governor is designed for server with a periodic tick
>> configuration.
>>
>> The menu governor does not behave nicely with the mobile platform and
>> the energy saving for the multimedia workloads is worst than picking
>> up randomly an idle state.
>>
>> The teo governor acts efficiently, it promotes shallower state for
>> performances which is perfect for the servers / desktop but inadequate
>> for mobile because the energy consumed is too high.
>>
>> It is very difficult to do changes in these governors for embedded
>> systems without impacting performances on servers/desktops or ruin the
>> optimizations for the workloads on these platforms.
>>
>> The mobile governor is a new governor targeting embedded systems
>> running on battery where the energy saving has a higher priority than
>> servers or desktops. This governor aims to save energy as much as
>> possible but with a performance degradation tolerance.
>>
>> In this way, we can optimize the governor for specific mobile workload
>> and more generally embedded systems without impacting other platforms.
>>
>> The mobile governor is built on top of the paradigm 'separate the wake
>> up sources signals and analyze them'. Three categories of wake up
>> signals are identified:
>>  - deterministic : timers
>>  - predictable : most of the devices interrupt
>>  - unpredictable : IPI rescheduling, random signals
>>
>> The latter needs an iterative approach and the help of the scheduler
>> to give more input to the governor.
>>
>> The governor uses the irq timings where we predict the next interrupt
>> occurrences on the current CPU and the next timer. It is well suited
>> for mobile and more generally embedded systems where the interrupts
>> are usually pinned on one CPU and where the power is more important
>> than the performances.
>>
>> The multimedia applications on the embedded system spawn multiple
>> threads which are migrated across the different CPUs and waking
>> between them up. In order to catch this situation we have also to
>> track the idle task rescheduling duration with a relative degree of
>> confidence as the scheduler is involved in the task migrations. The
>> resched information is in the scope of the governor via the reflect
>> callback.
>>
>> The governor begins with a clean foundation basing the prediction on
>> the irq behavior returned by the irq timings, the timers and the idle
>> task rescheduling. The advantage of the approach is we have a full
>> view of the wakeup sources as we identify them separately and then we
>> can control the situation without relying on biased heuristics.
>>
>> This first iteration provides a basic prediction but improves on some
>> mobile platforms better energy for better performance for multimedia
>> workloads.
>>
>> The scheduling aspect will be optimized iteratively with non
>> regression testing for previous identified workloads on an Android
>> reference platform.
>>
>> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
> 
> Note that there are build issues reported by 0-day that need to be fixed.
> Also, IMO this really should be documented better in the tree, not just in the changelog.
> At least the use case to be covered by this governor should be clearly documented and
> it would be good to describe the algorithm.

Ok, I will add some documentation.

>> ---
>>  drivers/cpuidle/Kconfig            |  11 ++-
>>  drivers/cpuidle/governors/Makefile |   1 +
>>  drivers/cpuidle/governors/mobile.c | 151 +++++++++++++++++++++++++++++
>>  3 files changed, 162 insertions(+), 1 deletion(-)
>>  create mode 100644 drivers/cpuidle/governors/mobile.c
>>
>> diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
>> index a4ac31e4a58c..e2376d85e288 100644
>> --- a/drivers/cpuidle/Kconfig
>> +++ b/drivers/cpuidle/Kconfig
>> @@ -5,7 +5,7 @@ config CPU_IDLE
>>  	bool "CPU idle PM support"
>>  	default y if ACPI || PPC_PSERIES
>>  	select CPU_IDLE_GOV_LADDER if (!NO_HZ && !NO_HZ_IDLE)
>> -	select CPU_IDLE_GOV_MENU if (NO_HZ || NO_HZ_IDLE) && !CPU_IDLE_GOV_TEO
>> +	select CPU_IDLE_GOV_MENU if (NO_HZ || NO_HZ_IDLE) && !CPU_IDLE_GOV_TEO && !CPU_IDLE_GOV_MOBILE
>>  	help
>>  	  CPU idle is a generic framework for supporting software-controlled
>>  	  idle processor power management.  It includes modular cross-platform
>> @@ -33,6 +33,15 @@ config CPU_IDLE_GOV_TEO
>>  	  Some workloads benefit from using it and it generally should be safe
>>  	  to use.  Say Y here if you are not happy with the alternatives.
>>  
>> +config CPU_IDLE_GOV_MOBILE
>> +	bool "Mobile governor"
>> +	select IRQ_TIMINGS
>> +	help
>> +	  The mobile governor is based on irq timings measurements and
>> +	  pattern research combined with the next timer. This governor
>> +	  suits very well on embedded systems where the interrupts are
>> +	  grouped on a single core and the power is the priority.
>> +
>>  config DT_IDLE_STATES
>>  	bool
>>  
>> diff --git a/drivers/cpuidle/governors/Makefile b/drivers/cpuidle/governors/Makefile
>> index 42f44cc610dd..f09da7178670 100644
>> --- a/drivers/cpuidle/governors/Makefile
>> +++ b/drivers/cpuidle/governors/Makefile
>> @@ -6,3 +6,4 @@
>>  obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
>>  obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
>>  obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o
>> +obj-$(CONFIG_CPU_IDLE_GOV_MOBILE) += mobile.o
>> diff --git a/drivers/cpuidle/governors/mobile.c b/drivers/cpuidle/governors/mobile.c
>> new file mode 100644
>> index 000000000000..8fda0f9b960b
>> --- /dev/null
>> +++ b/drivers/cpuidle/governors/mobile.c
>> @@ -0,0 +1,151 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (C) 2019, Linaro Ltd
>> + * Author: Daniel Lezcano <daniel.lezcano@linaro.org>
>> + */
>> +#include <linux/cpuidle.h>
>> +#include <linux/kernel.h>
>> +#include <linux/sched.h>
>> +#include <linux/slab.h>
>> +#include <linux/tick.h>
>> +#include <linux/interrupt.h>
>> +#include <linux/sched/clock.h>
>> +
>> +struct mobile_device {
>> +	u64 idle_ema_avg;
>> +	u64 idle_total;
>> +	unsigned long last_jiffies;
>> +};
>> +
>> +#define EMA_ALPHA_VAL		64
>> +#define EMA_ALPHA_SHIFT		7
>> +#define MAX_RESCHED_INTERVAL_MS	100
>> +
>> +static DEFINE_PER_CPU(struct mobile_device, mobile_devices);
>> +
>> +static int mobile_ema_new(s64 value, s64 ema_old)
>> +{
>> +	if (likely(ema_old))
>> +		return ema_old + (((value - ema_old) * EMA_ALPHA_VAL) >>
>> +				  EMA_ALPHA_SHIFT);
>> +	return value;
>> +}
>> +
>> +static void mobile_reflect(struct cpuidle_device *dev, int index)
>> +{
>> +        struct mobile_device *mobile_dev = this_cpu_ptr(&mobile_devices);
>> +	struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
>> +	struct cpuidle_state *s = &drv->states[index];
>> +	int residency;
>> +
>> +	/*
>> +	 * The idle task was not rescheduled since
>> +	 * MAX_RESCHED_INTERVAL_MS, let's consider the duration is
>> +	 * long enough to clear our stats.
>> +	 */
>> +	if (time_after(jiffies, mobile_dev->last_jiffies +
>> +		       msecs_to_jiffies(MAX_RESCHED_INTERVAL_MS)))
>> +		mobile_dev->idle_ema_avg = 0;
> 
> Why jiffies?  Any particular reason?

I used jiffies to avoid the overhead of a local_clock() call. I agree
the resolution could be too low. Perhaps it makes more sense to move the
idle start and idle end variables from the cpuidle_enter() function to
the cpuidle device structure, so the information can be reused by
subsequent users.

>> +
>> +	/*
>> +	 * Sum all the residencies in order to compute the total
>> +	 * duration of the idle task.
>> +	 */
>> +	residency = dev->last_residency - s->exit_latency;
>> +	if (residency > 0)
>> +		mobile_dev->idle_total += residency;
>> +
>> +	/*
>> +	 * We exited the idle state with the need_resched() flag, the
>> +	 * idle task will be rescheduled, so store the duration the
>> +	 * idle task was scheduled in an exponential moving average and
>> +	 * reset the total of the idle duration.
>> +	 */
>> +	if (need_resched()) {
>> +		mobile_dev->idle_ema_avg = mobile_ema_new(mobile_dev->idle_total,
>> +						      mobile_dev->idle_ema_avg);
>> +		mobile_dev->idle_total = 0;
>> +		mobile_dev->last_jiffies = jiffies;
>> +	}
>> +}
>> +
>> +static int mobile_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
>> +		       bool *stop_tick)
>> +{
>> +	struct mobile_device *mobile_dev = this_cpu_ptr(&mobile_devices);
>> +	int latency_req = cpuidle_governor_latency_req(dev->cpu);
>> +	int i, index = 0;
>> +	ktime_t delta_next;
>> +	u64 now, irq_length, timer_length;
>> +	u64 idle_duration_us;
>> +
>> +	/*
>> +	 * Get the present time as reference for the next steps
>> +	 */
>> +	now = local_clock();
>> +
>> +	/*
>> +	 * Get the next interrupt event giving the 'now' as a
>> +	 * reference, if the next event appears to have already
>> +	 * expired then we get the 'now' returned which ends up with a
>> +	 * zero duration.
>> +	 */
>> +	irq_length = irq_timings_next_event(now) - now;
>> +
>> +	/*
>> +	 * Get the timer duration before expiration.
>> +	 */
> 
> This comment is rather redundant and the one below too. :-)

Right.

>> +	timer_length = ktime_to_ns(tick_nohz_get_sleep_length(&delta_next));
>> +
>> +	/*
>> +	 * Get the smallest duration between the timer and the irq next event.
>> +	 */
>> +	idle_duration_us = min_t(u64, irq_length, timer_length) / NSEC_PER_USEC;
>> +
>> +	/*
>> +	 * Get the idle task duration average if the information is
>> +	 * available.
> 
> IMO it would be good to explain this step in more detail, especially the purpose of it.

Ok.

>> +	 */
>> +	if (mobile_dev->idle_ema_avg)
>> +		idle_duration_us = min_t(u64, idle_duration_us,
>> +					 mobile_dev->idle_ema_avg);
>> +
>> +	for (i = 0; i < drv->state_count; i++) {
>> +		struct cpuidle_state *s = &drv->states[i];
>> +		struct cpuidle_state_usage *su = &dev->states_usage[i];
>> +
>> +		if (s->disabled || su->disable)
>> +			continue;
>> +
>> +		if (s->exit_latency > latency_req)
>> +			break;
>> +
>> +		if (idle_duration_us > s->exit_latency)
>> +			idle_duration_us = idle_duration_us - s->exit_latency;
> 
> Why do you want this?
> 
> It only causes you to miss an opportunity to select a deeper state sometimes,
> so what's the reason?

On mobile platforms the exit latencies are very high (on the order of
several milliseconds) for a very limited number of idle states. The real
idle duration must be determined before comparing it with the target
residency. Without this test, the governor constantly and wrongly
chooses a deep idle state.

> Moreover, I don't think you should update idle_duration_us here, as the updated
> value will go to the next step if the check below doesn't trigger.

Right, I spotted it also and fixed it with:

+               if (s->exit_latency >= idle_duration_us)
+                       break;

+               if (s->target_residency > (idle_duration_us - s->exit_latency))
                        break;

>> +
>> +		if (s->target_residency > idle_duration_us)
>> +			break;
>> +
>> +		index = i;
>> +	}
>> +
>> +	if (!index)
>> +		*stop_tick = false;
> 
> Well, this means that the tick is stopped for all idle states deeper than state 0.
> 
> If there are any states between state 0 and the deepest one and they are below
> the tick boundary, you may very well suffer the "powernightmares" problem
> because of this.

What would you suggest?

if (!index || ((idle_duration_us < TICK_USEC) &&
		!tick_nohz_tick_stopped()))
	*stop_tick = false;

?

There are too few idle states to restart a selection here, so preventing
the tick from being stopped is enough at this point, IMO.


>> +	return index;
>> +}
>> +
>> +static struct cpuidle_governor mobile_governor = {
>> +	.name =		"mobile",
>> +	.rating =	20,
>> +	.select =	mobile_select,
>> +	.reflect =	mobile_reflect,
>> +};
>> +
>> +static int __init init_governor(void)
>> +{
>> +	irq_timings_enable();
>> +	return cpuidle_register_governor(&mobile_governor);
>> +}
>> +
>> +postcore_initcall(init_governor);
>>
> 
> 
> 
> 





end of thread, back to index

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-20 11:58 [PATCH] cpuidle/drivers/mobile: Add new governor for mobile/embedded systems Daniel Lezcano
2019-06-22  3:52 ` kbuild test robot
2019-06-22 11:11 ` kbuild test robot
2019-06-22 11:45 ` kbuild test robot
2019-07-03 14:23 ` Doug Smythies
2019-07-03 15:16   ` Daniel Lezcano
2019-07-03 19:12     ` Doug Smythies
2019-07-07 17:02       ` Doug Smythies
2019-07-04 10:14 ` Rafael J. Wysocki
2019-07-08  9:57   ` Daniel Lezcano
