* [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
@ 2018-11-04 16:31 Rafael J. Wysocki
  2018-11-05 19:32 ` Giovanni Gherdovich
  2018-11-06 17:04 ` Peter Zijlstra
  0 siblings, 2 replies; 22+ messages in thread
From: Rafael J. Wysocki @ 2018-11-04 16:31 UTC (permalink / raw)
  To: Linux PM, Giovanni Gherdovich, Doug Smythies
  Cc: Srinivas Pandruvada, Peter Zijlstra, LKML, Frederic Weisbecker,
	Mel Gorman, Daniel Lezcano

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

The venerable menu governor does some things that are quite
questionable in my view.  First, it includes timer wakeups in
the pattern detection data and mixes them up with wakeups from
other sources which in some cases causes it to expect what
essentially would be a timer wakeup in a time frame in which
no timer wakeups are possible (because it knows the time until
the next timer event and that is later than the expected wakeup
time).  Second, it uses the extra exit latency limit based on
the predicted idle duration and depending on the number of tasks
waiting on I/O, even though those tasks may run on a different
CPU when they are woken up.  Moreover, the time ranges used by it
for the sleep length correction factors depend on whether or not
there are tasks waiting on I/O, which again doesn't imply anything
in particular, and they are not correlated to the list of available
idle states in any way whatever.  Also, the pattern detection code
in menu may end up considering values that are too large to matter
at all, in which case running it is a waste of time.

A major rework of the menu governor would be required to address
these issues and the performance of at least some workloads (tuned
specifically to the current behavior of the menu governor) is likely
to suffer from that.  It is thus better to introduce an entirely new
governor without them and let everybody use the governor that works
better with their actual workloads.

The new governor introduced here, the timer events oriented (TEO)
governor, uses the same basic strategy as menu: it always tries to
find the deepest idle state that can be used in the given conditions.
However, it applies a different approach to that problem.  First, it
doesn't use "correction factors" for the time till the closest timer,
but instead it tries to correlate the measured idle duration values
with the available idle states and use that information to pick
the idle state that is most likely to "match" the upcoming CPU idle
interval.  Second, it doesn't take the number of "I/O waiters" into
account at all and the pattern detection code in it tries to avoid
taking timer wakeups into account.  It also only uses idle duration
values less than the current time till the closest timer (with the
tick excluded) for that purpose.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---

v2 -> v3:
 * Simplify the pattern detection code and make it return a value
	lower than the time to the closest timer if the majority of recent
	idle intervals are below it regardless of their variance (that should
	cause it to be slightly more aggressive).
 * Do not count wakeups from state 0 due to the time limit in poll_idle()
   as non-timer.

Note: I will be mostly offline tomorrow, so this goes slightly early.
I have tested it only very lightly, but it is not so much different from
the previous one.

It requires the same additional patches to apply as the previous one too.

---
 drivers/cpuidle/Kconfig            |   11 
 drivers/cpuidle/governors/Makefile |    1 
 drivers/cpuidle/governors/teo.c    |  491 +++++++++++++++++++++++++++++++++++++
 3 files changed, 503 insertions(+)

Index: linux-pm/drivers/cpuidle/governors/teo.c
===================================================================
--- /dev/null
+++ linux-pm/drivers/cpuidle/governors/teo.c
@@ -0,0 +1,491 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Timer events oriented CPU idle governor
+ *
+ * Copyright (C) 2018 Intel Corporation
+ * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ *
+ * The idea of this governor is based on the observation that on many systems
+ * timer events are two or more orders of magnitude more frequent than any
+ * other interrupts, so they are likely to be the most significant source of CPU
+ * wakeups from idle states.  Moreover, information about what happened in the
+ * (relatively recent) past can be used to estimate whether or not the deepest
+ * idle state with target residency within the time to the closest timer is
+ * likely to be suitable for the upcoming idle time of the CPU and, if not, then
+ * which of the shallower idle states to choose.
+ *
+ * Of course, non-timer wakeup sources are more important in some use cases and
+ * they can be covered by detecting patterns among recent idle time intervals
+ * of the CPU.  However, even in that case it is not necessary to take idle
+ * duration values greater than the time till the closest timer into account, as
+ * the patterns that they may belong to produce average values close enough to
+ * the time till the closest timer (sleep length) anyway.
+ *
+ * Thus this governor estimates whether or not the upcoming idle time of the CPU
+ * is likely to be significantly shorter than the sleep length and selects an
+ * idle state for it in accordance with that, as follows:
+ *
+ * - If there is a pattern of 5 or more recent non-timer wakeups earlier than
+ *   the closest timer event, expect one more of them to occur and use the
+ *   average of the idle duration values corresponding to them to select an
+ *   idle state for the CPU.
+ *
+ * - Otherwise, find the state on the basis of the sleep length and state
+ *   statistics collected over time:
+ *
+ *   o Find the deepest idle state whose target residency is less than or equal
+ *     to the sleep length.
+ *
+ *   o Select it if it matched both the sleep length and the idle duration
+ *     measured after wakeup in the past more often than it matched the sleep
+ *     length, but not the idle duration (i.e. the measured idle duration was
+ *     significantly shorter than the sleep length matched by that state).
+ *
+ *   o Otherwise, select the shallower state with the greatest matched "early"
+ *     wakeups metric.
+ */
+
+#include <linux/cpuidle.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/sched/clock.h>
+#include <linux/tick.h>
+
+/*
+ * The SPIKE value is added to metrics when they grow and the DECAY_SHIFT value
+ * is used for decreasing metrics on a regular basis.
+ */
+#define SPIKE		1024
+#define DECAY_SHIFT	3
+
+/*
+ * Number of the most recent idle duration values to take into consideration for
+ * the detection of wakeup patterns.
+ */
+#define INTERVALS	8
+
+/*
+ * Ratio of the sample spread limit and the length of the interesting intervals
+ * range used for pattern detection, represented as a shift.
+ */
+#define MAX_SPREAD_SHIFT	3
+
+/**
+ * struct teo_idle_state - Idle state data used by the TEO cpuidle governor.
+ * @early_hits: "Early" CPU wakeups matched by this state.
+ * @hits: "On time" CPU wakeups matched by this state.
+ * @misses: CPU wakeups "missed" by this state.
+ *
+ * A CPU wakeup is "matched" by a given idle state if the idle duration measured
+ * after the wakeup is between the target residency of that state and the target
+ * residency of the next one (or if this is the deepest available idle state, it
+ * "matches" a CPU wakeup when the measured idle duration is at least equal to
+ * its target residency).
+ *
+ * Also, from the TEO governor perspective, a CPU wakeup from idle is "early" if
+ * it occurs significantly earlier than the closest expected timer event (that
+ * is, early enough to match an idle state shallower than the one matching the
+ * time till the closest timer event).  Otherwise, the wakeup is "on time", or
+ * it is a "hit".
+ *
+ * A "miss" occurs when the given state doesn't match the wakeup, but it matches
+ * the time till the closest timer event used for idle state selection.
+ */
+struct teo_idle_state {
+	unsigned int early_hits;
+	unsigned int hits;
+	unsigned int misses;
+};
+
+/**
+ * struct teo_cpu - CPU data used by the TEO cpuidle governor.
+ * @time_span_ns: Time between idle state selection and post-wakeup update.
+ * @sleep_length_ns: Time till the closest timer event (at the selection time).
+ * @states: Idle states data corresponding to this CPU.
+ * @last_state: Idle state entered by the CPU last time.
+ * @interval_idx: Index of the most recent saved idle interval.
+ * @intervals: Saved idle duration values.
+ */
+struct teo_cpu {
+	u64 time_span_ns;
+	u64 sleep_length_ns;
+	struct teo_idle_state states[CPUIDLE_STATE_MAX];
+	int last_state;
+	int interval_idx;
+	unsigned int intervals[INTERVALS];
+};
+
+static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);
+
+/**
+ * teo_update - Update CPU data after wakeup.
+ * @drv: cpuidle driver containing state data.
+ * @dev: Target CPU.
+ */
+static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	unsigned int sleep_length_us = ktime_to_us(cpu_data->sleep_length_ns);
+	int i, idx_hit = -1, idx_timer = -1;
+	unsigned int measured_us;
+
+	if (cpu_data->time_span_ns == cpu_data->sleep_length_ns) {
+		/* One of the safety nets has triggered (most likely). */
+		measured_us = sleep_length_us;
+	} else {
+		measured_us = dev->last_residency;
+		i = cpu_data->last_state;
+		if (measured_us >= 2 * drv->states[i].exit_latency)
+			measured_us -= drv->states[i].exit_latency;
+		else
+			measured_us /= 2;
+	}
+
+	/*
+	 * Decay the "early hits" metric for all of the states and find the
+	 * states matching the sleep length and the measured idle duration.
+	 */
+	for (i = 0; i < drv->state_count; i++) {
+		unsigned int early_hits = cpu_data->states[i].early_hits;
+
+		cpu_data->states[i].early_hits -= early_hits >> DECAY_SHIFT;
+
+		if (drv->states[i].target_residency <= measured_us)
+			idx_hit = i;
+
+		if (drv->states[i].target_residency <= sleep_length_us)
+			idx_timer = i;
+	}
+
+	/*
+	 * Update the "hits" and "misses" data for the state matching the sleep
+	 * length.  If it matches the measured idle duration too, this is a hit,
+	 * so increase the "hits" metric for it then.  Otherwise, this is a
+	 * miss, so increase the "misses" metric for it.  In the latter case
+	 * also increase the "early hits" metric for the state that actually
+	 * matches the measured idle duration.
+	 */
+	if (idx_timer >= 0) {
+		unsigned int hits = cpu_data->states[idx_timer].hits;
+		unsigned int misses = cpu_data->states[idx_timer].misses;
+
+		hits -= hits >> DECAY_SHIFT;
+		misses -= misses >> DECAY_SHIFT;
+
+		if (idx_timer > idx_hit) {
+			misses += SPIKE;
+			if (idx_hit >= 0)
+				cpu_data->states[idx_hit].early_hits += SPIKE;
+		} else {
+			hits += SPIKE;
+		}
+
+		cpu_data->states[idx_timer].misses = misses;
+		cpu_data->states[idx_timer].hits = hits;
+	}
+
+	/*
+	 * Save idle duration values corresponding to non-timer wakeups for
+	 * pattern detection.
+	 *
+	 * If the total time span between idle state selection and the "reflect"
+	 * callback is greater than or equal to the sleep length determined at
+	 * the idle state selection time, the wakeup is likely to be due to a
+	 * timer event.
+	 */
+	if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns)
+		measured_us = UINT_MAX;
+
+	cpu_data->intervals[cpu_data->interval_idx++] = measured_us;
+	if (cpu_data->interval_idx >= INTERVALS)
+		cpu_data->interval_idx = 0;
+}
+
+/**
+ * teo_idle_duration - Estimate the duration of the upcoming CPU idle time.
+ * @drv: cpuidle driver containing state data.
+ * @cpu_data: Governor data for the target CPU.
+ * @sleep_length_us: Time till the closest timer event in microseconds.
+ */
+unsigned int teo_idle_duration(struct cpuidle_driver *drv,
+			       struct teo_cpu *cpu_data,
+			       unsigned int sleep_length_us)
+{
+	u64 range, max_spread, max, sum;
+	unsigned int count;
+
+	/*
+	 * If the sleep length is below the target residency of idle state 1,
+	 * the only viable choice is to select the first available (enabled)
+	 * idle state, so return immediately in that case.
+	 */
+	if (sleep_length_us < drv->states[1].target_residency)
+		return sleep_length_us;
+
+	/*
+	 * The purpose of this function is to check if there is a pattern of
+	 * wakeups indicating that it would be better to select a state
+	 * shallower than the deepest one matching the sleep length or the
+	 * deepest one at all if the sleep length is long.  Larger idle duration
+	 * values are beyond the interesting range.
+	 *
+	 * Narrowing the range of interesting values down upfront also helps to
+	 * avoid overflows during the computation below.
+	 */
+	range = drv->states[drv->state_count-1].target_residency;
+	range = min_t(u64, sleep_length_us, range + (range >> 2));
+
+	/*
+	 * This is the value to compare with the distance between the average
+	 * and the greatest sample to decide whether or not it is small enough.
+	 * Take 10 us as the total cap of it.
+	 */
+	max_spread = max_t(u64, range >> MAX_SPREAD_SHIFT, 10);
+
+	max = range;
+
+	do {
+		u64 cap = max;
+		int i;
+
+		/*
+		 * Compute the sum of the saved intervals below the cap.
+		 * Count them and find the maximum interval below the
+		 * cap.
+		 */
+		count = 0;
+		sum = 0;
+		max = 0;
+
+		for (i = 0; i < INTERVALS; i++) {
+			u64 val = cpu_data->intervals[i];
+
+			if (val >= cap)
+				continue;
+
+			count++;
+			sum += val;
+			if (max < val)
+				max = val;
+		}
+
+		/*
+		 * Give up if the total number of interesting samples is too
+		 * small.
+		 */
+		if (cap == range && count <= INTERVALS / 2)
+			return sleep_length_us;
+
+		/*
+		 * If the distance between the max and the average is too large,
+		 * discard the max and repeat.
+		 */
+	} while (count > 3 && max > max_spread && (max - max_spread) * count > sum);
+
+	return div64_u64(sum, count);
+}
+
+/**
+ * teo_select - Selects the next idle state to enter.
+ * @drv: cpuidle driver containing state data.
+ * @dev: Target CPU.
+ * @stop_tick: Indication on whether or not to stop the scheduler tick.
+ */
+static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+		      bool *stop_tick)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	int latency_req = cpuidle_governor_latency_req(dev->cpu);
+	unsigned int sleep_length_us, duration_us;
+	unsigned int max_early_count;
+	int max_early_idx, idx, i;
+	ktime_t delta_tick;
+
+	if (cpu_data->last_state >= 0) {
+		teo_update(drv, dev);
+		cpu_data->last_state = -1;
+	}
+
+	cpu_data->time_span_ns = local_clock();
+
+	cpu_data->sleep_length_ns = tick_nohz_get_sleep_length(&delta_tick);
+	sleep_length_us = ktime_to_us(cpu_data->sleep_length_ns);
+
+	duration_us = teo_idle_duration(drv, cpu_data, sleep_length_us);
+
+	/*
+	 * If the time needed to enter and exit the idle state matching the
+	 * expected idle duration is comparable with the expected idle duration
+	 * itself, the time to spend in that state is likely to be small, so it
+	 * probably is better to select a shallower state then.  Tweak the
+	 * latency limit to enforce that.
+	 */
+	if (duration_us < latency_req)
+		latency_req = duration_us;
+
+	max_early_count = 0;
+	max_early_idx = -1;
+	idx = -1;
+
+	for (i = 0; i < drv->state_count; i++) {
+		struct cpuidle_state *s = &drv->states[i];
+		struct cpuidle_state_usage *su = &dev->states_usage[i];
+
+		if (s->disabled || su->disable) {
+			/*
+			 * If the "early hits" metric of a disabled state is
+			 * greater than the current maximum, it should be taken
+			 * into account, because it would be a mistake to select
+			 * a deeper state with lower "early hits" metric.  The
+			 * index cannot be changed to point to it, however, so
+			 * just increase the max count alone and let the index
+			 * still point to a shallower idle state.
+			 */
+			if (max_early_idx >= 0 &&
+			    max_early_count < cpu_data->states[i].early_hits)
+				max_early_count = cpu_data->states[i].early_hits;
+
+			continue;
+		}
+
+		if (idx < 0)
+			idx = i; /* first enabled state */
+
+		if (s->target_residency > duration_us) {
+			/*
+			 * If the next wakeup is expected to be "early", the
+			 * time frame of it is known already.
+			 */
+			if (duration_us < sleep_length_us)
+				break;
+
+			/*
+			 * If the "hits" metric of the state matching the sleep
+			 * length is greater than its "misses" metric, that is
+			 * the one to use.
+			 */
+			if (cpu_data->states[idx].hits >= cpu_data->states[idx].misses)
+				break;
+
+			/*
+			 * It is more likely that one of the shallower states
+			 * will match the idle duration measured after wakeup,
+			 * so take the one with the maximum "early hits" metric,
+			 * but if that cannot be determined, just use the state
+			 * selected so far.
+			 */
+			if (max_early_idx >= 0) {
+				idx = max_early_idx;
+				duration_us = drv->states[idx].target_residency;
+			}
+			break;
+		}
+		if (s->exit_latency > latency_req) {
+			/*
+			 * If we break out of the loop for latency reasons, use
+			 * the target residency of the selected state as the
+			 * expected idle duration to avoid stopping the tick
+			 * as long as that target residency is low enough.
+			 */
+			duration_us = drv->states[idx].target_residency;
+			break;
+		}
+
+		idx = i;
+
+		if (max_early_count < cpu_data->states[i].early_hits) {
+			max_early_count = cpu_data->states[i].early_hits;
+			max_early_idx = i;
+		}
+	}
+
+	if (idx < 0)
+		idx = 0; /* No states enabled. Must use 0. */
+
+	/*
+	 * Don't stop the tick if the selected state is a polling one or if the
+	 * expected idle duration is shorter than the tick period length.
+	 */
+	if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
+	    duration_us < TICK_USEC) && !tick_nohz_tick_stopped()) {
+		unsigned int delta_tick_us = ktime_to_us(delta_tick);
+
+		*stop_tick = false;
+
+		if (idx > 0 && drv->states[idx].target_residency > delta_tick_us) {
+			/*
+			 * The tick is not going to be stopped and the target
+			 * residency of the state to be returned is not within
+			 * the time until the closest timer event including the
+			 * tick, so try to correct that.
+			 */
+			for (i = idx - 1; i > 0; i--) {
+				if (drv->states[i].disabled ||
+				    dev->states_usage[i].disable)
+					continue;
+
+				if (drv->states[i].target_residency <= delta_tick_us)
+					break;
+			}
+			idx = i;
+		}
+	}
+
+	return idx;
+}
+
+/**
+ * teo_reflect - Note that governor data for the CPU need to be updated.
+ * @dev: Target CPU.
+ * @state: Entered state.
+ */
+static void teo_reflect(struct cpuidle_device *dev, int state)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+
+	cpu_data->last_state = state;
+	/*
+	 * If the wakeup was not "natural", but triggered by one of the safety
+	 * nets, assume that the CPU might have been idle for the entire sleep
+	 * length time.
+	 */
+	if (dev->poll_time_limit ||
+	    (tick_nohz_idle_got_tick() && cpu_data->sleep_length_ns > TICK_NSEC))
+		cpu_data->time_span_ns = cpu_data->sleep_length_ns;
+	else
+		cpu_data->time_span_ns = local_clock() - cpu_data->time_span_ns;
+}
+
+/**
+ * teo_enable_device - Initialize the governor's data for the target CPU.
+ * @drv: cpuidle driver (not used).
+ * @dev: Target CPU.
+ */
+static int teo_enable_device(struct cpuidle_driver *drv,
+			     struct cpuidle_device *dev)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	int i;
+
+	memset(cpu_data, 0, sizeof(*cpu_data));
+
+	for (i = 0; i < INTERVALS; i++)
+		cpu_data->intervals[i] = UINT_MAX;
+
+	return 0;
+}
+
+static struct cpuidle_governor teo_governor = {
+	.name =		"teo",
+	.rating =	22,
+	.enable =	teo_enable_device,
+	.select =	teo_select,
+	.reflect =	teo_reflect,
+};
+
+static int __init teo_governor_init(void)
+{
+	return cpuidle_register_governor(&teo_governor);
+}
+
+postcore_initcall(teo_governor_init);
Index: linux-pm/drivers/cpuidle/Kconfig
===================================================================
--- linux-pm.orig/drivers/cpuidle/Kconfig
+++ linux-pm/drivers/cpuidle/Kconfig
@@ -23,6 +23,17 @@ config CPU_IDLE_GOV_LADDER
 config CPU_IDLE_GOV_MENU
 	bool "Menu governor (for tickless system)"
 
+config CPU_IDLE_GOV_TEO
+	bool "Timer events oriented governor (for tickless systems)"
+	help
+	  Menu governor derivative that uses a simplified idle state
+	  selection method focused on timer events and does not do any
+	  interactivity boosting.
+
+	  Some workloads benefit from using this governor and it generally
+	  should be safe to use.  Say Y here if you are not happy with the
+	  alternatives.
+
 config DT_IDLE_STATES
 	bool
 
Index: linux-pm/drivers/cpuidle/governors/Makefile
===================================================================
--- linux-pm.orig/drivers/cpuidle/governors/Makefile
+++ linux-pm/drivers/cpuidle/governors/Makefile
@@ -4,3 +4,4 @@
 
 obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
 obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
+obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o



* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-04 16:31 [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems Rafael J. Wysocki
@ 2018-11-05 19:32 ` Giovanni Gherdovich
  2018-11-06 14:55   ` Rafael J. Wysocki
  2018-11-06 17:04 ` Peter Zijlstra
  1 sibling, 1 reply; 22+ messages in thread
From: Giovanni Gherdovich @ 2018-11-05 19:32 UTC (permalink / raw)
  To: Rafael J. Wysocki, Linux PM, Doug Smythies
  Cc: Srinivas Pandruvada, Peter Zijlstra, LKML, Frederic Weisbecker,
	Mel Gorman, Daniel Lezcano

On Sun, 2018-11-04 at 17:31 +0100, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
[cut]
> 
> +/**
> + * teo_update - Update CPU data after wakeup.
> + * @drv: cpuidle driver containing state data.
> + * @dev: Target CPU.
> + */
> +static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
> +{
> +       struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
> +       unsigned int sleep_length_us = ktime_to_us(cpu_data->sleep_length_ns);
> +       int i, idx_hit = -1, idx_timer = -1;
> +       unsigned int measured_us;
> +
> +       if (cpu_data->time_span_ns == cpu_data->sleep_length_ns) {
> +               /* One of the safety nets has triggered (most likely). */
> +               measured_us = sleep_length_us;
> +       } else {
> +               measured_us = dev->last_residency;
> +               i = cpu_data->last_state;
> +               if (measured_us >= 2 * drv->states[i].exit_latency)
> +                       measured_us -= drv->states[i].exit_latency;
> +               else
> +                       measured_us /= 2;
> +       }
> +

I haven't read this v3 yet, so just a little comment on the bit above (which
is there since v1).

When you check for measured_us >= 2 * exit_latency, is that because
dev->last_residency is composed of an "entry" latency, then the actual
residency, and finally the exit_latency? I'm asking about the 2x factor
there.

If that succeeds, you proceed to remove the exit_latency from
measured_us... just once. Given how the condition is formulated, I expected
measured_us -= 2 * exit_latency there.

More: you acknowledge, in that snippet of code, that there can be
dev->last_residency's smaller than twice the exit_latency, i.e. not even the
time to enter/exit the state. Am I reading this right? Is that because both
exit_latency and dev->last_residency are only approximations?

I actually see quite a few of those extra-short residencies in my traces, even
with dev->last_residency < exit_latency.


Giovanni

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-05 19:32 ` Giovanni Gherdovich
@ 2018-11-06 14:55   ` Rafael J. Wysocki
  0 siblings, 0 replies; 22+ messages in thread
From: Rafael J. Wysocki @ 2018-11-06 14:55 UTC (permalink / raw)
  To: Giovanni Gherdovich
  Cc: Linux PM, Doug Smythies, Srinivas Pandruvada, Peter Zijlstra,
	LKML, Frederic Weisbecker, Mel Gorman, Daniel Lezcano

On Monday, November 5, 2018 8:32:51 PM CET Giovanni Gherdovich wrote:
> On Sun, 2018-11-04 at 17:31 +0100, Rafael J. Wysocki wrote:
> > From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > 

[cut]

> > +
> > +/**
> > + * teo_update - Update CPU data after wakeup.
> > + * @drv: cpuidle driver containing state data.
> > + * @dev: Target CPU.
> > + */
> > +static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
> > +{
> > +       struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
> > +       unsigned int sleep_length_us = ktime_to_us(cpu_data->sleep_length_ns);
> > +       int i, idx_hit = -1, idx_timer = -1;
> > +       unsigned int measured_us;
> > +
> > +       if (cpu_data->time_span_ns == cpu_data->sleep_length_ns) {
> > +               /* One of the safety nets has triggered (most likely). */
> > +               measured_us = sleep_length_us;
> > +       } else {
> > +               measured_us = dev->last_residency;
> > +               i = cpu_data->last_state;
> > +               if (measured_us >= 2 * drv->states[i].exit_latency)
> > +                       measured_us -= drv->states[i].exit_latency;
> > +               else
> > +                       measured_us /= 2;
> > +       }
> > +
> 
> I haven't read this v3 yet,

The pattern detection code in it has a minor issue that needs addressing, so
I'm going to send a v4 later today (after testing it somewhat).

> so just a little comment on the bit above (which
> is there since v1).

To be precise, this piece is there in the menu governor too and I thought that
it made sense, so I copied it from there. :-)

> When you check for measured_us >= 2 * exit_latency, is that because
> dev->last_residency is composed of an "entry" latency, then the actual
> residency, and finally the exit_latency? I'm asking about the 2x factor
> there.

If measured_us is less than twice the state's exit latency, the difference
between it and the exit latency may be very small, so in that case it is
better to take half of measured_us as an upper-bound estimate of the idle
duration, to allow for inaccuracies in the measurement.

> If that succeeds, you proceed to remove the exit_latency from
> measured_us... just once. Given how the condition is formulated, I expected
> measured_us -= 2 * exit_latency there.

The exit latency is subtracted once, because we are interested in the time
between asking the hardware to enter the idle state and the wakeup event.

The exit latency is the time between the wakeup event and the first instruction,
so it shouldn't be counted, roughly speaking.

> More: you acknowledge, in that snippet of code, that there can be
> dev->last_residency's smaller than twice the exit_latency, i.e. not even the
> time to enter/exit the state. Am I reading this right? Is that because both
> exit_latency and dev->last_residency are only approximations?

Yes, they are just approximations.

> I actually see quite a few of those extra-short residencies in my traces, even
> with dev->last_residency < exit_latency.

Well, they are expected to occur at least occasionally.

Cheers,
Rafael


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-04 16:31 [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems Rafael J. Wysocki
  2018-11-05 19:32 ` Giovanni Gherdovich
@ 2018-11-06 17:04 ` Peter Zijlstra
  2018-11-06 18:19   ` Rafael J. Wysocki
  1 sibling, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2018-11-06 17:04 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, LKML, Frederic Weisbecker, Mel Gorman,
	Daniel Lezcano

On Sun, Nov 04, 2018 at 05:31:20PM +0100, Rafael J. Wysocki wrote:
> + * - If there is a pattern of 5 or more recent non-timer wakeups earlier than
> + *   the closest timer event, expect one more of them to occur and use the
> + *   average of the idle duration values corresponding to them to select an
> + *   idle state for the CPU.


> +/**
> + * teo_idle_duration - Estimate the duration of the upcoming CPU idle time.
> + * @drv: cpuidle driver containing state data.
> + * @cpu_data: Governor data for the target CPU.
> + * @sleep_length_us: Time till the closest timer event in microseconds.
> + */
> +unsigned int teo_idle_duration(struct cpuidle_driver *drv,
> +			       struct teo_cpu *cpu_data,
> +			       unsigned int sleep_length_us)
> +{
> +	u64 range, max_spread, max, sum;
> +	unsigned int count;
> +
> +	/*
> +	 * If the sleep length is below the target residency of idle state 1,
> +	 * the only viable choice is to select the first available (enabled)
> +	 * idle state, so return immediately in that case.
> +	 */
> +	if (sleep_length_us < drv->states[1].target_residency)
> +		return sleep_length_us;
> +
> +	/*
> +	 * The purpose of this function is to check if there is a pattern of
> +	 * wakeups indicating that it would be better to select a state
> +	 * shallower than the deepest one matching the sleep length or the
> +	 * deepest one at all if the sleep length is long.  Larger idle duration
> +	 * values are beyond the interesting range.
> +	 *
> +	 * Narrowing the range of interesting values down upfront also helps to
> +	 * avoid overflows during the computation below.
> +	 */
> +	range = drv->states[drv->state_count-1].target_residency;
> +	range = min_t(u64, sleep_length_us, range + (range >> 2));
> +
> +	/*
> +	 * This is the value to compare with the distance between the average
> +	 * and the greatest sample to decide whether or not it is small enough.
> +	 * Use 10 us as its lower bound.
> +	 */
> +	max_spread = max_t(u64, range >> MAX_SPREAD_SHIFT, 10);
> +
> +	max = range;
> +
> +	do {
> +		u64 cap = max;
> +		int i;
> +
> +		/*
> +		 * Compute the sum of the saved intervals below the cap.
> +		 * Count them and find the maximum
> +		 * interval below the cap.
> +		 */
> +		count = 0;
> +		sum = 0;
> +		max = 0;
> +
> +		for (i = 0; i < INTERVALS; i++) {
> +			u64 val = cpu_data->intervals[i];
> +
> +			if (val >= cap)
> +				continue;
> +
> +			count++;
> +			sum += val;
> +			if (max < val)
> +				max = val;
> +		}
> +
> +		/*
> +		 * Give up if the total number of interesting samples is too
> +		 * small.
> +		 */
> +		if (cap == range && count <= INTERVALS / 2)
> +			return sleep_length_us;
> +
> +		/*
> +		 * If the distance between the max and the average is too large,
> +		 * discard the max and repeat.
> +		 */
> +	} while (count > 3 && max > max_spread && (max - max_spread) * count > sum);
> +
> +	return div64_u64(sum, count);
> +}

Instead of this detector; why haven't you used the code from
kernel/irq/timings.c ?

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-06 17:04 ` Peter Zijlstra
@ 2018-11-06 18:19   ` Rafael J. Wysocki
  2018-11-06 19:51     ` Peter Zijlstra
  0 siblings, 1 reply; 22+ messages in thread
From: Rafael J. Wysocki @ 2018-11-06 18:19 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rafael J. Wysocki, Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, Linux Kernel Mailing List,
	Frederic Weisbecker, Mel Gorman, Daniel Lezcano

On Tue, Nov 6, 2018 at 6:04 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Sun, Nov 04, 2018 at 05:31:20PM +0100, Rafael J. Wysocki wrote:
> > + * - If there is a pattern of 5 or more recent non-timer wakeups earlier than
> > + *   the closest timer event, expect one more of them to occur and use the
> > + *   average of the idle duration values corresponding to them to select an
> > + *   idle state for the CPU.
>
>
> > +/**
> > + * teo_idle_duration - Estimate the duration of the upcoming CPU idle time.
> > + * @drv: cpuidle driver containing state data.
> > + * @cpu_data: Governor data for the target CPU.
> > + * @sleep_length_us: Time till the closest timer event in microseconds.
> > + */
> > +unsigned int teo_idle_duration(struct cpuidle_driver *drv,
> > +                            struct teo_cpu *cpu_data,
> > +                            unsigned int sleep_length_us)
> > +{
> > +     u64 range, max_spread, max, sum;
> > +     unsigned int count;
> > +
> > +     /*
> > +      * If the sleep length is below the target residency of idle state 1,
> > +      * the only viable choice is to select the first available (enabled)
> > +      * idle state, so return immediately in that case.
> > +      */
> > +     if (sleep_length_us < drv->states[1].target_residency)
> > +             return sleep_length_us;
> > +
> > +     /*
> > +      * The purpose of this function is to check if there is a pattern of
> > +      * wakeups indicating that it would be better to select a state
> > +      * shallower than the deepest one matching the sleep length or the
> > +      * deepest one at all if the sleep length is long.  Larger idle duration
> > +      * values are beyond the interesting range.
> > +      *
> > +      * Narrowing the range of interesting values down upfront also helps to
> > +      * avoid overflows during the computation below.
> > +      */
> > +     range = drv->states[drv->state_count-1].target_residency;
> > +     range = min_t(u64, sleep_length_us, range + (range >> 2));
> > +
> > +     /*
> > +      * This is the value to compare with the distance between the average
> > +      * and the greatest sample to decide whether or not it is small enough.
> > +      * Take 10 us as the total cap of it.
> > +      */
> > +     max_spread = max_t(u64, range >> MAX_SPREAD_SHIFT, 10);
> > +
> > +     max = range;
> > +
> > +     do {
> > +             u64 cap = max;
> > +             int i;
> > +
> > +             /*
> > +              * Compute the sum of the saved intervals below the cap and the
> > +              * sum of of their squares.  Count them and find the maximum
> > +              * interval below the cap.
> > +              */
> > +             count = 0;
> > +             sum = 0;
> > +             max = 0;
> > +
> > +             for (i = 0; i < INTERVALS; i++) {
> > +                     u64 val = cpu_data->intervals[i];
> > +
> > +                     if (val >= cap)
> > +                             continue;
> > +
> > +                     count++;
> > +                     sum += val;
> > +                     if (max < val)
> > +                             max = val;
> > +             }
> > +
> > +             /*
> > +              * Give up if the total number of interesting samples is too
> > +              * small.
> > +              */
> > +             if (cap == range && count <= INTERVALS / 2)
> > +                     return sleep_length_us;
> > +
> > +             /*
> > +              * If the distance between the max and the average is too large,
> > +              * discard the max and repeat.
> > +              */
> > +     } while (count > 3 && max > max_spread && (max - max_spread) * count > sum);
> > +
> > +     return div64_u64(sum, count);
> > +}
>
> Instead of this detector; why haven't you used the code from
> kernel/irq/timings.c ?

Because it doesn't help much AFAICS.

Wakeups need not be interrupts in particular and interrupt patterns
that show up when the CPU is busy may not be relevant for when it is
idle.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-06 18:19   ` Rafael J. Wysocki
@ 2018-11-06 19:51     ` Peter Zijlstra
  2018-11-06 23:39       ` Rafael J. Wysocki
  0 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2018-11-06 19:51 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, Linux Kernel Mailing List,
	Frederic Weisbecker, Mel Gorman, Daniel Lezcano

On Tue, Nov 06, 2018 at 07:19:24PM +0100, Rafael J. Wysocki wrote:
> On Tue, Nov 6, 2018 at 6:04 PM Peter Zijlstra <peterz@infradead.org> wrote:

> > Instead of this detector; why haven't you used the code from
> > kernel/irq/timings.c ?
> 
> Because it doesn't help much AFAICS.
> 
> Wakeups need not be interrupts in particular 

You're alluding to the MWAIT wakeup through the MONITOR address ?

> and interrupt patterns that show up when the CPU is busy may not be
> relevant for when it is idle.

I think that is not always true; consider things like the periodic
interrupt from frame rendering or audio; if there is nothing more going
on in the system than say playing your favourite tune, it gets the
'need more data soon' interrupt from the audio card, wakes up, does a little
mp3/flac/ogg/whatever decode to fill up the buffer and goes back to
sleep. Same for video playback I assume, the vsync interrupt for buffer
flips is fairly predictable.

The interrupt predictor we have in kernel/irq/timings.c should be very
accurate in predicting those interrupts.



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-06 19:51     ` Peter Zijlstra
@ 2018-11-06 23:39       ` Rafael J. Wysocki
  2018-11-07  8:59         ` Peter Zijlstra
                           ` (2 more replies)
  0 siblings, 3 replies; 22+ messages in thread
From: Rafael J. Wysocki @ 2018-11-06 23:39 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM,
	Giovanni Gherdovich, Doug Smythies, Srinivas Pandruvada,
	Linux Kernel Mailing List, Frederic Weisbecker, Mel Gorman,
	Daniel Lezcano

On Tue, Nov 6, 2018 at 8:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Nov 06, 2018 at 07:19:24PM +0100, Rafael J. Wysocki wrote:
> > On Tue, Nov 6, 2018 at 6:04 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> > > Instead of this detector; why haven't you used the code from
> > > kernel/irq/timings.c ?
> >
> > Because it doesn't help much AFAICS.
> >
> > Wakeups need not be interrupts in particular
>
> You're alluding to the MWAIT wakeup through the MONITOR address ?

Yes.

> > and interrupt patterns that show up when the CPU is busy may not be
> > relevant for when it is idle.
>
> I think that is not always true; consider things like the periodic
> interrupt from frame rendering or audio; if there is nothing more going
> on in the system than say playing your favourite tune, it gets the
> 'need more data soon' interrupt from the audio card, wakes up, does a little
> mp3/flac/ogg/whatever decode to fill up the buffer and goes back to
> sleep. Same for video playback I assume, the vsync interrupt for buffer
> flips is fairly predictable.
>
> The interrupt predictor we have in kernel/irq/timings.c should be very
> accurate in predicting those interrupts.

In the above case the interrupts should produce a detectable pattern
of wakeups anyway.

In general, however, I need to be convinced that interrupts that
didn't wake up the CPU from idle are relevant for next wakeup
prediction.  I see that this may be the case, but to what extent is
rather unclear to me and it looks like calling
irq_timings_next_event() would add considerable overhead.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-06 23:39       ` Rafael J. Wysocki
@ 2018-11-07  8:59         ` Peter Zijlstra
  2018-11-07  9:46           ` [PATCH] irq/timings: Fix model validity Peter Zijlstra
  2018-11-07 12:09             ` Rafael J. Wysocki
  2018-11-07 10:09         ` [RFC][PATCH] irq/timings: Ignore predictions in the past Peter Zijlstra
  2018-11-07 10:13         ` [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems Daniel Lezcano
  2 siblings, 2 replies; 22+ messages in thread
From: Peter Zijlstra @ 2018-11-07  8:59 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, Linux Kernel Mailing List,
	Frederic Weisbecker, Mel Gorman, Daniel Lezcano

On Wed, Nov 07, 2018 at 12:39:31AM +0100, Rafael J. Wysocki wrote:
> On Tue, Nov 6, 2018 at 8:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Tue, Nov 06, 2018 at 07:19:24PM +0100, Rafael J. Wysocki wrote:
> > > On Tue, Nov 6, 2018 at 6:04 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > > > Instead of this detector; why haven't you used the code from
> > > > kernel/irq/timings.c ?
> > >
> > > Because it doesn't help much AFAICS.
> > >
> > > Wakeups need not be interrupts in particular
> >
> > You're alluding to the MWAIT wakeup through the MONITOR address ?
> 
> Yes.

Right, those will not be accounted for and will need something else.

> > > and interrupt patterns that show up when the CPU is busy may not be
> > > relevant for when it is idle.
> >
> > I think that is not always true; consider things like the periodic
> > interrupt from frame rendering or audio; if there is nothing more going
> > on in the system than say playing your favourite tune, it gets the
> > 'need more data soon' interrupt from the audio card, wakes up, does a little
> > mp3/flac/ogg/whatever decode to fill up the buffer and goes back to
> > sleep. Same for video playback I assume, the vsync interrupt for buffer
> > flips is fairly predictable.
> >
> > The interrupt predictor we have in kernel/irq/timings.c should be very
> > accurate in predicting those interrupts.
> 
> In the above case the interrupts should produce a detectable pattern
> of wakeups anyway.

Ah, not so. Suppose you have both the audio and video interrupts going at
steady but different rates; then the combined pattern isn't
trivial at all.

> In general, however, I need to be convinced that interrupts that
> didn't wake up the CPU from idle are relevant for next wakeup
> prediction.  I see that this may be the case, but to what extent is
> rather unclear to me and it looks like calling
> irq_timings_next_event() would add considerable overhead.

How about we add a (debug) knob so that people can play with it for now?
If it turns out to be useful, we'll learn.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH] irq/timings: Fix model validity
  2018-11-07  8:59         ` Peter Zijlstra
@ 2018-11-07  9:46           ` Peter Zijlstra
  2018-11-07 10:52             ` Daniel Lezcano
  2018-11-07 12:09             ` Rafael J. Wysocki
  1 sibling, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2018-11-07  9:46 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, Linux Kernel Mailing List,
	Frederic Weisbecker, Mel Gorman, Daniel Lezcano, Nicolas Pitre

On Wed, Nov 07, 2018 at 09:59:36AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 07, 2018 at 12:39:31AM +0100, Rafael J. Wysocki wrote:

> > In general, however, I need to be convinced that interrupts that
> > didn't wake up the CPU from idle are relevant for next wakeup
> > prediction.  I see that this may be the case, but to what extent is
> > rather unclear to me and it looks like calling
> > irq_timings_next_event() would add considerable overhead.
> 
> How about we add a (debug) knob so that people can play with it for now?
> If it turns out to be useful, we'll learn.

That said; Daniel, I think there is a problem with how irqs_update()
sets irqs->valid. We seem to set valid even when we're still training.

---
Subject: irq/timings: Fix model validity

The per IRQ timing predictor will produce a 'valid' prediction even if
the model is still training. This should not happen.

Fix this by moving the actual training (online stddev algorithm) up a
bit and returning early (before predicting) when we've not yet reached
the sample threshold.

A direct consequence is that the predictor will only ever run with at
least that many samples, which means we can remove one branch.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/irq/timings.c | 66 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 37 insertions(+), 29 deletions(-)

diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
index 1e4cb63a5c82..5d22fd5facd5 100644
--- a/kernel/irq/timings.c
+++ b/kernel/irq/timings.c
@@ -28,6 +28,13 @@ struct irqt_stat {
 	int	valid;
 };
 
+/*
+ * The rule of thumb in statistics for the normal distribution
+ * is having at least 30 samples in order to have the model to
+ * apply.
+ */
+#define SAMPLE_THRESHOLD	30
+
 static DEFINE_IDR(irqt_stats);
 
 void irq_timings_enable(void)
@@ -101,7 +108,6 @@ void irq_timings_disable(void)
  * distribution appears when the number of samples is 30 (it is the
  * rule of thumb in statistics, cf. "30 samples" on Internet). When
  * there are three consecutive anomalies, the statistics are resetted.
- *
  */
 static void irqs_update(struct irqt_stat *irqs, u64 ts)
 {
@@ -146,11 +152,38 @@ static void irqs_update(struct irqt_stat *irqs, u64 ts)
 	 */
 	diff = interval - irqs->avg;
 
+	/*
+	 * Online average algorithm:
+	 *
+	 *  new_average = average + ((value - average) / count)
+	 *
+	 * The variance computation depends on the new average
+	 * to be computed here first.
+	 *
+	 */
+	irqs->avg = irqs->avg + (diff >> IRQ_TIMINGS_SHIFT);
+
+	/*
+	 * Online variance algorithm:
+	 *
+	 *  new_variance = variance + (value - average) x (value - new_average)
+	 *
+	 * Warning: irqs->avg is updated with the line above, hence
+	 * 'interval - irqs->avg' is no longer equal to 'diff'
+	 */
+	irqs->variance = irqs->variance + (diff * (interval - irqs->avg));
+
 	/*
 	 * Increment the number of samples.
 	 */
 	irqs->nr_samples++;
 
+	/*
+	 * If we're still training the model, we can't make any predictions yet.
+	 */
+	if (irqs->nr_samples < SAMPLE_THRESHOLD)
+		return;
+
 	/*
 	 * Online variance divided by the number of elements if there
 	 * is more than one sample.  Normally the formula is division
@@ -158,16 +191,12 @@ static void irqs_update(struct irqt_stat *irqs, u64 ts)
 	 * more than 32 and dividing by 32 instead of 31 is enough
 	 * precise.
 	 */
-	if (likely(irqs->nr_samples > 1))
-		variance = irqs->variance >> IRQ_TIMINGS_SHIFT;
+	variance = irqs->variance >> IRQ_TIMINGS_SHIFT;
 
 	/*
-	 * The rule of thumb in statistics for the normal distribution
-	 * is having at least 30 samples in order to have the model to
-	 * apply. Values outside the interval are considered as an
-	 * anomaly.
+	 * Values outside the interval are considered as an anomaly.
 	 */
-	if ((irqs->nr_samples >= 30) && ((diff * diff) > (9 * variance))) {
+	if ((diff * diff) > (9 * variance)) {
 		/*
 		 * After three consecutive anomalies, we reset the
 		 * stats as it is no longer stable enough.
@@ -191,27 +220,6 @@ static void irqs_update(struct irqt_stat *irqs, u64 ts)
 	 */
 	irqs->valid = 1;
 
-	/*
-	 * Online average algorithm:
-	 *
-	 *  new_average = average + ((value - average) / count)
-	 *
-	 * The variance computation depends on the new average
-	 * to be computed here first.
-	 *
-	 */
-	irqs->avg = irqs->avg + (diff >> IRQ_TIMINGS_SHIFT);
-
-	/*
-	 * Online variance algorithm:
-	 *
-	 *  new_variance = variance + (value - average) x (value - new_average)
-	 *
-	 * Warning: irqs->avg is updated with the line above, hence
-	 * 'interval - irqs->avg' is no longer equal to 'diff'
-	 */
-	irqs->variance = irqs->variance + (diff * (interval - irqs->avg));
-
 	/*
 	 * Update the next event
 	 */

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [RFC][PATCH] irq/timings: Ignore predictions in the past
  2018-11-06 23:39       ` Rafael J. Wysocki
  2018-11-07  8:59         ` Peter Zijlstra
@ 2018-11-07 10:09         ` Peter Zijlstra
  2018-11-07 10:13         ` [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems Daniel Lezcano
  2 siblings, 0 replies; 22+ messages in thread
From: Peter Zijlstra @ 2018-11-07 10:09 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, Linux Kernel Mailing List,
	Frederic Weisbecker, Mel Gorman, Daniel Lezcano, Nicolas Pitre

On Wed, Nov 07, 2018 at 12:39:31AM +0100, Rafael J. Wysocki wrote:
> In general, however, I need to be convinced that interrupts that
> didn't wake up the CPU from idle are relevant for next wakeup
> prediction.

So you're worried about the case where we're 100% busy and generating IO
interrupts (disk, network, etc..) and then go idle and those interrupts
stop happening, but we'll base a prediction off of them?

Any predictor will have some of those transients, but note that we'll
only actually train the irq timing thing when there is idle time. If
we're 100% busy, we'll never actually consume the data.

Also, I think the below delta (on top of my earlier patch) would make
sense; as I don't see how the current code makes sense there.

--- a/kernel/irq/timings.c
+++ b/kernel/irq/timings.c
@@ -306,22 +306,17 @@ u64 irq_timings_next_event(u64 now)
 			continue;
 
 		if (irqs->next_evt <= now) {
-			irq = i;
-			next_evt = now;
-
 			/*
-			 * This interrupt mustn't use in the future
-			 * until new events occur and update the
-			 * statistics.
+			 * If the last prediction was in the past, mark the IRQ
+			 * as invalid until a next interrupt updates the
+			 * prediction.
 			 */
 			irqs->valid = 0;
-			break;
+			continue;
 		}
 
-		if (irqs->next_evt < next_evt) {
-			irq = i;
+		if (irqs->next_evt < next_evt)
 			next_evt = irqs->next_evt;
-		}
 	}
 
 	return next_evt;

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-06 23:39       ` Rafael J. Wysocki
  2018-11-07  8:59         ` Peter Zijlstra
  2018-11-07 10:09         ` [RFC][PATCH] irq/timings: Ignore predictions in the past Peter Zijlstra
@ 2018-11-07 10:13         ` Daniel Lezcano
  2 siblings, 0 replies; 22+ messages in thread
From: Daniel Lezcano @ 2018-11-07 10:13 UTC (permalink / raw)
  To: Rafael J. Wysocki, Peter Zijlstra
  Cc: Rafael J. Wysocki, Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, Linux Kernel Mailing List,
	Frederic Weisbecker, Mel Gorman

On 07/11/2018 00:39, Rafael J. Wysocki wrote:
> On Tue, Nov 6, 2018 at 8:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
>>
>> On Tue, Nov 06, 2018 at 07:19:24PM +0100, Rafael J. Wysocki wrote:
>>> On Tue, Nov 6, 2018 at 6:04 PM Peter Zijlstra <peterz@infradead.org> wrote:
>>
>>>> Instead of this detector; why haven't you used the code from
>>>> kernel/irq/timings.c ?
>>>
>>> Because it doesn't help much AFAICS.
>>>
>>> Wakeups need not be interrupts in particular
>>
>> You're alluding to the MWAIT wakeup through the MONITOR address ?
> 
> Yes.
> 
>>> and interrupt patterns that show up when the CPU is busy may not be
>>> relevant for when it is idle.
>>
>> I think that is not always true; consider things like the periodic
>> interrupt from frame rendering or audio; if there is nothing more going
>> on in the system than say playing your favourite tune, it gets the
>> 'need more data soon' interrupt from the audio card, wakes up, does a little
>> mp3/flac/ogg/whatever decode to fill up the buffer and goes back to
>> sleep. Same for video playback I assume, the vsync interrupt for buffer
>> flips is fairly predictable.
>>
>> The interrupt predictor we have in kernel/irq/timings.c should be very
>> accurate in predicting those interrupts.
> 
> In the above case the interrupts should produce a detectable pattern
> of wakeups anyway.
> 
> In general, however, I need to be convinced that interrupts that
> didn't wake up the CPU from idle are relevant for next wakeup
> prediction.  I see that this may be the case, but to what extent is
> rather unclear to me and it looks like calling
> irq_timings_next_event() would add considerable overhead.

I agree the current irq next event computation can add overhead. It
actually works well for interrupts arriving at regular intervals, but
with irregular intervals we need to search for a pattern.

The first version called 'anova' I shared with you (Peter and Rafael)
privately is pretty good to detect such intervals but the algorithm has
a cost in terms of computation in the self-learning phase. I dropped
this approach as it could be overkill.

I'm working on a different algorithm to detect such patterns which
should be faster.

That said, the pattern detection process shouldn't be happening every
time we enter idle but when the pattern is invalidated. Until the
pattern gives a successful prediction, we keep it and the repeating
period is used to give the next event.

So IMO, the cost to detect the pattern should be compensated by the
computation saving we do if the pattern is found successfully.

These are all assumptions, hopefully I can provide a patch series and
numbers in January ...



-- 
 <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

Follow Linaro:  <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH] irq/timings: Fix model validity
  2018-11-07  9:46           ` [PATCH] irq/timings: Fix model validity Peter Zijlstra
@ 2018-11-07 10:52             ` Daniel Lezcano
  2018-11-07 13:05               ` Peter Zijlstra
  0 siblings, 1 reply; 22+ messages in thread
From: Daniel Lezcano @ 2018-11-07 10:52 UTC (permalink / raw)
  To: Peter Zijlstra, Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, Linux Kernel Mailing List,
	Frederic Weisbecker, Mel Gorman, Nicolas Pitre

On 07/11/2018 10:46, Peter Zijlstra wrote:
> On Wed, Nov 07, 2018 at 09:59:36AM +0100, Peter Zijlstra wrote:
>> On Wed, Nov 07, 2018 at 12:39:31AM +0100, Rafael J. Wysocki wrote:
> 
>>> In general, however, I need to be convinced that interrupts that
>>> didn't wake up the CPU from idle are relevant for next wakeup
>>> prediction.  I see that this may be the case, but to what extent is
>>> rather unclear to me and it looks like calling
>>> irq_timings_next_event() would add considerable overhead.
>>
>> How about we add a (debug) knob so that people can play with it for now?
>> If it turns out to be useful, we'll learn.
> 
> That said; Daniel, I think there is a problem with how irqs_update()
> sets irqs->valid. We seem to set valid even when we're still training.

Yes, the fix seems right.

Thanks for fixing it.

  -- Daniel

> ---
> Subject: irq/timings: Fix model validity
> 
> The per IRQ timing predictor will produce a 'valid' prediction even if
> the model is still training. This should not happen.
> 
> Fix this by moving the actual training (online stddev algorithm) up a
> bit and returning early (before predicting) when we've not yet reached
> the sample threshold.
> 
> A direct consequence is that the predictor will only ever run with at
> least that many samples, which means we can remove one branch.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  kernel/irq/timings.c | 66 +++++++++++++++++++++++++++++-----------------------
>  1 file changed, 37 insertions(+), 29 deletions(-)
> 
> diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
> index 1e4cb63a5c82..5d22fd5facd5 100644
> --- a/kernel/irq/timings.c
> +++ b/kernel/irq/timings.c
> @@ -28,6 +28,13 @@ struct irqt_stat {
>  	int	valid;
>  };
>  
> +/*
> + * The rule of thumb in statistics for the normal distribution
> + * is having at least 30 samples in order to have the model to
> + * apply.
> + */
> +#define SAMPLE_THRESHOLD	30
> +
>  static DEFINE_IDR(irqt_stats);
>  
>  void irq_timings_enable(void)
> @@ -101,7 +108,6 @@ void irq_timings_disable(void)
>   * distribution appears when the number of samples is 30 (it is the
>   * rule of thumb in statistics, cf. "30 samples" on Internet). When
>   * there are three consecutive anomalies, the statistics are resetted.
> - *
>   */
>  static void irqs_update(struct irqt_stat *irqs, u64 ts)
>  {
> @@ -146,11 +152,38 @@ static void irqs_update(struct irqt_stat *irqs, u64 ts)
>  	 */
>  	diff = interval - irqs->avg;
>  
> +	/*
> +	 * Online average algorithm:
> +	 *
> +	 *  new_average = average + ((value - average) / count)
> +	 *
> +	 * The variance computation depends on the new average
> +	 * to be computed here first.
> +	 *
> +	 */
> +	irqs->avg = irqs->avg + (diff >> IRQ_TIMINGS_SHIFT);
> +
> +	/*
> +	 * Online variance algorithm:
> +	 *
> +	 *  new_variance = variance + (value - average) x (value - new_average)
> +	 *
> +	 * Warning: irqs->avg is updated with the line above, hence
> +	 * 'interval - irqs->avg' is no longer equal to 'diff'
> +	 */
> +	irqs->variance = irqs->variance + (diff * (interval - irqs->avg));
> +
>  	/*
>  	 * Increment the number of samples.
>  	 */
>  	irqs->nr_samples++;
>  
> +	/*
> +	 * If we're still training the model, we can't make any predictions yet.
> +	 */
> +	if (irqs->nr_samples < SAMPLE_THRESHOLD)
> +		return;
> +
>  	/*
>  	 * Online variance divided by the number of elements if there
>  	 * is more than one sample.  Normally the formula is division
> @@ -158,16 +191,12 @@ static void irqs_update(struct irqt_stat *irqs, u64 ts)
>  	 * more than 32 and dividing by 32 instead of 31 is enough
>  	 * precise.
>  	 */
> -	if (likely(irqs->nr_samples > 1))
> -		variance = irqs->variance >> IRQ_TIMINGS_SHIFT;
> +	variance = irqs->variance >> IRQ_TIMINGS_SHIFT;
>  
>  	/*
> -	 * The rule of thumb in statistics for the normal distribution
> -	 * is having at least 30 samples in order to have the model to
> -	 * apply. Values outside the interval are considered as an
> -	 * anomaly.
> +	 * Values outside the interval are considered as an anomaly.
>  	 */
> -	if ((irqs->nr_samples >= 30) && ((diff * diff) > (9 * variance))) {
> +	if ((diff * diff) > (9 * variance)) {
>  		/*
>  		 * After three consecutive anomalies, we reset the
>  		 * stats as it is no longer stable enough.
> @@ -191,27 +220,6 @@ static void irqs_update(struct irqt_stat *irqs, u64 ts)
>  	 */
>  	irqs->valid = 1;
>  
> -	/*
> -	 * Online average algorithm:
> -	 *
> -	 *  new_average = average + ((value - average) / count)
> -	 *
> -	 * The variance computation depends on the new average
> -	 * to be computed here first.
> -	 *
> -	 */
> -	irqs->avg = irqs->avg + (diff >> IRQ_TIMINGS_SHIFT);
> -
> -	/*
> -	 * Online variance algorithm:
> -	 *
> -	 *  new_variance = variance + (value - average) x (value - new_average)
> -	 *
> -	 * Warning: irqs->avg is updated with the line above, hence
> -	 * 'interval - irqs->avg' is no longer equal to 'diff'
> -	 */
> -	irqs->variance = irqs->variance + (diff * (interval - irqs->avg));
> -
>  	/*
>  	 * Update the next event
>  	 */
> 





* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-07  8:59         ` Peter Zijlstra
@ 2018-11-07 12:09             ` Rafael J. Wysocki
  0 siblings, 0 replies; 22+ messages in thread
From: Rafael J. Wysocki @ 2018-11-07 12:09 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM,
	Giovanni Gherdovich, Doug Smythies, Srinivas Pandruvada,
	Linux Kernel Mailing List, Frederic Weisbecker, Mel Gorman,
	Daniel Lezcano

On Wed, Nov 7, 2018 at 9:59 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Nov 07, 2018 at 12:39:31AM +0100, Rafael J. Wysocki wrote:
> > On Tue, Nov 6, 2018 at 8:51 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > On Tue, Nov 06, 2018 at 07:19:24PM +0100, Rafael J. Wysocki wrote:
> > > > On Tue, Nov 6, 2018 at 6:04 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > > > Instead of this detector; why haven't you used the code from
> > > > > kernel/irq/timings.c ?
> > > >
> > > > Because it doesn't help much AFAICS.
> > > >
> > > > Wakeups need not be interrupts in particular
> > >
> > > You're alluding to the MWAIT wakeup through the MONITOR address ?
> >
> > Yes.
>
> Right, those will not be accounted for and will need something else.

Precisely. :-)

> > > > and interrupt patterns that show up when the CPU is busy may not be
> > > > relevant for when it is idle.
> > >
> > > I think that is not always true; consider things like the periodic
> > > interrupt from frame rendering or audio; if there is nothing more going
> > > on in the system than say playing your favourite tune, it gets the
> > > 'need more data soon' interrupt from the audio card, wakes up, does a little
> > > mp3/flac/ogg/whatever decode to fill up the buffer and goes back to
> > > sleep. Same for video playback I assume, the vsync interrupt for buffer
> > > flips is fairly predictable.
> > >
> > > The interrupt predictor we have in kernel/irq/timings.c should be very
> > > accurate in predicting those interrupts.
> >
> > In the above case the interrupts should produce a detectable pattern
> > of wakeups anyway.
>
> Ah, not so. Suppose you have both the audio and video interrupt going at
> a steady rate but different rate, then the combined pattern isn't
> trivial at all.

It isn't trivial, but it will appear as a "pattern" to the pattern
detector in v4 of the patch.

Basically, all of that is about looking for a reason to select a
shallower idle state than indicated by the time till the closest timer
alone.

From that standpoint it makes sense to look at several most recent
wakeups and see if a pattern is forming in there, which is what the
code in v4 of the patch does.  It also makes sense to look at
interrupt patterns, but that is costly, so the overhead needs to be
justified.  The question to me is whether or not there is any
additional value in that given that the most recent wakeups are (and
IMO need to be) taken into consideration anyway.
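That heuristic (per the v3 changelog: return a value lower than the time till the closest timer if the majority of recent idle intervals are below it) can be sketched as follows; a hedged illustration only, with names, window size, and the majority rule as assumptions rather than the actual governor code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative refinement step: if the majority of recent idle
 * intervals ended before the time till the next timer, predict a
 * shorter idle duration (here, the largest of those early wakeups).
 * Otherwise fall back to the timer-based sleep length.
 */
static uint64_t refine_prediction(const uint64_t *recent, int nr,
				  uint64_t time_to_timer)
{
	uint64_t max_below = 0;
	int below = 0, i;

	for (i = 0; i < nr; i++) {
		if (recent[i] < time_to_timer) {
			below++;
			if (recent[i] > max_below)
				max_below = recent[i];
		}
	}
	/* Majority of recent wakeups came earlier than the timer. */
	if (2 * below > nr)
		return max_below;

	return time_to_timer;
}
```

The point of the sketch is only that the check over a handful of recent wakeups is O(window size), which is why calling into a full interrupt predictor on top of it needs its overhead justified.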

> > In general, however, I need to be convinced that interrupts that
> > didn't wake up the CPU from idle are relevant for next wakeup
> > prediction.  I see that this may be the case, but to what extent is
> > rather unclear to me and it looks like calling
> > irq_timings_next_event() would add considerable overhead.
>
> How about we add a (debug) knob so that people can play with it for now?
> If it turns out to be useful, we'll learn.

So I'm inclined to try to invoke irq_timings_next_event() in addition
to looking at the most recent wakeups and see how far that can get us.

With some extra instrumentation in place it should be possible to
verify how much value that brings to the table.


* Re: [PATCH] irq/timings: Fix model validity
  2018-11-07 10:52             ` Daniel Lezcano
@ 2018-11-07 13:05               ` Peter Zijlstra
  2018-11-08  8:10                 ` Daniel Lezcano
  0 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2018-11-07 13:05 UTC (permalink / raw)
  To: Daniel Lezcano
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM,
	Giovanni Gherdovich, Doug Smythies, Srinivas Pandruvada,
	Linux Kernel Mailing List, Frederic Weisbecker, Mel Gorman,
	Nicolas Pitre

On Wed, Nov 07, 2018 at 11:52:31AM +0100, Daniel Lezcano wrote:
> > @@ -146,11 +152,38 @@ static void irqs_update(struct irqt_stat *irqs, u64 ts)
> >  	 */
> >  	diff = interval - irqs->avg;
> >  
> > +	/*
> > +	 * Online average algorithm:
> > +	 *
> > +	 *  new_average = average + ((value - average) / count)
> > +	 *
> > +	 * The variance computation depends on the new average
> > +	 * to be computed here first.
> > +	 *
> > +	 */
> > +	irqs->avg = irqs->avg + (diff >> IRQ_TIMINGS_SHIFT);
> > +
> > +	/*
> > +	 * Online variance algorithm:
> > +	 *
> > +	 *  new_variance = variance + (value - average) x (value - new_average)
> > +	 *
> > +	 * Warning: irqs->avg is updated with the line above, hence
> > +	 * 'interval - irqs->avg' is no longer equal to 'diff'
> > +	 */
> > +	irqs->variance = irqs->variance + (diff * (interval - irqs->avg));
> > +
> >  	/*
> >  	 * Increment the number of samples.
> >  	 */
> >  	irqs->nr_samples++;

FWIW, I'm confused about this. The normal (Welford's) online algorithm
does:

	count++;
	delta = value - mean;
	mean += delta / count;
	M2 += delta * (value - mean);

But the above uses:

	mean += delta / 32;

Which, for count >> 32, over-estimates the mean adjustment. But worse,
it significantly under-estimates the mean during training.

How is the computed variance still correct with this? I cannot find
any comment that clarifies this. I'm thinking that since the mean will
slowly wander towards its actual location (assuming an actual normal
distribution input) the resulting variance will be far too large, since
the (value - mean) term will be much larger than 'expected'.
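For comparison, the textbook Welford update referenced above can be sketched in plain C (an illustrative sketch, not the kernel code; note that it divides by the running count, exactly what the fixed-point shift in timings.c does not do):

```c
#include <assert.h>

/* Textbook Welford online mean/variance accumulator. */
struct welford {
	double mean;
	double m2;	/* running sum of squared deviations */
	long count;
};

static void welford_update(struct welford *w, double value)
{
	double delta, delta2;

	w->count++;
	delta = value - w->mean;
	w->mean += delta / w->count;	/* divide by count, not a constant */
	delta2 = value - w->mean;	/* uses the *updated* mean */
	w->m2 += delta * delta2;
}

/* Unbiased sample variance: M2 / (count - 1). */
static double welford_variance(const struct welford *w)
{
	return w->count > 1 ? w->m2 / (w->count - 1) : 0.0;
}
```

Replacing both divisions by a constant 32 (as the kernel code does) changes the quantity being accumulated, which is the core of the objection: the result is no longer the Welford M2, so dividing it by 32 does not yield the sample variance.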

> > @@ -158,16 +191,12 @@ static void irqs_update(struct irqt_stat *irqs, u64 ts)
> >  	 * more than 32 and dividing by 32 instead of 31 is enough
> >  	 * precise.
> >  	 */
> > +	variance = irqs->variance >> IRQ_TIMINGS_SHIFT;

Worse; variance is actually (as the comment states):

	s^2 = M2 / (count -1)

But instead you compute:

	s^2 = M2 / 32;

Which is again much larger than the actual result; assuming count >> 32.

So you compute a variance that is inflated in two different ways.


I'm not seeing how this thing works reliably.


* Re: [PATCH] irq/timings: Fix model validity
  2018-11-07 13:05               ` Peter Zijlstra
@ 2018-11-08  8:10                 ` Daniel Lezcano
  0 siblings, 0 replies; 22+ messages in thread
From: Daniel Lezcano @ 2018-11-08  8:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Linux PM,
	Giovanni Gherdovich, Doug Smythies, Srinivas Pandruvada,
	Linux Kernel Mailing List, Frederic Weisbecker, Mel Gorman,
	Nicolas Pitre

On 07/11/2018 14:05, Peter Zijlstra wrote:
> On Wed, Nov 07, 2018 at 11:52:31AM +0100, Daniel Lezcano wrote:
>>> @@ -146,11 +152,38 @@ static void irqs_update(struct irqt_stat *irqs, u64 ts)
>>>  	 */
>>>  	diff = interval - irqs->avg;
>>>  
>>> +	/*
>>> +	 * Online average algorithm:
>>> +	 *
>>> +	 *  new_average = average + ((value - average) / count)
>>> +	 *
>>> +	 * The variance computation depends on the new average
>>> +	 * to be computed here first.
>>> +	 *
>>> +	 */
>>> +	irqs->avg = irqs->avg + (diff >> IRQ_TIMINGS_SHIFT);
>>> +
>>> +	/*
>>> +	 * Online variance algorithm:
>>> +	 *
>>> +	 *  new_variance = variance + (value - average) x (value - new_average)
>>> +	 *
>>> +	 * Warning: irqs->avg is updated with the line above, hence
>>> +	 * 'interval - irqs->avg' is no longer equal to 'diff'
>>> +	 */
>>> +	irqs->variance = irqs->variance + (diff * (interval - irqs->avg));
>>> +
>>>  	/*
>>>  	 * Increment the number of samples.
>>>  	 */
>>>  	irqs->nr_samples++;
> 
> FWIW, I'm confused on this. The normal (Welford's) online algorithm
> does:
> 
> 	count++;
> 	delta = value - mean;
> 	mean += delta / count;
> 	M2 += delta * (value - mean);
> 
> But the above uses:
> 
> 	mean += delta / 32;
> 
> Which, for count >> 32, over-estimates the mean adjustment. But worse,
> it significantly under-estimates the mean during training.
> 
> How is the computed variance still correct with this? I can not find any
> comments that clarifies this. I'm thinking that since the mean will
> slowly wander towards it's actual location (assuming an actual standard
> distribution input) the resulting variance will be far too large, since
> the (value - mean) term will be much larger than 'expected'.

You are right; initially it was divided by min(count, 32), but for
optimization reasons we decided to replace that with a power-of-two
constant, assuming the number of samples would quickly reach 32 and
that the compiler would turn the division into a shift.

https://lkml.org/lkml/2017/3/23/696
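As a side note on that optimization (a sketch under the assumption of the usual arithmetic right shift for negative signed values, which C leaves implementation-defined): division by a power of two and a shift are not interchangeable for signed operands, since `/` truncates toward zero while `>>` floors, and `diff = interval - avg` can be negative here:

```c
#include <assert.h>

/* Division truncates toward zero. */
static int div32(int diff)
{
	return diff / 32;
}

/* Arithmetic right shift floors (implementation-defined, but the
 * common behavior on the compilers the kernel targets). */
static int shift32(int diff)
{
	return diff >> 5;
}
```

So beyond the count-vs-constant question, the shift also introduces a small downward bias whenever the new sample is below the current average.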

>>> @@ -158,16 +191,12 @@ static void irqs_update(struct irqt_stat *irqs, u64 ts)
>>>  	 * more than 32 and dividing by 32 instead of 31 is enough
>>>  	 * precise.
>>>  	 */
>>> +	variance = irqs->variance >> IRQ_TIMINGS_SHIFT;
> 
> Worse; variance is actually (as the comment states):
> 
> 	s^2 = M2 / (count -1)
> 
> But instead you compute:
> 
> 	s^2 = M2 / 32;
> 
> Which is again much larger than the actual result; assuming count >> 32.
> 
> So you compute a variance that is inflated in two different ways.
> 
> 
> I'm not seeing how this thing works reliably.

I have to revisit this part of the code soon; I will double-check it then.





* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-14  6:26 ` Doug Smythies
@ 2018-11-15  2:29   ` Rafael J. Wysocki
  0 siblings, 0 replies; 22+ messages in thread
From: Rafael J. Wysocki @ 2018-11-15  2:29 UTC (permalink / raw)
  To: Doug Smythies
  Cc: Rafael J. Wysocki, Giovanni Gherdovich, Srinivas Pandruvada,
	Peter Zijlstra, Linux Kernel Mailing List, Frederic Weisbecker,
	Mel Gorman, Daniel Lezcano, Linux PM

On Wed, Nov 14, 2018 at 7:26 AM Doug Smythies <dsmythies@telus.net> wrote:
>
> On 2018.11.08 00:00 Rafael J. Wysocki wrote:
> > On Wednesday, November 7, 2018 6:04:12 PM CET Doug Smythies wrote:
> >> On 2018.11.04 08:31 Rafael J. Wysocki wrote:
>
> ...[snip]...
> >> The results are:
> >> http://fast.smythies.com/linux-pm/k420/k420-dbench-teo3.htm
> >> http://fast.smythies.com/linux-pm/k420/histo_compare.htm
>
> ...[snip]...
>
> >> There are some odd long idle durations with TEOv3 for idle
> >> states 1, 2, and 3 that I'll watch for with v4 testing.
> >
> > That unfortunately is a result of bugs in the v4 (and v2 - v3 too).
> >
> > Namely, it doesn't take the cases when the tick has been stopped already
> > into account correctly.  IOW, all of the data points beyond the tick boundary
> > should go into the "final" peak.
> >
> > I'll send a v5.
>
> With v5 there were no long idle durations for idle states 1, 2, and 3 for
> this same Phoronix dbench test.

That's good news, thank you!


* RE: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-07 17:04 Doug Smythies
  2018-11-08  8:00 ` Rafael J. Wysocki
@ 2018-11-14  6:26 ` Doug Smythies
  2018-11-15  2:29   ` Rafael J. Wysocki
  1 sibling, 1 reply; 22+ messages in thread
From: Doug Smythies @ 2018-11-14  6:26 UTC (permalink / raw)
  To: 'Rafael J. Wysocki'
  Cc: 'Giovanni Gherdovich', 'Srinivas Pandruvada',
	'Peter Zijlstra', 'LKML',
	'Frederic Weisbecker', 'Mel Gorman',
	'Daniel Lezcano', 'Linux PM',
	Doug Smythies

On 2018.11.08 00:00 Rafael J. Wysocki wrote:
> On Wednesday, November 7, 2018 6:04:12 PM CET Doug Smythies wrote:
>> On 2018.11.04 08:31 Rafael J. Wysocki wrote:

...[snip]...
>> The results are:
>> http://fast.smythies.com/linux-pm/k420/k420-dbench-teo3.htm
>> http://fast.smythies.com/linux-pm/k420/histo_compare.htm

...[snip]...

>> There are some odd long idle durations with TEOv3 for idle
>> states 1, 2, and 3 that I'll watch for with v4 testing.
>
> That unfortunately is a result of bugs in the v4 (and v2 - v3 too).
>
> Namely, it doesn't take the cases when the tick has been stopped already
> into account correctly.  IOW, all of the data points beyond the tick boundary
> should go into the "final" peak.
>
> I'll send a v5.

With v5 there were no long idle durations for idle states 1, 2, and 3 for
this same Phoronix dbench test.

... Doug




* RE: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
@ 2018-11-12  3:48 Doug Smythies
  0 siblings, 0 replies; 22+ messages in thread
From: Doug Smythies @ 2018-11-12  3:48 UTC (permalink / raw)
  To: 'Rafael J. Wysocki'
  Cc: 'Srinivas Pandruvada', 'Peter Zijlstra',
	'LKML', 'Frederic Weisbecker',
	'Mel Gorman', 'Daniel Lezcano',
	'Linux PM', 'Giovanni Gherdovich',
	'Doug Smythies'

On 2018.11.10 13:48 Doug Smythies wrote:
> On 2018.11.07 09:04 Doug Smythies wrote:
>
>> The Phoronix dbench test was run under the option to run all
>> the tests, instead of just one number of clients. This was done
>> with a reference/baseline kernel of 4.20-rc1, and also with this
>> TEO version 3 patch. The tests were also repeated with trace
>> enabled for 5000 seconds. Idle information and processor
>> package power were sampled once per minute in all test runs.
>>
>> The results are:
>> http://fast.smythies.com/linux-pm/k420/k420-dbench-teo3.htm
>> http://fast.smythies.com/linux-pm/k420/histo_compare.htm
>
> Another observation from the data, for the reference/
> baseline 4.20-rc1 kernel idle state 0 histogram plots,
> is that there are several (280) long idle durations.
>
> For unknown reasons, these are consistently dominated by
> CPU 5 on my system (264 of the 280, in this case).
>
> No other test that I have tried shows this issue,
> but other tests also tend to set the need_resched
> flag, whereas the dbench test doesn't.
>
> Older kernels also have the issue. I tried 4.19, 4.18,
> 4.17, and 4.16 plus the "V9" idle re-work patch set of the time.
> There is no use going back further, because "V9" was
> meant to address excessively long durations in shallow idle states.
>
> I have not made progress towards determining the root issue.

For whatever reason, sometimes a softirq_entry occurs and it takes
considerably longer than usual until the softirq_exit.  Sometimes
there is a bunch of interrupts piled up, and sometimes there is some
delay between the softirq_entry and anything else happening, which
perhaps just means I didn't figure out what else to enable in the
trace to observe it.

Anyway, it seems likely that there is no problem here after all.
 
... Doug




* RE: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
@ 2018-11-10 21:47 Doug Smythies
  0 siblings, 0 replies; 22+ messages in thread
From: Doug Smythies @ 2018-11-10 21:47 UTC (permalink / raw)
  To: 'Rafael J. Wysocki'
  Cc: 'Srinivas Pandruvada', 'Peter Zijlstra',
	'LKML', 'Frederic Weisbecker',
	'Mel Gorman', 'Daniel Lezcano',
	'Linux PM', 'Doug Smythies',
	'Giovanni Gherdovich'

On 2018.11.07 09:04 Doug Smythies wrote:

> The Phoronix dbench test was run under the option to run all
> the tests, instead of just one number of clients. This was done
> with a reference/baseline kernel of 4.20-rc1, and also with this
> TEO version 3 patch. The tests were also repeated with trace
> enabled for 5000 seconds. Idle information and processor
> package power were sampled once per minute in all test runs.
>
> The results are:
> http://fast.smythies.com/linux-pm/k420/k420-dbench-teo3.htm
> http://fast.smythies.com/linux-pm/k420/histo_compare.htm

Another observation from the data, for the reference/
baseline 4.20-rc1 kernel idle state 0 histogram plots,
is that there are several (280) long idle durations.

For unknown reasons, these are consistently dominated by
CPU 5 on my system (264 of the 280, in this case).

No other test that I have tried shows this issue,
but other tests also tend to set the need_resched
flag, whereas the dbench test doesn't.

Older kernels also have the issue. I tried 4.19, 4.18,
4.17, and 4.16 plus the "V9" idle re-work patch set of the time.
There is no use going back further, because "V9" was
meant to address excessively long durations in shallow idle states.

I have not made progress towards determining the root issue.

... Doug




* Re: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
  2018-11-07 17:04 Doug Smythies
@ 2018-11-08  8:00 ` Rafael J. Wysocki
  2018-11-14  6:26 ` Doug Smythies
  1 sibling, 0 replies; 22+ messages in thread
From: Rafael J. Wysocki @ 2018-11-08  8:00 UTC (permalink / raw)
  To: Doug Smythies
  Cc: 'Giovanni Gherdovich', 'Srinivas Pandruvada',
	'Peter Zijlstra', 'LKML',
	'Frederic Weisbecker', 'Mel Gorman',
	'Daniel Lezcano', 'Linux PM'

On Wednesday, November 7, 2018 6:04:12 PM CET Doug Smythies wrote:
> On 2018.11.04 08:31 Rafael J. Wysocki wrote:
> 
> > v2 -> v3:
> > * Simplify the pattern detection code and make it return a value
> >	lower than the time to the closest timer if the majority of recent
> >	idle intervals are below it regardless of their variance (that should
> >	cause it to be slightly more aggressive).
> > * Do not count wakeups from state 0 due to the time limit in poll_idle()
> >    as non-timer.
> >
> > Note: I will be mostly offline tomorrow, so this goes slightly early.
> > I have tested it only very lightly, but it is not so much different from
> > the previous one.
> > 
> > It requires the same additional patches to apply as the previous one too.
> 
> Even though this v3 has now been superseded by v4, I completed some test
> work in progress for v3 anyhow.

That's useful anyway, thanks for doing that!

> The main reason to complete the work and write it up was that, as much
> for my own interest as anything, I wanted to specifically test for the
> influence of running trace on the system under test.
> Reference: https://marc.info/?l=linux-kernel&m=154145580925439&w=2
> 
> The Phoronix dbench test was run under the option to run all
> the tests, instead of just one number of clients. This was done
> with a reference/baseline kernel of 4.20-rc1, and also with this
> TEO version 3 patch. The tests were also repeated with trace
> enabled for 5000 seconds. Idle information and processor
> package power were sampled once per minute in all test runs.
> 
> The results are:
> http://fast.smythies.com/linux-pm/k420/k420-dbench-teo3.htm
> http://fast.smythies.com/linux-pm/k420/histo_compare.htm

Thanks a bunch for these!

> Conclusion: trace has negligible effect, until the system gets
> severely overloaded.
> 
> There are some odd long idle durations with TEOv3 for idle
> states 1, 2, and 3 that I'll watch for with v4 testing.

That unfortunately is a result of bugs in the v4 (and v2 - v3 too).

Namely, it doesn't take the cases when the tick has been stopped already
into account correctly.  IOW, all of the data points beyond the tick boundary
should go into the "final" peak.

I'll send a v5.

> Other information:
> Processor: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
> The kernels were 1000 Hz.
> Idle latency/residency info:
> STATE: state0   DESC: CPUIDLE CORE POLL IDLE    NAME: POLL      LATENCY: 0      RESIDENCY: 0
> STATE: state1   DESC: MWAIT 0x00        NAME: C1        LATENCY: 2      RESIDENCY: 2
> STATE: state2   DESC: MWAIT 0x01        NAME: C1E       LATENCY: 10     RESIDENCY: 20
> STATE: state3   DESC: MWAIT 0x10        NAME: C3        LATENCY: 80     RESIDENCY: 211
> STATE: state4   DESC: MWAIT 0x20        NAME: C6        LATENCY: 104    RESIDENCY: 345

Thanks,
Rafael



* RE: [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems
@ 2018-11-07 17:04 Doug Smythies
  2018-11-08  8:00 ` Rafael J. Wysocki
  2018-11-14  6:26 ` Doug Smythies
  0 siblings, 2 replies; 22+ messages in thread
From: Doug Smythies @ 2018-11-07 17:04 UTC (permalink / raw)
  To: 'Rafael J. Wysocki', 'Giovanni Gherdovich'
  Cc: 'Srinivas Pandruvada', 'Peter Zijlstra',
	'LKML', 'Frederic Weisbecker',
	'Mel Gorman', 'Daniel Lezcano',
	'Linux PM',
	Doug Smythies

On 2018.11.04 08:31 Rafael J. Wysocki wrote:

> v2 -> v3:
> * Simplify the pattern detection code and make it return a value
>	lower than the time to the closest timer if the majority of recent
>	idle intervals are below it regardless of their variance (that should
>	cause it to be slightly more aggressive).
> * Do not count wakeups from state 0 due to the time limit in poll_idle()
>    as non-timer.
>
> Note: I will be mostly offline tomorrow, so this goes slightly early.
> I have tested it only very lightly, but it is not so much different from
> the previous one.
> 
> It requires the same additional patches to apply as the previous one too.

Even though this v3 has now been superseded by v4, I completed some test
work in progress for v3 anyhow.

The main reason to complete the work and write it up was that, as much
for my own interest as anything, I wanted to specifically test for the
influence of running trace on the system under test.
Reference: https://marc.info/?l=linux-kernel&m=154145580925439&w=2

The Phoronix dbench test was run under the option to run all
the tests, instead of just one number of clients. This was done
with a reference/baseline kernel of 4.20-rc1, and also with this
TEO version 3 patch. The tests were also repeated with trace
enabled for 5000 seconds. Idle information and processor
package power were sampled once per minute in all test runs.

The results are:
http://fast.smythies.com/linux-pm/k420/k420-dbench-teo3.htm
http://fast.smythies.com/linux-pm/k420/histo_compare.htm

Conclusion: trace has negligible effect, until the system gets
severely overloaded.

There are some odd long idle durations with TEOv3 for idle
states 1, 2, and 3 that I'll watch for with v4 testing.

Other information:
Processor: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
The kernels were 1000 Hz.
Idle latency/residency info:
STATE: state0   DESC: CPUIDLE CORE POLL IDLE    NAME: POLL      LATENCY: 0      RESIDENCY: 0
STATE: state1   DESC: MWAIT 0x00        NAME: C1        LATENCY: 2      RESIDENCY: 2
STATE: state2   DESC: MWAIT 0x01        NAME: C1E       LATENCY: 10     RESIDENCY: 20
STATE: state3   DESC: MWAIT 0x10        NAME: C3        LATENCY: 80     RESIDENCY: 211
STATE: state4   DESC: MWAIT 0x20        NAME: C6        LATENCY: 104    RESIDENCY: 345

... Doug
 



end of thread, other threads:[~2018-11-15  2:30 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-04 16:31 [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems Rafael J. Wysocki
2018-11-05 19:32 ` Giovanni Gherdovich
2018-11-06 14:55   ` Rafael J. Wysocki
2018-11-06 17:04 ` Peter Zijlstra
2018-11-06 18:19   ` Rafael J. Wysocki
2018-11-06 19:51     ` Peter Zijlstra
2018-11-06 23:39       ` Rafael J. Wysocki
2018-11-07  8:59         ` Peter Zijlstra
2018-11-07  9:46           ` [PATCH] irq/timings: Fix model validity Peter Zijlstra
2018-11-07 10:52             ` Daniel Lezcano
2018-11-07 13:05               ` Peter Zijlstra
2018-11-08  8:10                 ` Daniel Lezcano
2018-11-07 12:09           ` [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems Rafael J. Wysocki
2018-11-07 10:09         ` [RFC][PATCH] irq/timings: Ignore predictions in the past Peter Zijlstra
2018-11-07 10:13         ` [RFC/RFT][PATCH v3] cpuidle: New timer events oriented governor for tickless systems Daniel Lezcano
2018-11-07 17:04 Doug Smythies
2018-11-08  8:00 ` Rafael J. Wysocki
2018-11-14  6:26 ` Doug Smythies
2018-11-15  2:29   ` Rafael J. Wysocki
2018-11-10 21:47 Doug Smythies
2018-11-12  3:48 Doug Smythies
