* [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems
@ 2018-11-08 17:25 Rafael J. Wysocki
  2018-11-10 19:10 ` Giovanni Gherdovich
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2018-11-08 17:25 UTC (permalink / raw)
  To: Linux PM, Giovanni Gherdovich, Doug Smythies
  Cc: Srinivas Pandruvada, Peter Zijlstra, LKML, Frederic Weisbecker,
	Mel Gorman, Daniel Lezcano

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Subject: [PATCH] cpuidle: New timer events oriented governor for tickless systems

The venerable menu governor does some things that are quite
questionable in my view.

First, it includes timer wakeups in the pattern detection data and
mixes them up with wakeups from other sources, which in some cases
causes it to expect what essentially would be a timer wakeup in a
time frame in which no timer wakeups are possible (because it knows
the time until the next timer event and that is later than the
expected wakeup time).

Second, it uses the extra exit latency limit based on the predicted
idle duration and depending on the number of tasks waiting on I/O,
even though those tasks may run on a different CPU when they are
woken up.  Moreover, the time ranges used by it for the sleep length
correction factors depend on whether or not there are tasks waiting
on I/O, which again doesn't imply anything in particular, and they
are not correlated to the list of available idle states in any way
whatever.

Also, the pattern detection code in menu may end up considering
values that are too large to matter at all, in which case running
it is a waste of time.

A major rework of the menu governor would be required to address
these issues and the performance of at least some workloads (tuned
specifically to the current behavior of the menu governor) is likely
to suffer from that.  It is thus better to introduce an entirely new
governor free of those issues and let everybody use the governor that
works better with their actual workloads.

The new governor introduced here, the timer events oriented (TEO)
governor, uses the same basic strategy as menu: it always tries to
find the deepest idle state that can be used in the given conditions.
However, it applies a different approach to that problem.

First, it doesn't use "correction factors" for the time till the
closest timer, but instead it tries to correlate the measured idle
duration values with the available idle states and use that
information to pick up the idle state that is most likely to "match"
the upcoming CPU idle interval.

Second, it doesn't take the number of "I/O waiters" into account at
all and the pattern detection code in it avoids taking timer wakeups
into account.  It also only uses idle duration values less than the
current time till the closest timer (with the tick excluded) for that
purpose.
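
For illustration, here is a condensed sketch of the bookkeeping
described above (hypothetical, simplified names; the real logic lives
in teo_update() in the patch below):

	/*
	 * idx_timer: deepest state whose target residency is within the
	 * sleep length; idx_hit: deepest state whose target residency is
	 * within the measured idle duration.
	 */
	if (idx_hit < idx_timer) {
		/* Wakeup came early: a shallower state would have fit. */
		states[idx_timer].misses += SPIKE;
		states[idx_hit].early_hits += SPIKE;
	} else {
		/* The sleep length predicted the idle duration well. */
		states[idx_timer].hits += SPIKE;
	}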

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---

v4 -> v5:
 * Avoid using shallow idle states when the tick has been stopped already.

v3 -> v4:
 * Make the pattern detection avoid returning too early if the minimum
   sample is too far from the average.
 * Reformat the changelog (as requested by Peter).

v2 -> v3:
 * Simplify the pattern detection code and make it return a value
   lower than the time to the closest timer if the majority of recent
   idle intervals are below it regardless of their variance (that should
   cause it to be slightly more aggressive).
 * Do not count wakeups from state 0 due to the time limit in poll_idle()
   as non-timer.

---
 drivers/cpuidle/Kconfig            |   11 
 drivers/cpuidle/governors/Makefile |    1 
 drivers/cpuidle/governors/teo.c    |  521 +++++++++++++++++++++++++++++++++++++
 3 files changed, 533 insertions(+)

Index: linux-pm/drivers/cpuidle/governors/teo.c
===================================================================
--- /dev/null
+++ linux-pm/drivers/cpuidle/governors/teo.c
@@ -0,0 +1,521 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Timer events oriented CPU idle governor
+ *
+ * Copyright (C) 2018 Intel Corporation
+ * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ *
+ * The idea of this governor is based on the observation that on many systems
+ * timer events are two or more orders of magnitude more frequent than any
+ * other interrupts, so they are likely to be the most significant source of CPU
+ * wakeups from idle states.  Moreover, information about what happened in the
+ * (relatively recent) past can be used to estimate whether or not the deepest
+ * idle state with target residency within the time to the closest timer is
+ * likely to be suitable for the upcoming idle time of the CPU and, if not, then
+ * which of the shallower idle states to choose.
+ *
+ * Of course, non-timer wakeup sources are more important in some use cases and
+ * they can be covered by detecting patterns among recent idle time intervals
+ * of the CPU.  However, even in that case it is not necessary to take idle
+ * duration values greater than the time till the closest timer into account, as
+ * the patterns that they may belong to produce average values close enough to
+ * the time till the closest timer (sleep length) anyway.
+ *
+ * Thus this governor estimates whether or not the upcoming idle time of the CPU
+ * is likely to be significantly shorter than the sleep length and selects an
+ * idle state for it in accordance with that, as follows:
+ *
+ * - If there is a pattern of 5 or more recent non-timer wakeups earlier than
+ *   the closest timer event, expect one more of them to occur and use the
+ *   average of the idle duration values corresponding to them to select an
+ *   idle state for the CPU.
+ *
+ * - Otherwise, find the state on the basis of the sleep length and state
+ *   statistics collected over time:
+ *
+ *   o Find the deepest idle state whose target residency is less than or equal
+ *     to the sleep length.
+ *
+ *   o Select it if it matched both the sleep length and the idle duration
+ *     measured after wakeup in the past more often than it matched the sleep
+ *     length, but not the idle duration (i.e. the measured idle duration was
+ *     significantly shorter than the sleep length matched by that state).
+ *
+ *   o Otherwise, select the shallower state with the greatest matched "early"
+ *     wakeups metric.
+ */
+
+#include <linux/cpuidle.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/sched/clock.h>
+#include <linux/tick.h>
+
+/*
+ * The SPIKE value is added to metrics when they grow and the DECAY_SHIFT value
+ * is used for decreasing metrics on a regular basis.
+ */
+#define SPIKE		1024
+#define DECAY_SHIFT	3
+
+/*
+ * Number of the most recent idle duration values to take into consideration for
+ * the detection of wakeup patterns.
+ */
+#define INTERVALS	8
+
+/*
+ * Ratio of the sample spread limit and the length of the interesting intervals
+ * range used for pattern detection, represented as a shift.
+ */
+#define MAX_SPREAD_SHIFT	3
+
+/**
+ * struct teo_idle_state - Idle state data used by the TEO cpuidle governor.
+ * @early_hits: "Early" CPU wakeups matched by this state.
+ * @hits: "On time" CPU wakeups matched by this state.
+ * @misses: CPU wakeups "missed" by this state.
+ *
+ * A CPU wakeup is "matched" by a given idle state if the idle duration measured
+ * after the wakeup is between the target residency of that state and the target
+ * residency of the next one (or if this is the deepest available idle state, it
+ * "matches" a CPU wakeup when the measured idle duration is at least equal to
+ * its target residency).
+ *
+ * Also, from the TEO governor perspective, a CPU wakeup from idle is "early" if
+ * it occurs significantly earlier than the closest expected timer event (that
+ * is, early enough to match an idle state shallower than the one matching the
+ * time till the closest timer event).  Otherwise, the wakeup is "on time", or
+ * it is a "hit".
+ *
+ * A "miss" occurs when the given state doesn't match the wakeup, but it matches
+ * the time till the closest timer event used for idle state selection.
+ */
+struct teo_idle_state {
+	unsigned int early_hits;
+	unsigned int hits;
+	unsigned int misses;
+};
+
+/**
+ * struct teo_cpu - CPU data used by the TEO cpuidle governor.
+ * @time_span_ns: Time between idle state selection and post-wakeup update.
+ * @sleep_length_ns: Time till the closest timer event (at the selection time).
+ * @states: Idle states data corresponding to this CPU.
+ * @last_state: Idle state entered by the CPU last time.
+ * @interval_idx: Index of the most recent saved idle interval.
+ * @intervals: Saved idle duration values.
+ */
+struct teo_cpu {
+	u64 time_span_ns;
+	u64 sleep_length_ns;
+	struct teo_idle_state states[CPUIDLE_STATE_MAX];
+	int last_state;
+	int interval_idx;
+	unsigned int intervals[INTERVALS];
+};
+
+static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);
+
+/**
+ * teo_update - Update CPU data after wakeup.
+ * @drv: cpuidle driver containing state data.
+ * @dev: Target CPU.
+ */
+static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	unsigned int sleep_length_us = ktime_to_us(cpu_data->sleep_length_ns);
+	int i, idx_hit = -1, idx_timer = -1;
+	unsigned int measured_us;
+
+	if (cpu_data->time_span_ns == cpu_data->sleep_length_ns) {
+		/* One of the safety nets has triggered (most likely). */
+		measured_us = sleep_length_us;
+	} else {
+		measured_us = dev->last_residency;
+		i = cpu_data->last_state;
+		if (measured_us >= 2 * drv->states[i].exit_latency)
+			measured_us -= drv->states[i].exit_latency;
+		else
+			measured_us /= 2;
+	}
+
+	/*
+	 * Decay the "early hits" metric for all of the states and find the
+	 * states matching the sleep length and the measured idle duration.
+	 */
+	for (i = 0; i < drv->state_count; i++) {
+		unsigned int early_hits = cpu_data->states[i].early_hits;
+
+		cpu_data->states[i].early_hits -= early_hits >> DECAY_SHIFT;
+
+		if (drv->states[i].target_residency <= measured_us)
+			idx_hit = i;
+
+		if (drv->states[i].target_residency <= sleep_length_us)
+			idx_timer = i;
+	}
+
+	/*
+	 * Update the "hits" and "misses" data for the state matching the sleep
+	 * length.  If it matches the measured idle duration too, this is a hit,
+	 * so increase the "hits" metric for it then.  Otherwise, this is a
+	 * miss, so increase the "misses" metric for it.  In the latter case
+	 * also increase the "early hits" metric for the state that actually
+	 * matches the measured idle duration.
+	 */
+	if (idx_timer >= 0) {
+		unsigned int hits = cpu_data->states[idx_timer].hits;
+		unsigned int misses = cpu_data->states[idx_timer].misses;
+
+		hits -= hits >> DECAY_SHIFT;
+		misses -= misses >> DECAY_SHIFT;
+
+		if (idx_timer > idx_hit) {
+			misses += SPIKE;
+			if (idx_hit >= 0)
+				cpu_data->states[idx_hit].early_hits += SPIKE;
+		} else {
+			hits += SPIKE;
+		}
+
+		cpu_data->states[idx_timer].misses = misses;
+		cpu_data->states[idx_timer].hits = hits;
+	}
+
+	/*
+	 * Save idle duration values corresponding to non-timer wakeups for
+	 * pattern detection.
+	 *
+	 * If the total time span between idle state selection and the "reflect"
+	 * callback is greater than or equal to the sleep length determined at
+	 * the idle state selection time, the wakeup is likely to be due to a
+	 * timer event.
+	 */
+	if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns)
+		measured_us = UINT_MAX;
+
+	cpu_data->intervals[cpu_data->interval_idx++] = measured_us;
+	if (cpu_data->interval_idx >= INTERVALS)
+		cpu_data->interval_idx = 0;
+}
+
+/**
+ * teo_idle_duration - Estimate the duration of the upcoming CPU idle time.
+ * @drv: cpuidle driver containing state data.
+ * @cpu_data: Governor data for the target CPU.
+ * @sleep_length_us: Time till the closest timer event in microseconds.
+ */
+unsigned int teo_idle_duration(struct cpuidle_driver *drv,
+			       struct teo_cpu *cpu_data,
+			       unsigned int sleep_length_us)
+{
+	u64 range, max_spread, sum, max, min;
+	unsigned int i, count;
+
+	/*
+	 * If the sleep length is below the target residency of idle state 1,
+	 * the only viable choice is to select the first available (enabled)
+	 * idle state, so return immediately in that case.
+	 */
+	if (sleep_length_us < drv->states[1].target_residency)
+		return sleep_length_us;
+
+	/*
+	 * The purpose of this function is to check if there is a pattern of
+	 * wakeups indicating that it would be better to select a state
+	 * shallower than the deepest one matching the sleep length or the
+	 * deepest one at all if the sleep length is long.  Larger idle duration
+	 * values are beyond the interesting range.
+	 */
+	range = drv->states[drv->state_count-1].target_residency;
+	range = min_t(u64, sleep_length_us, range + (range >> 2));
+
+	/*
+	 * This is the value to compare with the distance between the average
+	 * and the greatest sample to decide whether or not it is small enough.
+	 * Use 10 us as the lower bound for it.
+	 */
+	max_spread = max_t(u64, range >> MAX_SPREAD_SHIFT, 10);
+
+	/*
+	 * First pass: compute the sum of interesting samples, the minimum and
+	 * maximum of them and count them.
+	 */
+	count = 0;
+	sum = 0;
+	max = 0;
+	min = UINT_MAX;
+
+	for (i = 0; i < INTERVALS; i++) {
+		u64 val = cpu_data->intervals[i];
+
+		if (val >= range)
+			continue;
+
+		count++;
+		sum += val;
+		if (max < val)
+			max = val;
+
+		if (min > val)
+			min = val;
+	}
+
+	/* Give up if the number of interesting samples is too small. */
+	if (count <= INTERVALS / 2)
+		return sleep_length_us;
+
+	/*
+	 * If the distance between the max or min and the average is too large,
+	 * try to refine by discarding the max, as long as the count is above 3.
+	 */
+	while (count > 3 && max > max_spread &&
+	       ((max - max_spread) * count > sum ||
+	        (min + max_spread) * count < sum)) {
+
+		range = max;
+
+		/*
+		 * Compute the sum of samples in the interesting range.  Count
+		 * them and find the maximum of them.
+		 */
+		count = 0;
+		sum = 0;
+		max = 0;
+
+		for (i = 0; i < INTERVALS; i++) {
+			u64 val = cpu_data->intervals[i];
+
+			if (val >= range)
+				continue;
+
+			count++;
+			sum += val;
+			if (max < val)
+				max = val;
+		}
+	}
+
+	return div64_u64(sum, count);
+}
+
+/**
+ * teo_select - Selects the next idle state to enter.
+ * @drv: cpuidle driver containing state data.
+ * @dev: Target CPU.
+ * @stop_tick: Indication on whether or not to stop the scheduler tick.
+ */
+static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+		      bool *stop_tick)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	int latency_req = cpuidle_governor_latency_req(dev->cpu);
+	unsigned int sleep_length_us, duration_us;
+	unsigned int max_early_count;
+	int max_early_idx, idx, i;
+	ktime_t delta_tick;
+
+	if (cpu_data->last_state >= 0) {
+		teo_update(drv, dev);
+		cpu_data->last_state = -1;
+	}
+
+	cpu_data->time_span_ns = local_clock();
+
+	cpu_data->sleep_length_ns = tick_nohz_get_sleep_length(&delta_tick);
+	sleep_length_us = ktime_to_us(cpu_data->sleep_length_ns);
+
+	duration_us = teo_idle_duration(drv, cpu_data, sleep_length_us);
+	if (tick_nohz_tick_stopped()) {
+		/*
+		 * If the tick is already stopped, the cost of possible short
+		 * idle duration misprediction is much higher, because the CPU
+		 * may be stuck in a shallow idle state for a long time as a
+		 * result of it.  In that case say we might mispredict and use
+		 * the known time till the closest timer event for the idle
+		 * state selection.
+		 */
+		if (duration_us < TICK_USEC)
+			duration_us = sleep_length_us;
+	} else {
+		/*
+		 * If the time needed to enter and exit the idle state matching
+		 * the expected idle duration is comparable with the expected
+		 * idle duration itself, the time to spend in that state is
+		 * likely to be small, so it probably is better to select a
+		 * shallower state.  Tweak the latency limit to enforce that.
+		 */
+		if (duration_us < latency_req)
+			latency_req = duration_us;
+	}
+
+	max_early_count = 0;
+	max_early_idx = -1;
+	idx = -1;
+
+	for (i = 0; i < drv->state_count; i++) {
+		struct cpuidle_state *s = &drv->states[i];
+		struct cpuidle_state_usage *su = &dev->states_usage[i];
+
+		if (s->disabled || su->disable) {
+			/*
+			 * If the "early hits" metric of a disabled state is
+			 * greater than the current maximum, it should be taken
+			 * into account, because it would be a mistake to select
+			 * a deeper state with lower "early hits" metric.  The
+			 * index cannot be changed to point to it, however, so
+			 * just increase the max count alone and let the index
+			 * still point to a shallower idle state.
+			 */
+			if (max_early_idx >= 0 &&
+			    max_early_count < cpu_data->states[i].early_hits)
+				max_early_count = cpu_data->states[i].early_hits;
+
+			continue;
+		}
+
+		if (idx < 0)
+			idx = i; /* first enabled state */
+
+		if (s->target_residency > duration_us) {
+			/*
+			 * If the next wakeup is expected to be "early", the
+			 * time frame of it is known already.
+			 */
+			if (duration_us < sleep_length_us)
+				break;
+
+			/*
+			 * If the "hits" metric of the state matching the sleep
+			 * length is greater than its "misses" metric, that is
+			 * the one to use.
+			 */
+			if (cpu_data->states[idx].hits >= cpu_data->states[idx].misses)
+				break;
+
+			/*
+			 * It is more likely that one of the shallower states
+			 * will match the idle duration measured after wakeup,
+			 * so take the one with the maximum "early hits" metric,
+			 * but if that cannot be determined, just use the state
+			 * selected so far.
+			 */
+			if (max_early_idx >= 0) {
+				idx = max_early_idx;
+				duration_us = drv->states[idx].target_residency;
+			}
+			break;
+		}
+		if (s->exit_latency > latency_req) {
+			/*
+			 * If we break out of the loop for latency reasons, use
+			 * the target residency of the selected state as the
+			 * expected idle duration to avoid stopping the tick
+			 * as long as that target residency is low enough.
+			 */
+			duration_us = drv->states[idx].target_residency;
+			break;
+		}
+
+		idx = i;
+
+		if (max_early_count < cpu_data->states[i].early_hits &&
+		    !(tick_nohz_tick_stopped() &&
+		      drv->states[i].target_residency < TICK_USEC)) {
+			max_early_count = cpu_data->states[i].early_hits;
+			max_early_idx = i;
+		}
+	}
+
+	if (idx < 0)
+		idx = 0; /* No states enabled. Must use 0. */
+
+	/*
+	 * Don't stop the tick if the selected state is a polling one or if the
+	 * expected idle duration is shorter than the tick period length.
+	 */
+	if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
+	    duration_us < TICK_USEC) && !tick_nohz_tick_stopped()) {
+		unsigned int delta_tick_us = ktime_to_us(delta_tick);
+
+		*stop_tick = false;
+
+		if (idx > 0 && drv->states[idx].target_residency > delta_tick_us) {
+			/*
+			 * The tick is not going to be stopped and the target
+			 * residency of the state to be returned is not within
+			 * the time until the closest timer event including the
+			 * tick, so try to correct that.
+			 */
+			for (i = idx - 1; i > 0; i--) {
+				if (drv->states[i].disabled ||
+				    dev->states_usage[i].disable)
+					continue;
+
+				if (drv->states[i].target_residency <= delta_tick_us)
+					break;
+			}
+			idx = i;
+		}
+	}
+
+	return idx;
+}
+
+/**
+ * teo_reflect - Note that governor data for the CPU need to be updated.
+ * @dev: Target CPU.
+ * @state: Entered state.
+ */
+static void teo_reflect(struct cpuidle_device *dev, int state)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+
+	cpu_data->last_state = state;
+	/*
+	 * If the wakeup was not "natural", but triggered by one of the safety
+	 * nets, assume that the CPU might have been idle for the entire sleep
+	 * length time.
+	 */
+	if (dev->poll_time_limit ||
+	    (tick_nohz_idle_got_tick() && cpu_data->sleep_length_ns > TICK_NSEC))
+		cpu_data->time_span_ns = cpu_data->sleep_length_ns;
+	else
+		cpu_data->time_span_ns = local_clock() - cpu_data->time_span_ns;
+}
+
+/**
+ * teo_enable_device - Initialize the governor's data for the target CPU.
+ * @drv: cpuidle driver (not used).
+ * @dev: Target CPU.
+ */
+static int teo_enable_device(struct cpuidle_driver *drv,
+			     struct cpuidle_device *dev)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	int i;
+
+	memset(cpu_data, 0, sizeof(*cpu_data));
+
+	for (i = 0; i < INTERVALS; i++)
+		cpu_data->intervals[i] = UINT_MAX;
+
+	return 0;
+}
+
+static struct cpuidle_governor teo_governor = {
+	.name =		"teo",
+	.rating =	22,
+	.enable =	teo_enable_device,
+	.select =	teo_select,
+	.reflect =	teo_reflect,
+};
+
+static int __init teo_governor_init(void)
+{
+	return cpuidle_register_governor(&teo_governor);
+}
+
+postcore_initcall(teo_governor_init);
Index: linux-pm/drivers/cpuidle/Kconfig
===================================================================
--- linux-pm.orig/drivers/cpuidle/Kconfig
+++ linux-pm/drivers/cpuidle/Kconfig
@@ -23,6 +23,17 @@ config CPU_IDLE_GOV_LADDER
 config CPU_IDLE_GOV_MENU
 	bool "Menu governor (for tickless system)"
 
+config CPU_IDLE_GOV_TEO
+	bool "Timer events oriented governor (for tickless systems)"
+	help
+	  Menu governor derivative that uses a simplified idle state
+	  selection method focused on timer events and does not do any
+	  interactivity boosting.
+
+	  Some workloads benefit from using this governor and it generally
+	  should be safe to use.  Say Y here if you are not happy with the
+	  alternatives.
+
 config DT_IDLE_STATES
 	bool
 
Index: linux-pm/drivers/cpuidle/governors/Makefile
===================================================================
--- linux-pm.orig/drivers/cpuidle/governors/Makefile
+++ linux-pm/drivers/cpuidle/governors/Makefile
@@ -4,3 +4,4 @@
 
 obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
 obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
+obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o



* Re: [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems
  2018-11-08 17:25 [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems Rafael J. Wysocki
@ 2018-11-10 19:10 ` Giovanni Gherdovich
  2018-11-15  2:56   ` Rafael J. Wysocki
  2018-11-11 15:20 ` Peter Zijlstra
  2018-11-11 15:40 ` Peter Zijlstra
  2 siblings, 1 reply; 8+ messages in thread
From: Giovanni Gherdovich @ 2018-11-10 19:10 UTC (permalink / raw)
  To: Rafael J. Wysocki, Linux PM, Doug Smythies
  Cc: Srinivas Pandruvada, Peter Zijlstra, LKML, Frederic Weisbecker,
	Mel Gorman, Daniel Lezcano

On Thu, 2018-11-08 at 18:25 +0100, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> Subject: [PATCH] cpuidle: New timer events oriented governor for tickless systems
> [...]

[NOTE: the tables in this message are quite wide, ~130 columns. If this
doesn't get to you properly formatted you can read a copy of this message at
the URL https://beta.suse.com/private/ggherdovich/teo-eval/teo-eval.html ]


Hello Rafael,

I have results for v3 and v5. Regarding v4, I made a mistake and didn't get
valid data; as I saw v5 coming shortly after, I didn't rerun v4.

I'm replying to the v5 thread because that's where these results belong, but
I'm quoting your text from the v2 email at
https://lore.kernel.org/lkml/4168371.zz0pVZtGOY@aspire.rjw.lan so that's
easier to follow along.

The quick summary is:

---> sockperf on loopback over UDP, mode "throughput":
     this had a 12% regression in v2 on 48x-HASWELL-NUMA, which is completely
     recovered in v3 and v5. Good stuff.

---> dbench on xfs:
     this was down 16% in v2 on 48x-HASWELL-NUMA. On v5 we're at a 10%
     regression. Slight improvement. What's really hurting here is the single
     client scenario.

---> netperf-udp on loopback:
     had 6% regression on v2 on 8x-SKYLAKE-UMA, which is the same as what
     happens in v5.

---> tbench on loopback:
     was down 10% in v2 on 8x-SKYLAKE-UMA, now slightly worse in v5 with a 12%
     regression. As in dbench, it's at low client counts that the results
     are worst. Note that this machine is different from the one that has the
     dbench regression.

A more detailed report follows below.

I maintain my original opinion from v2 that this governor is largely
performance-neutral and I'm not overly worried about the numbers above:

* results change from machine to machine: dbench is down 10% on
  48x-HASWELL-NUMA, but it also gives you the largest win on the board with a
  4% improvement on 8x-SKYLAKE-UMA. All regressions I mention only manifest on
  one out of three machines.

* similar benchmarks give contradicting results: dbench seems highly sensitive
  to this patch, but pgbench, sqlite, and fio are not. netperf-udp is slightly
  down on 48x-HASWELL-NUMA but sockperf-udp-throughput has benefited from v5
  on that same machine.

To raise an alert from the performance angle I have to see red on my board
from an entire category of benchmarks (ie I/O, or networking, or
scheduler-intensive, etc) and on a variety of hardware configurations. That's
not happening here.

On Sun, 2018-11-04 at 11:06 +0100, Rafael J. Wysocki wrote:
> On Wednesday, October 31, 2018 7:36:21 PM CET Giovanni Gherdovich wrote:
> >
> > [...]
> > I've tested your patches applying them on v4.18 (plus the backport
> > necessary for v2 as Doug helpfully noted), just because it was the latest
> > release when I started preparing this.

I did the same for v3 and v5: baseline is v4.18, using that backport from
linux-next.

> > 
> > I've tested it on three machines, with different generations of Intel CPUs:
> > 
> > * single socket E3-1240 v5 (Skylake 8 cores, which I'll call 8x-SKYLAKE-UMA)
> > * two sockets E5-2698 v4 (Broadwell 80 cores, 80x-BROADWELL-NUMA from here onwards)
> > * two sockets E5-2670 v3 (Haswell 48 cores, 48x-HASWELL-NUMA from here onwards)
> >

Same machines.

> > 
> > BENCHMARKS WITH NEUTRAL RESULTS
> > ===============================
> > 
> > These are the workloads where no noticeable difference is measured (on both
> > v1 and v2, all machines), together with the corresponding MMTests[1]
> > configuration file name:
> > 
> > * pgbench read-only on xfs, pgbench read/write on xfs
> >     * global-dhp__db-pgbench-timed-ro-small-xfs
> >     * global-dhp__db-pgbench-timed-rw-small-xfs
> > * siege
> >     * global-dhp__http-siege
> > * hackbench, pipetest
> >     * global-dhp__scheduler-unbound
> > * Linux kernel compilation
> >     * global-dhp__workload_kerndevel-xfs
> > * NASA Parallel Benchmarks, C-Class (linear algebra; run both with OpenMP
> >   and OpenMPI, over xfs)
> >     * global-dhp__nas-c-class-mpi-full-xfs
> >     * global-dhp__nas-c-class-omp-full
> > * FIO (Flexible IO) in several configurations
> >     * global-dhp__io-fio-randread-async-randwrite-xfs
> >     * global-dhp__io-fio-randread-async-seqwrite-xfs
> >     * global-dhp__io-fio-seqread-doublemem-32k-4t-xfs
> >     * global-dhp__io-fio-seqread-doublemem-4k-4t-xfs
> > * netperf on loopback over TCP
> >     * global-dhp__network-netperf-unbound
> 
> The above is great to know.

All of the above are confirmed, plus we can add to the group of neutral
benchmarks:

* xfsrepair
    * global-dhp__io-xfsrepair-xfs
* sqlite (insert operations on xfs)
    * global-dhp__db-sqlite-insert-medium-xfs
* schbench
    * global-dhp__workload_schbench
* gitsource on xfs (git unit tests, shell intensive)
    * global-dhp__workload_shellscripts-xfs

> 
> > BENCHMARKS WITH NON-NEUTRAL RESULTS: OVERVIEW
> > =============================================
> > 
> > These are benchmarks which exhibit a variation in their performance;
> > you'll see the magnitude of the changes is moderate and it's highly variable
> > from machine to machine. All percentages refer to the v4.18 baseline. In
> > more than one case the Haswell machine seems to prefer v1 to v2.
> >
> > [...]
> >
> > * netperf on loopback over UDP
> >     * global-dhp__network-netperf-unbound
> > 
> >                                     teo-v1          teo-v2
> >             -------------------------------------------------
> >             8x-SKYLAKE-UMA          no change       6% worse
> >             80x-BROADWELL-NUMA      1% worse        4% worse
> >             48x-HASWELL-NUMA        3% better       5% worse
> >

New data for netperf-udp, as 8x-SKYLAKE-UMA looked slightly off:

* netperf on loopback over UDP
    * global-dhp__network-netperf-unbound

                        teo-v1          teo-v2          teo-v3          teo-v5
  ------------------------------------------------------------------------------
  8x-SKYLAKE-UMA        no change       6% worse        4% worse        6% worse
  80x-BROADWELL-NUMA    1% worse        4% worse        no change       no change
  48x-HASWELL-NUMA      3% better       5% worse        7% worse        5% worse


> > [...]
> >
> > * sockperf on loopback over UDP, mode "throughput"
> >     * global-dhp__network-sockperf-unbound
> 
> Generally speaking, I'm not worried about single-digit percent differences,
> because overall they tend to fall into the noise range in the grand picture.
> 
> >                                     teo-v1          teo-v2
> >             -------------------------------------------------
> >             8x-SKYLAKE-UMA          1% worse        1% worse
> >             80x-BROADWELL-NUMA      3% better       2% better
> >             48x-HASWELL-NUMA        4% better       12% worse
> 
> But the 12% difference here is slightly worrisome.

Following up on the above:

* sockperf on loopback over UDP, mode "throughput"
    * global-dhp__network-sockperf-unbound

                        teo-v1          teo-v2          teo-v3          teo-v5
  -------------------------------------------------------------------------------
  8x-SKYLAKE-UMA        1% worse        1% worse        1% worse        1% worse
  80x-BROADWELL-NUMA    3% better       2% better       5% better       3% worse
  48x-HASWELL-NUMA      4% better       12% worse       no change       no change

> >
> > [...]
> > 
> > * dbench on xfs
> >         * global-dhp__io-dbench4-async-xfs
> > 
> >                                     teo-v1          teo-v2
> >             -------------------------------------------------
> >             8x-SKYLAKE-UMA          3% better       4% better
> >             80x-BROADWELL-NUMA      no change       no change
> >             48x-HASWELL-NUMA        6% worse        16% worse
> 
> And same here.

With new data:

* dbench on xfs
    * global-dhp__io-dbench4-async-xfs

                        teo-v1          teo-v2          teo-v3          teo-v5   
  -------------------------------------------------------------------------------
  8x-SKYLAKE-UMA        3% better       4% better       6% better       4% better
  80x-BROADWELL-NUMA    no change       no change       1% worse        3% worse
  48x-HASWELL-NUMA      6% worse        16% worse       8% worse        10% worse 


> 
> > * tbench on loopback
> >     * global-dhp__network-tbench
> > 
> >                                     teo-v1          teo-v2
> >             -------------------------------------------------
> >             8x-SKYLAKE-UMA          1% worse        10% worse
> >             80x-BROADWELL-NUMA      1% worse        1% worse
> >             48x-HASWELL-NUMA        1% worse        2% worse
> >

Update on tbench:

* tbench on loopback
    * global-dhp__network-tbench

                        teo-v1          teo-v2          teo-v3          teo-v5
  ------------------------------------------------------------------------------
  8x-SKYLAKE-UMA        1% worse        10% worse       11% worse       12% worse
  80x-BROADWELL-NUMA    1% worse        1% worse        no change       1% worse
  48x-HASWELL-NUMA      1% worse        2% worse        1% worse        1% worse

> > [...]
> > 
> > BENCHMARKS WITH NON-NEUTRAL RESULTS: DETAIL
> > ===========================================
> > 
> > Now some more detail. Each benchmark is run in a variety of configurations
> > (eg. number of threads, number of concurrent connections and so forth) each
> > of them giving a result. What you see above is the geometric mean of
> > "sub-results"; below is the detailed view where there was a regression
> > larger than 5% (either in v1 or v2, on any of the machines). That means
> > I'll exclude xfsrepair, sqlite, schbench and the git unit tests "gitsource"
> > that have negligible swings from the baseline.
> > 
> > In all tables asterisks indicate a statement about statistical
> > significance: the difference with baseline has a p-value smaller than 0.1
> > (small p-values indicate that the difference is real and not just random
> > noise).
> > 
> > NETPERF-UDP
> > ===========
> > NOTES: Test run in mode "stream" over UDP. The varying parameter is the
> >     message size in bytes. Each measurement is taken 5 times and the
> >     harmonic mean is reported.
> > MEASURES: Throughput in MBits/second, both on the sender and on the receiver end.
> > HIGHER is better
> > 
> > machine: 8x-SKYLAKE-UMA
> >                                      4.18.0                 4.18.0                 4.18.0
> >                                     vanilla                 teo-v1        teo-v2+backport
> > -----------------------------------------------------------------------------------------
> > Hmean     send-64         362.27 (   0.00%)      362.87 (   0.16%)      318.85 * -11.99%*
> > Hmean     send-128        723.17 (   0.00%)      723.66 (   0.07%)      660.96 *  -8.60%*
> > Hmean     send-256       1435.24 (   0.00%)     1427.08 (  -0.57%)     1346.22 *  -6.20%*
> > Hmean     send-1024      5563.78 (   0.00%)     5529.90 *  -0.61%*     5228.28 *  -6.03%*
> > Hmean     send-2048     10935.42 (   0.00%)    10809.66 *  -1.15%*    10521.14 *  -3.79%*
> > Hmean     send-3312     16898.66 (   0.00%)    16539.89 *  -2.12%*    16240.87 *  -3.89%*
> > Hmean     send-4096     19354.33 (   0.00%)    19185.43 (  -0.87%)    18600.52 *  -3.89%*
> > Hmean     send-8192     32238.80 (   0.00%)    32275.57 (   0.11%)    29850.62 *  -7.41%*
> > Hmean     send-16384    48146.75 (   0.00%)    49297.23 *   2.39%*    48295.51 (   0.31%)
> > Hmean     recv-64         362.16 (   0.00%)      362.87 (   0.19%)      318.82 * -11.97%*
> > Hmean     recv-128        723.01 (   0.00%)      723.66 (   0.09%)      660.89 *  -8.59%*
> > Hmean     recv-256       1435.06 (   0.00%)     1426.94 (  -0.57%)     1346.07 *  -6.20%*
> > Hmean     recv-1024      5562.68 (   0.00%)     5529.90 *  -0.59%*     5228.28 *  -6.01%*
> > Hmean     recv-2048     10934.36 (   0.00%)    10809.66 *  -1.14%*    10519.89 *  -3.79%*
> > Hmean     recv-3312     16898.65 (   0.00%)    16538.21 *  -2.13%*    16240.86 *  -3.89%*
> > Hmean     recv-4096     19351.99 (   0.00%)    19183.17 (  -0.87%)    18598.33 *  -3.89%*
> > Hmean     recv-8192     32238.74 (   0.00%)    32275.13 (   0.11%)    29850.39 *  -7.41%*
> > Hmean     recv-16384    48146.59 (   0.00%)    49296.23 *   2.39%*    48295.03 (   0.31%)
> 
> That is a bit worse than I would like it to be TBH.

update on netperf-udp:

machine: 8x-SKYLAKE-UMA
                                     4.18.0                 4.18.0                 4.18.0                 4.18.0                 4.18.0
                                    vanilla                    teo        teo-v2+backport        teo-v3+backport        teo-v5+backport
---------------------------------------------------------------------------------------------------------------------------------------
Hmean     send-64         362.27 (   0.00%)      362.87 (   0.16%)      318.85 * -11.99%*      347.08 *  -4.19%*      333.48 *  -7.95%*
Hmean     send-128        723.17 (   0.00%)      723.66 (   0.07%)      660.96 *  -8.60%*      676.46 *  -6.46%*      650.71 * -10.02%*
Hmean     send-256       1435.24 (   0.00%)     1427.08 (  -0.57%)     1346.22 *  -6.20%*     1359.59 *  -5.27%*     1323.83 *  -7.76%*
Hmean     send-1024      5563.78 (   0.00%)     5529.90 *  -0.61%*     5228.28 *  -6.03%*     5382.04 *  -3.27%*     5271.99 *  -5.24%*
Hmean     send-2048     10935.42 (   0.00%)    10809.66 *  -1.15%*    10521.14 *  -3.79%*    10610.29 *  -2.97%*    10544.58 *  -3.57%*
Hmean     send-3312     16898.66 (   0.00%)    16539.89 *  -2.12%*    16240.87 *  -3.89%*    16271.23 *  -3.71%*    15968.89 *  -5.50%*
Hmean     send-4096     19354.33 (   0.00%)    19185.43 (  -0.87%)    18600.52 *  -3.89%*    18692.16 *  -3.42%*    18408.69 *  -4.89%*
Hmean     send-8192     32238.80 (   0.00%)    32275.57 (   0.11%)    29850.62 *  -7.41%*    30066.83 *  -6.74%*    29824.62 *  -7.49%*
Hmean     send-16384    48146.75 (   0.00%)    49297.23 *   2.39%*    48295.51 (   0.31%)    48800.37 *   1.36%*    48247.73 (   0.21%)
Hmean     recv-64         362.16 (   0.00%)      362.87 (   0.19%)      318.82 * -11.97%*      347.07 *  -4.17%*      333.48 *  -7.92%*
Hmean     recv-128        723.01 (   0.00%)      723.66 (   0.09%)      660.89 *  -8.59%*      676.39 *  -6.45%*      650.63 * -10.01%*
Hmean     recv-256       1435.06 (   0.00%)     1426.94 (  -0.57%)     1346.07 *  -6.20%*     1359.45 *  -5.27%*     1323.81 *  -7.75%*
Hmean     recv-1024      5562.68 (   0.00%)     5529.90 *  -0.59%*     5228.28 *  -6.01%*     5381.37 *  -3.26%*     5271.45 *  -5.24%*
Hmean     recv-2048     10934.36 (   0.00%)    10809.66 *  -1.14%*    10519.89 *  -3.79%*    10610.28 *  -2.96%*    10544.58 *  -3.56%*
Hmean     recv-3312     16898.65 (   0.00%)    16538.21 *  -2.13%*    16240.86 *  -3.89%*    16269.34 *  -3.72%*    15967.13 *  -5.51%*
Hmean     recv-4096     19351.99 (   0.00%)    19183.17 (  -0.87%)    18598.33 *  -3.89%*    18690.13 *  -3.42%*    18407.45 *  -4.88%*
Hmean     recv-8192     32238.74 (   0.00%)    32275.13 (   0.11%)    29850.39 *  -7.41%*    30062.78 *  -6.75%*    29824.30 *  -7.49%*
Hmean     recv-16384    48146.59 (   0.00%)    49296.23 *   2.39%*    48295.03 (   0.31%)    48786.88 *   1.33%*    48246.71 (   0.21%)

Here is a plot of the raw benchmark data, where you can better see the
distribution and variability of the results:
https://beta.suse.com/private/ggherdovich/teo-eval/teo-eval.html#netperf-udp

> > [...]
> >
> > SOCKPERF-UDP-THROUGHPUT
> > =======================
> > NOTES: Test run in mode "throughput" over UDP. The varying parameter is the
> >     message size.
> > MEASURES: Throughput, in MBits/second
> > HIGHER is better
> > 
> > machine: 48x-HASWELL-NUMA
> >                               4.18.0                 4.18.0                 4.18.0
> >                              vanilla                 teo-v1        teo-v2+backport
> > ----------------------------------------------------------------------------------
> > Hmean     14        48.16 (   0.00%)       50.94 *   5.77%*       42.50 * -11.77%*
> > Hmean     100      346.77 (   0.00%)      358.74 *   3.45%*      303.31 * -12.53%*
> > Hmean     300     1018.06 (   0.00%)     1053.75 *   3.51%*      895.55 * -12.03%*
> > Hmean     500     1693.07 (   0.00%)     1754.62 *   3.64%*     1489.61 * -12.02%*
> > Hmean     850     2853.04 (   0.00%)     2948.73 *   3.35%*     2473.50 * -13.30%*
> 
> Well, in this case the consistent improvement in v1 turned into a consistent decline
> in the v2, and over 10% for that matter.  Needs improvement IMO.

Update: this one got resolved in v5,

machine: 48x-HASWELL-NUMA
                              4.18.0                 4.18.0                 4.18.0                 4.18.0                 4.18.0
                             vanilla                    teo        teo-v2+backport        teo-v3+backport        teo-v5+backport
--------------------------------------------------------------------------------------------------------------------------------
Hmean     14        48.16 (   0.00%)       50.94 *   5.77%*       42.50 * -11.77%*       48.91 *   1.55%*       49.06 *   1.87%*
Hmean     100      346.77 (   0.00%)      358.74 *   3.45%*      303.31 * -12.53%*      350.75 (   1.15%)      347.52 (   0.22%)
Hmean     300     1018.06 (   0.00%)     1053.75 *   3.51%*      895.55 * -12.03%*     1014.00 (  -0.40%)     1023.99 (   0.58%)
Hmean     500     1693.07 (   0.00%)     1754.62 *   3.64%*     1489.61 * -12.02%*     1688.50 (  -0.27%)     1698.43 (   0.32%)
Hmean     850     2853.04 (   0.00%)     2948.73 *   3.35%*     2473.50 * -13.30%*     2836.13 (  -0.59%)     2767.66 *  -2.99%*

plots of raw data at
https://beta.suse.com/private/ggherdovich/teo-eval/teo-eval.html#sockperf-udp-throughput

> 
> > DBENCH4
> > =======
> > NOTES: asynchronous IO; varies the number of clients up to NUMCPUS*8.
> > MEASURES: latency (millisecs)
> > LOWER is better
> > 
> > machine: 48x-HASWELL-NUMA
> >                               4.18.0                 4.18.0                 4.18.0
> >                              vanilla                 teo-v1        teo-v2+backport
> > ----------------------------------------------------------------------------------
> > Amean      1        37.15 (   0.00%)       50.10 ( -34.86%)       39.02 (  -5.03%)
> > Amean      2        43.75 (   0.00%)       45.50 (  -4.01%)       44.36 (  -1.39%)
> > Amean      4        54.42 (   0.00%)       58.85 (  -8.15%)       58.17 (  -6.89%)
> > Amean      8        75.72 (   0.00%)       74.25 (   1.94%)       82.76 (  -9.30%)
> > Amean      16      116.56 (   0.00%)      119.88 (  -2.85%)      164.14 ( -40.82%)
> > Amean      32      570.02 (   0.00%)      561.92 (   1.42%)      681.94 ( -19.63%)
> > Amean      64     3185.20 (   0.00%)     3291.80 (  -3.35%)     4337.43 ( -36.17%)
> 
> This one too.

Update:

machine: 48x-HASWELL-NUMA
                              4.18.0                 4.18.0                 4.18.0                 4.18.0                 4.18.0
                             vanilla                    teo        teo-v2+backport        teo-v3+backport        teo-v5+backport
--------------------------------------------------------------------------------------------------------------------------------
Amean      1        37.15 (   0.00%)       50.10 ( -34.86%)       39.02 (  -5.03%)       52.24 ( -40.63%)       51.62 ( -38.96%)
Amean      2        43.75 (   0.00%)       45.50 (  -4.01%)       44.36 (  -1.39%)       47.25 (  -8.00%)       44.20 (  -1.03%)
Amean      4        54.42 (   0.00%)       58.85 (  -8.15%)       58.17 (  -6.89%)       55.12 (  -1.29%)       58.07 (  -6.70%)
Amean      8        75.72 (   0.00%)       74.25 (   1.94%)       82.76 (  -9.30%)       78.63 (  -3.84%)       85.33 ( -12.68%)
Amean      16      116.56 (   0.00%)      119.88 (  -2.85%)      164.14 ( -40.82%)      124.87 (  -7.13%)      124.54 (  -6.85%)
Amean      32      570.02 (   0.00%)      561.92 (   1.42%)      681.94 ( -19.63%)      568.93 (   0.19%)      571.23 (  -0.21%)
Amean      64     3185.20 (   0.00%)     3291.80 (  -3.35%)     4337.43 ( -36.17%)     3181.13 (   0.13%)     3382.48 (  -6.19%)

It doesn't do well in the single-client scenario; v2 was a lot better at
that, but on the other hand it suffered at saturation (64 clients on a
48-core box). Plot at

https://beta.suse.com/private/ggherdovich/teo-eval/teo-eval.html#dbench4

> 
> > TBENCH4
> > =======
> > NOTES: networking counterpart of dbench. Varies the number of clients up to NUMCPUS*4
> > MEASURES: Throughput, MB/sec
> > HIGHER is better
> > 
> > machine: 8x-SKYLAKE-UMA
> >                                     4.18.0                 4.18.0                 4.18.0
> >                                    vanilla                    teo        teo-v2+backport
> > ----------------------------------------------------------------------------------------
> > Hmean     mb/sec-1       620.52 (   0.00%)      613.98 *  -1.05%*      502.47 * -19.03%*
> > Hmean     mb/sec-2      1179.05 (   0.00%)     1112.84 *  -5.62%*      820.57 * -30.40%*
> > Hmean     mb/sec-4      2072.29 (   0.00%)     2040.55 *  -1.53%*     2036.11 *  -1.75%*
> > Hmean     mb/sec-8      4238.96 (   0.00%)     4205.01 *  -0.80%*     4124.59 *  -2.70%*
> > Hmean     mb/sec-16     3515.96 (   0.00%)     3536.23 *   0.58%*     3500.02 *  -0.45%*
> > Hmean     mb/sec-32     3452.92 (   0.00%)     3448.94 *  -0.12%*     3428.08 *  -0.72%*
> > 
> 
> And same here.

New data:

machine: 8x-SKYLAKE-UMA
                                    4.18.0                 4.18.0                 4.18.0                 4.18.0                 4.18.0
                                   vanilla                    teo        teo-v2+backport        teo-v3+backport        teo-v5+backport
--------------------------------------------------------------------------------------------------------------------------------------
Hmean     mb/sec-1       620.52 (   0.00%)      613.98 *  -1.05%*      502.47 * -19.03%*      492.77 * -20.59%*      464.52 * -25.14%*
Hmean     mb/sec-2      1179.05 (   0.00%)     1112.84 *  -5.62%*      820.57 * -30.40%*      831.23 * -29.50%*      780.97 * -33.76%*
Hmean     mb/sec-4      2072.29 (   0.00%)     2040.55 *  -1.53%*     2036.11 *  -1.75%*     2016.97 *  -2.67%*     2019.79 *  -2.53%*
Hmean     mb/sec-8      4238.96 (   0.00%)     4205.01 *  -0.80%*     4124.59 *  -2.70%*     4098.06 *  -3.32%*     4171.64 *  -1.59%*
Hmean     mb/sec-16     3515.96 (   0.00%)     3536.23 *   0.58%*     3500.02 *  -0.45%*     3438.60 *  -2.20%*     3456.89 *  -1.68%*
Hmean     mb/sec-32     3452.92 (   0.00%)     3448.94 *  -0.12%*     3428.08 *  -0.72%*     3369.30 *  -2.42%*     3430.09 *  -0.66%*

Again, the pain point is at low client count; v1 OTOH was neutral. Plot at
https://beta.suse.com/private/ggherdovich/teo-eval/teo-eval.html#tbench4


Cheers,
Giovanni



* Re: [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems
  2018-11-08 17:25 [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems Rafael J. Wysocki
  2018-11-10 19:10 ` Giovanni Gherdovich
@ 2018-11-11 15:20 ` Peter Zijlstra
  2018-11-15  2:57   ` Rafael J. Wysocki
  2018-11-11 15:40 ` Peter Zijlstra
  2 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2018-11-11 15:20 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, LKML, Frederic Weisbecker, Mel Gorman,
	Daniel Lezcano

On Thu, Nov 08, 2018 at 06:25:07PM +0100, Rafael J. Wysocki wrote:
> +/*
> + * The SPIKE value is added to metrics when they grow and the DECAY_SHIFT value
> + * is used for decreasing metrics on a regular basis.
> + */
> +#define SPIKE		1024
> +#define DECAY_SHIFT	3

> +	if (idx_timer >= 0) {
> +		unsigned int hits = cpu_data->states[idx_timer].hits;
> +		unsigned int misses = cpu_data->states[idx_timer].misses;
> +
> +		hits -= hits >> DECAY_SHIFT;
> +		misses -= misses >> DECAY_SHIFT;
> +
> +		if (idx_timer > idx_hit) {
> +			misses += SPIKE;
> +			if (idx_hit >= 0)
> +				cpu_data->states[idx_hit].early_hits += SPIKE;
> +		} else {
> +			hits += SPIKE;
> +		}
> +
> +		cpu_data->states[idx_timer].misses = misses;
> +		cpu_data->states[idx_timer].hits = hits;
> +	}

That's a pulse-density-modulator, with a signal bound of:

  1024 == x/8 -> x := 8192

Anyway; I would suggest s/SPIKE/PULSE/ because of that.
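
For reference, a tiny standalone C program (not from the thread) that
demonstrates the bound derived above: a metric that receives the 1024
pulse on every update while decaying by 1/8 converges to 8192.

	#include <stdio.h>

	int main(void)
	{
		unsigned int x = 0;
		int i;

		/* Decay by 1/8, then add the pulse, as teo_update() does. */
		for (i = 0; i < 100; i++)
			x = x - (x >> 3) + 1024;

		printf("%u\n", x);	/* prints 8192, the fixed point */
		return 0;
	}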



* Re: [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems
  2018-11-08 17:25 [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems Rafael J. Wysocki
  2018-11-10 19:10 ` Giovanni Gherdovich
  2018-11-11 15:20 ` Peter Zijlstra
@ 2018-11-11 15:40 ` Peter Zijlstra
  2018-11-15  3:15   ` Rafael J. Wysocki
  2 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2018-11-11 15:40 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, LKML, Frederic Weisbecker, Mel Gorman,
	Daniel Lezcano

On Thu, Nov 08, 2018 at 06:25:07PM +0100, Rafael J. Wysocki wrote:
> +unsigned int teo_idle_duration(struct cpuidle_driver *drv,
> +			       struct teo_cpu *cpu_data,
> +			       unsigned int sleep_length_us)
> +{
> +	u64 range, max_spread, sum, max, min;
> +	unsigned int i, count;
> +
> +	/*
> +	 * If the sleep length is below the target residency of idle state 1,
> +	 * the only viable choice is to select the first available (enabled)
> +	 * idle state, so return immediately in that case.
> +	 */
> +	if (sleep_length_us < drv->states[1].target_residency)
> +		return sleep_length_us;
> +
> +	/*
> +	 * The purpose of this function is to check if there is a pattern of
> +	 * wakeups indicating that it would be better to select a state
> +	 * shallower than the deepest one matching the sleep length or the
> > +	 * deepest one at all if the sleep length is long.  Larger idle duration
> +	 * values are beyond the interesting range.
> +	 */
> +	range = drv->states[drv->state_count-1].target_residency;
> +	range = min_t(u64, sleep_length_us, range + (range >> 2));
> +
> +	/*
> +	 * This is the value to compare with the distance between the average
> +	 * and the greatest sample to decide whether or not it is small enough.
> > +	 * Use 10 us as the lower bound for it.
> +	 */
> +	max_spread = max_t(u64, range >> MAX_SPREAD_SHIFT, 10);
> +
> +	/*
> +	 * First pass: compute the sum of interesting samples, the minimum and
> +	 * maximum of them and count them.
> +	 */
> +	count = 0;
> +	sum = 0;
> +	max = 0;
> +	min = UINT_MAX;
> +
> +	for (i = 0; i < INTERVALS; i++) {
> +		u64 val = cpu_data->intervals[i];
> +
> +		if (val >= range)
> +			continue;
> +
> +		count++;
> +		sum += val;
> +		if (max < val)
> +			max = val;
> +
> +		if (min > val)
> +			min = val;
> +	}
> +
> +	/* Give up if the number of interesting samples is too small. */
> +	if (count <= INTERVALS / 2)
> +		return sleep_length_us;
> +
> +	/*
> +	 * If the distance between the max or min and the average is too large,
> +	 * try to refine by discarding the max, as long as the count is above 3.
> +	 */
> +	while (count > 3 && max > max_spread &&
> +	       ((max - max_spread) * count > sum ||
> +	        (min + max_spread) * count < sum)) {
> +
> +		range = max;
> +
> +		/*
> +		 * Compute the sum of samples in the interesting range.  Count
> +		 * them and find the maximum of them.
> +		 */
> +		count = 0;
> +		sum = 0;
> +		max = 0;
> +
> +		for (i = 0; i < INTERVALS; i++) {
> +			u64 val = cpu_data->intervals[i];
> +
> +			if (val >= range)
> +				continue;
> +
> +			count++;
> +			sum += val;
> +			if (max < val)
> +				max = val;
> +		}
> +	}
> +
> +	return div64_u64(sum, count);
> +}

By always discarding the larger values, you're searching for the first or
shortest peak, right?

While that is always a safe value, it might not be the best value.

Also; I think you can write the whole thing shorter; maybe like:


	do {
		count = sum = max = 0;
		min = UINT_MAX;

		for (i = 0; i < INTERVALS; i++) {
			u64 val = cpu_data->intervals[i];

			if (val >= range)
				continue;

			count++;
			sum += val;
			max = max(max, val);
			min = min(min, val);
		}

		range = max;

	} while (count > 3 && max > max_spread &&
	         ((max - max_spread) * count > sum ||
		  (min + max_spread) * count < sum));

per the fact that <= INTERVALS/2 := > 3, without assuming that you need
one more condition in there for the first pass or something.


Anyway; a fair while ago I proposed a different estimator. I've not had
time to dig through the 4 prior versions so I cannot tell if you've
already tried this, but the idea was simple:

  - track the last @n wakeup distances in the @idle-states buckets;
  - sum the buckets in increasing idle state and pick the state before
    you reach 50% of @n.

That is computationally cheaper than what you have, and should allow you
to increase @n without making the computation more expensive.
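
A minimal sketch of that estimator, assuming a per-CPU array of
per-state wakeup counts (hypothetical names, not code from this
thread):

	/*
	 * bucket[i] counts how many of the last n wakeups had an idle
	 * duration matching idle state i.  Walk the states from shallow
	 * to deep and stop just before the cumulative count crosses
	 * 50% of n.
	 */
	static int bucket_select(unsigned int *bucket, int nr_states,
				 unsigned int n)
	{
		unsigned int sum = 0;
		int i;

		for (i = 0; i < nr_states; i++) {
			sum += bucket[i];
			if (2 * sum > n)
				break;
		}
		return i > 0 ? i - 1 : 0;
	}

Updating the buckets on each wakeup is O(1), so the per-wakeup cost no
longer depends on the history length n.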



* Re: [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems
  2018-11-10 19:10 ` Giovanni Gherdovich
@ 2018-11-15  2:56   ` Rafael J. Wysocki
  0 siblings, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2018-11-15  2:56 UTC (permalink / raw)
  To: Giovanni Gherdovich
  Cc: Linux PM, Doug Smythies, Srinivas Pandruvada, Peter Zijlstra,
	LKML, Frederic Weisbecker, Mel Gorman, Daniel Lezcano

On Saturday, November 10, 2018 8:10:01 PM CET Giovanni Gherdovich wrote:
> On Thu, 2018-11-08 at 18:25 +0100, Rafael J. Wysocki wrote:
> > From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > Subject: [PATCH] cpuidle: New timer events oriented governor for tickless systems
> > 

[cut]

> [NOTE: the tables in this message are quite wide, ~130 columns. If this
> doesn't get to you properly formatted you can read a copy of this message at
> the URL https://beta.suse.com/private/ggherdovich/teo-eval/teo-eval.html ]
> 
> 
> Hello Rafael,
> 
> I have results for v3 and v5. Regarding v4, I made a mistake and didn't get
> valid data; as I saw v5 coming shortly after, I didn't rerun v4.
> 
> I'm replying to the v5 thread because that's where these results belong, but
> I'm quoting your text from the v2 email at
> https://lore.kernel.org/lkml/4168371.zz0pVZtGOY@aspire.rjw.lan so that's
> easier to follow along.

Thanks for the results, much appreciated!

> The quick summary is:
> 
> ---> sockperf on loopback over UDP, mode "throughput":
>      this had a 12% regression in v2 on 48x-HASWELL-NUMA, which is completely
>      recovered in v3 and v5. Good stuff.

That's good news, thanks!

> ---> dbench on xfs:
>      this was down 16% in v2 on 48x-HASWELL-NUMA. On v5 we're at a 10%
>      regression. Slight improvement. What's really hurting here is the single
>      client scenario.
> 
> ---> netperf-udp on loopback:
>      had a 6% regression in v2 on 8x-SKYLAKE-UMA, which is the same as what
>      happens in v5.
> 
> ---> tbench on loopback:
>      was down 10% in v2 on 8x-SKYLAKE-UMA, now slightly worse in v5 with a 12%
>      regression. As in dbench, it's at low client counts that the results
>      are worst. Note that this machine is different from the one that has the
>      dbench regression.

Clearly, playing with the pattern detection part of the governor alone is not
sufficient to make all of the workloads happy at the same time.

> A more detailed report follows below.
> 
> I maintain my original opinion from v2 that this governor is largely
> performance-neutral and I'm not overly worried about the numbers above:

OK, fair enough.

Cheers,
Rafael



* Re: [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems
  2018-11-11 15:20 ` Peter Zijlstra
@ 2018-11-15  2:57   ` Rafael J. Wysocki
  0 siblings, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2018-11-15  2:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, LKML, Frederic Weisbecker, Mel Gorman,
	Daniel Lezcano

On Sunday, November 11, 2018 4:20:34 PM CET Peter Zijlstra wrote:
> On Thu, Nov 08, 2018 at 06:25:07PM +0100, Rafael J. Wysocki wrote:
> > +/*
> > + * The SPIKE value is added to metrics when they grow and the DECAY_SHIFT value
> > + * is used for decreasing metrics on a regular basis.
> > + */
> > +#define SPIKE		1024
> > +#define DECAY_SHIFT	3
> 
> > +	if (idx_timer >= 0) {
> > +		unsigned int hits = cpu_data->states[idx_timer].hits;
> > +		unsigned int misses = cpu_data->states[idx_timer].misses;
> > +
> > +		hits -= hits >> DECAY_SHIFT;
> > +		misses -= misses >> DECAY_SHIFT;
> > +
> > +		if (idx_timer > idx_hit) {
> > +			misses += SPIKE;
> > +			if (idx_hit >= 0)
> > +				cpu_data->states[idx_hit].early_hits += SPIKE;
> > +		} else {
> > +			hits += SPIKE;
> > +		}
> > +
> > +		cpu_data->states[idx_timer].misses = misses;
> > +		cpu_data->states[idx_timer].hits = hits;
> > +	}
> 
> That's a pulse-density-modulator, with a signal bound of:
> 
>   1024 == x/8 -> x := 8192
> 
> Anyway; I would suggest s/SPIKE/PULSE/ because of that.

OK
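
For reference, that bound can be checked with a standalone snippet
(plain userspace C, not the governor code): with a decay of 1/8 per
update and a pulse of 1024, the metric converges to the fixed point of
x - x/8 + 1024 == x, i.e. 8192.

	#include <stdio.h>

	#define PULSE		1024
	#define DECAY_SHIFT	3

	int main(void)
	{
		unsigned int metric = 0;
		int i;

		/* Feed a constant stream of pulses; the metric saturates. */
		for (i = 0; i < 100; i++) {
			metric -= metric >> DECAY_SHIFT;
			metric += PULSE;
		}

		printf("saturated metric: %u\n", metric);	/* prints 8192 */
		return 0;
	}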




* Re: [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems
  2018-11-11 15:40 ` Peter Zijlstra
@ 2018-11-15  3:15   ` Rafael J. Wysocki
  2018-11-21 23:44     ` Rafael J. Wysocki
  0 siblings, 1 reply; 8+ messages in thread
From: Rafael J. Wysocki @ 2018-11-15  3:15 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, LKML, Frederic Weisbecker, Mel Gorman,
	Daniel Lezcano

On Sunday, November 11, 2018 4:40:17 PM CET Peter Zijlstra wrote:
> On Thu, Nov 08, 2018 at 06:25:07PM +0100, Rafael J. Wysocki wrote:
> > +unsigned int teo_idle_duration(struct cpuidle_driver *drv,
> > +			       struct teo_cpu *cpu_data,
> > +			       unsigned int sleep_length_us)
> > +{
> > +	u64 range, max_spread, sum, max, min;
> > +	unsigned int i, count;
> > +
> > +	/*
> > +	 * If the sleep length is below the target residency of idle state 1,
> > +	 * the only viable choice is to select the first available (enabled)
> > +	 * idle state, so return immediately in that case.
> > +	 */
> > +	if (sleep_length_us < drv->states[1].target_residency)
> > +		return sleep_length_us;
> > +
> > +	/*
> > +	 * The purpose of this function is to check if there is a pattern of
> > +	 * wakeups indicating that it would be better to select a state
> > +	 * shallower than the deepest one matching the sleep length or the
> > +	 * deepest one at all if the sleep length is long.  Larger idle duration
> > +	 * values are beyond the interesting range.
> > +	 */
> > +	range = drv->states[drv->state_count-1].target_residency;
> > +	range = min_t(u64, sleep_length_us, range + (range >> 2));
> > +
> > +	/*
> > +	 * This is the value to compare against the distance between the average
> > +	 * and the most extreme sample to decide whether or not that distance is
> > +	 * small enough.  Use 10 us as its lower bound.
> > +	 */
> > +	max_spread = max_t(u64, range >> MAX_SPREAD_SHIFT, 10);
> > +
> > +	/*
> > +	 * First pass: compute the sum of interesting samples, the minimum and
> > +	 * maximum of them and count them.
> > +	 */
> > +	count = 0;
> > +	sum = 0;
> > +	max = 0;
> > +	min = UINT_MAX;
> > +
> > +	for (i = 0; i < INTERVALS; i++) {
> > +		u64 val = cpu_data->intervals[i];
> > +
> > +		if (val >= range)
> > +			continue;
> > +
> > +		count++;
> > +		sum += val;
> > +		if (max < val)
> > +			max = val;
> > +
> > +		if (min > val)
> > +			min = val;
> > +	}
> > +
> > +	/* Give up if the number of interesting samples is too small. */
> > +	if (count <= INTERVALS / 2)
> > +		return sleep_length_us;
> > +
> > +	/*
> > +	 * If the distance between the max or min and the average is too large,
> > +	 * try to refine by discarding the max, as long as the count is above 3.
> > +	 */
> > +	while (count > 3 && max > max_spread &&
> > +	       ((max - max_spread) * count > sum ||
> > +	        (min + max_spread) * count < sum)) {
> > +
> > +		range = max;
> > +
> > +		/*
> > +		 * Compute the sum of samples in the interesting range.  Count
> > +		 * them and find the maximum of them.
> > +		 */
> > +		count = 0;
> > +		sum = 0;
> > +		max = 0;
> > +
> > +		for (i = 0; i < INTERVALS; i++) {
> > +			u64 val = cpu_data->intervals[i];
> > +
> > +			if (val >= range)
> > +				continue;
> > +
> > +			count++;
> > +			sum += val;
> > +			if (max < val)
> > +				max = val;
> > +		}
> > +	}
> > +
> > +	return div64_u64(sum, count);
> > +}
> 
> By always discarding the larger value, you're searching for the first or
> shortest peak, right?

Yes, basically.

> While that is always a safe value, it might not be the best value.

Sure.

> Also, I think you can write the whole thing shorter; maybe like:
> 
> 
> 	do {
> 		count = sum = max = 0;
> 		min = UINT_MAX;
> 
> 		for (i = 0; i < INTERVALS; i++) {
> 			u64 val = cpu_data->intervals[i];
> 
> 			if (val >= range)
> 				continue;
> 
> 			count++;
> 			sum += val;
> 			max = max(max, val);
> 			min = min(min, val);
> 		}
> 
> 		range = max;
> 
> 	} while (count > 3 && max > max_spread &&
> 	         ((max - max_spread) * count > sum ||
> 		  (min + max_spread) * count < sum));
> 
> per the fact that the count <= INTERVALS/2 cutoff amounts to count > 3,
> without assuming that you need one more condition in there for the first
> pass or something.

I wrote it this way because the min only needs to be computed in the first
pass, and the count from the first pass is compared with INTERVALS/2 to
decide whether or not to do more passes.

> Anyway; a fair while ago I proposed a different estimator. I've not had
> time to dig through the 4 prior versions so I cannot tell if you've
> already tried this, but the idea was simple:
> 
>   - track the last @n wakeup distances in the @idle-states buckets;
>   - sum the buckets in increasing idle state and pick the state before
>     you reach 50% of @n.
> 
> That is computationally cheaper than what you have; and should allow you
> to increase @n without making the computation more expensive.

I can do something similar (in fact, I already have a prototype), but do it
after walking the idle states once (which gives us a preliminary idle state
choice based on the time to the closest timer and the "history") and then
sum the buckets below the idle state selected so far in decreasing order
(which will tend to select a shallower state in "tie" situations).
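
Roughly like this, as a sketch (one possible reading of the idea; the
names are made up and the prototype may differ):

	/*
	 * Starting just below the preliminary candidate and going down,
	 * accumulate the bucket counts; whenever the accumulated samples
	 * reach half of the total, shift the choice down to that state,
	 * so that "ties" favor the shallower state.
	 */
	static int refine_candidate(const unsigned int bucket[], int candidate,
				    unsigned int total)
	{
		unsigned int below = 0;
		int i;

		for (i = candidate - 1; i >= 0; i--) {
			below += bucket[i];
			if (2 * below >= total)
				candidate = i;
		}

		return candidate;
	}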



* Re: [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems
  2018-11-15  3:15   ` Rafael J. Wysocki
@ 2018-11-21 23:44     ` Rafael J. Wysocki
  0 siblings, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2018-11-21 23:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rafael J. Wysocki, Linux PM, Giovanni Gherdovich, Doug Smythies,
	Srinivas Pandruvada, LKML, Frederic Weisbecker, Mel Gorman,
	Daniel Lezcano

On Thursday, November 15, 2018 4:15:59 AM CET Rafael J. Wysocki wrote:
> On Sunday, November 11, 2018 4:40:17 PM CET Peter Zijlstra wrote:

[cut]

> 
> > Anyway; a fair while ago I proposed a different estimator. I've not had
> > time to dig through the 4 prior versions so I cannot tell if you've
> > already tried this, but the idea was simple:
> > 
> >   - track the last @n wakeup distances in the @idle-states buckets;
> >   - sum the buckets in increasing idle state and pick the state before
> >     you reach 50% of @n.
> > 
> > That is computationally cheaper than what you have; and should allow you
> > to increase @n without making the computation more expensive.
> 
> I can do something similar (in fact, I already have a prototype), but do it
> after walking the idle states once (which gives us a preliminary idle state
> choice based on the time to the closest timer and the "history") and then
> sum the buckets below the idle state selected so far in decreasing order
> (which will tend to select a shallower state in "tie" situations).

I did that, but the result was not encouraging as it caused state 0 (polling)
to be selected way too often (and the total time spent in it was significantly
greater too).

I have gone back to averaging over the most recent observed idle duration
values, but now I'm doing that after selecting a candidate idle state
(based on the time to the closest timer and the "history") and only taking
values below the target residency of that state into account.  Seems to work
reasonably well.
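
Roughly along these lines (an illustrative sketch only, not the actual
v6 code; the INTERVALS/2 cutoff is an assumption carried over from v5):

	#define INTERVALS	8

	static unsigned int recent_short_avg(const unsigned int intervals[INTERVALS],
					     unsigned int target_residency_us)
	{
		unsigned int sum = 0, count = 0, i;

		for (i = 0; i < INTERVALS; i++) {
			/* Only consider samples below the candidate's target residency. */
			if (intervals[i] < target_residency_us) {
				sum += intervals[i];
				count++;
			}
		}

		/* Too few short samples: stick with the candidate state. */
		if (count <= INTERVALS / 2)
			return target_residency_us;

		return sum / count;
	}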

I'll send a v6 tomorrow if all goes well.



Thread overview: 8+ messages
2018-11-08 17:25 [RFC/RFT][PATCH v5] cpuidle: New timer events oriented governor for tickless systems Rafael J. Wysocki
2018-11-10 19:10 ` Giovanni Gherdovich
2018-11-15  2:56   ` Rafael J. Wysocki
2018-11-11 15:20 ` Peter Zijlstra
2018-11-15  2:57   ` Rafael J. Wysocki
2018-11-11 15:40 ` Peter Zijlstra
2018-11-15  3:15   ` Rafael J. Wysocki
2018-11-21 23:44     ` Rafael J. Wysocki
