* [PATCH] cpuidle: fix the menu governor to enhance IO performance
From: Yu, Ke @ 2009-12-10 11:26 UTC
To: Keir Fraser, Wei, Gang; +Cc: Xen-Devel
cpuidle: fix the menu governor to enhance IO performance
This is a revised version of Linux upstream commit 69d25870f20c4b2563304f2b79c5300dd60a067e:
"
cpuidle: fix the menu governor to boost IO performance
Fix the menu idle governor which balances power savings, energy efficiency
and performance impact.
The reason for a reworked governor is that there have been serious
performance issues reported with the existing code on Nehalem server
systems.
To show this I'm sure Andrew wants to see benchmark results:
(benchmark is "fio", "no cstates" is using "idle=poll")
              no cstates   current linux   new algorithm
  1 disk      107 Mb/s      85 Mb/s        105 Mb/s
  2 disks     215 Mb/s     123 Mb/s        209 Mb/s
  12 disks    590 Mb/s     320 Mb/s        585 Mb/s
In various power benchmark measurements, no degradation was found by our
measurement&diagnostics team. Obviously a small percentage more power was
used in the "fio" benchmark, due to the much higher performance.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
"
In the Xen version, most of the logic is similar, with one exception: Linux uses nr_iowait
and loadavg to track pending I/O requests, which are not visible to Xen, so Xen
uses the do_IRQ frequency to estimate the I/O pressure. This is not as accurate as the
Linux approach; a better one would be to convey the guest latency requirement to the
hypervisor via virtual C states, which can be a future enhancement.
The detailed algorithm description is in the code comments. With this new algorithm, fio
benchmark performance improves ~5% with 1 disk, and no power degradation is found in the
idle case.
Signed-off-by: Yu Ke <ke.yu@intel.com>
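As an illustration of the prediction math in the patch below, here is a
minimal standalone sketch of the bucketed correction factor. It is not
part of the patch: the constants mirror the patch, but the per-CPU state
and driver plumbing are omitted and the sample numbers are made up.

#include <stdio.h>
#include <stdint.h>

#define BUCKETS    6
#define RESOLUTION 1024
#define DECAY      4

/* same order-of-magnitude buckets as the patch */
static int which_bucket(unsigned int duration_us)
{
    if (duration_us < 10)     return 0;
    if (duration_us < 100)    return 1;
    if (duration_us < 1000)   return 2;
    if (duration_us < 10000)  return 3;
    if (duration_us < 100000) return 4;
    return 5;
}

int main(void)
{
    uint64_t correction[BUCKETS];
    uint64_t predicted_us, new_factor;
    unsigned int expected_us = 500; /* timer-based estimate */
    unsigned int measured_us = 250; /* what actually happened */
    int b, i;

    /* first-time init: unity factor (ratio 1.0) */
    for (i = 0; i < BUCKETS; i++)
        correction[i] = RESOLUTION * DECAY;

    b = which_bucket(expected_us);

    /* select: scale the estimate by the learned ratio, rounding */
    predicted_us = (expected_us * correction[b]
                    + (RESOLUTION * DECAY) / 2) / (RESOLUTION * DECAY);
    printf("predicted %llu us\n", (unsigned long long)predicted_us);

    /* reflect: decay the old factor, mix in the new observation */
    new_factor = correction[b] * (DECAY - 1) / DECAY
                 + (uint64_t)RESOLUTION * measured_us / expected_us;
    correction[b] = new_factor ? new_factor : 1; /* never 0 */

    predicted_us = (expected_us * correction[b]
                    + (RESOLUTION * DECAY) / 2) / (RESOLUTION * DECAY);
    printf("after one reflect: predicted %llu us\n",
           (unsigned long long)predicted_us);
    return 0;
}

Each reflect pulls the bucket's ratio a quarter of the way toward the
latest observed ratio, which is exactly the DECAY=4 running average in
menu_reflect(): starting from unity, one 50% observation moves the
prediction for a 500 us timer estimate from 500 us down to 438 us.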
diff -r 8f304c003af4 xen/arch/x86/acpi/cpuidle_menu.c
--- a/xen/arch/x86/acpi/cpuidle_menu.c
+++ b/xen/arch/x86/acpi/cpuidle_menu.c
@@ -30,26 +30,154 @@
#include <xen/acpi.h>
#include <xen/timer.h>
#include <xen/cpuidle.h>
+#include <asm/irq.h>
-#define BREAK_FUZZ 4 /* 4 us */
-#define PRED_HISTORY_PCT 50
-#define USEC_PER_SEC 1000000
+#define BUCKETS 6
+#define RESOLUTION 1024
+#define DECAY 4
+#define MAX_INTERESTING 50000
+
+/*
+ * Concepts and ideas behind the menu governor
+ *
+ * For the menu governor, there are 3 decision factors for picking a C
+ * state:
+ * 1) Energy break even point
+ * 2) Performance impact
+ * 3) Latency tolerance (TBD: from guest virtual C state)
+ * These three factors are treated independently.
+ *
+ * Energy break even point
+ * -----------------------
+ * C state entry and exit have an energy cost, and a certain amount of time in
+ * the C state is required to actually break even on this cost. CPUIDLE
+ * provides us this duration in the "target_residency" field. So all that we
+ * need is a good prediction of how long we'll be idle. Like the traditional
+ * menu governor, we start with the actual known "next timer event" time.
+ *
+ * Since there are other sources of wakeups (interrupts for example) than
+ * the next timer event, this estimation is rather optimistic. To get a
+ * more realistic estimate, a correction factor is applied to the estimate,
+ * that is based on historic behavior. For example, if in the past the actual
+ * duration always was 50% of the next timer tick, the correction factor will
+ * be 0.5.
+ *
+ * menu uses a running average for this correction factor, however it uses a
+ * set of factors, not just a single factor. This stems from the realization
+ * that the ratio is dependent on the order of magnitude of the expected
+ * duration; if we expect 500 milliseconds of idle time the likelihood of
+ * getting an interrupt very early is much higher than if we expect 50 micro
+ * seconds of idle time.
+ * For this reason we keep an array of 6 independent factors, that gets
+ * indexed based on the magnitude of the expected duration
+ *
+ * Limiting Performance Impact
+ * ---------------------------
+ * C states, especially those with large exit latencies, can have a real
+ * noticeable impact on workloads, which is not acceptable for most sysadmins,
+ * and in addition, less performance has a power price of its own.
+ *
+ * As a general rule of thumb, menu assumes that the following heuristic
+ * holds:
+ * The busier the system, the less impact of C states is acceptable
+ *
+ * This rule-of-thumb is implemented using a performance-multiplier:
+ * If the exit latency times the performance multiplier is longer than
+ * the predicted duration, the C state is not considered a candidate
+ * for selection due to a too high performance impact. So the higher
+ * this multiplier is, the longer we need to be idle to pick a deep C
+ * state, and thus the less likely a busy CPU will hit such a deep
+ * C state.
+ *
+ * Currently one factor is used in determining this multiplier:
+ * the do_IRQ frequency during the sampling period (5 millisec); a 4x
+ * multiplier is applied to the irq frequency.
+ * (these values are experimentally determined)
+ *
+ */
+
+struct perf_factor{
+ unsigned int last_irq_count;
+ unsigned int irq_count_sum;
+ s_time_t time_stamp;
+ unsigned int factor;
+};
struct menu_device
{
int last_state_idx;
unsigned int expected_us;
- unsigned int predicted_us;
- unsigned int current_predicted_us;
- unsigned int last_measured_us;
- unsigned int elapsed_us;
+ u64 predicted_us;
+ unsigned int measured_us;
+ unsigned int exit_us;
+ unsigned int bucket;
+ u64 correction_factor[BUCKETS];
+ struct perf_factor pf;
};
static DEFINE_PER_CPU(struct menu_device, menu_devices);
+static inline int which_bucket(unsigned int duration)
+{
+ int bucket = 0;
+
+ if (duration < 10)
+ return bucket;
+ if (duration < 100)
+ return bucket + 1;
+ if (duration < 1000)
+ return bucket + 2;
+ if (duration < 10000)
+ return bucket + 3;
+ if (duration < 100000)
+ return bucket + 4;
+ return bucket + 5;
+}
+
+/*
+ * Return a multiplier for the exit latency that is intended
+ * to take performance requirements into account.
+ * The more performance critical we estimate the system
+ * to be, the higher this multiplier, and thus the higher
+ * the barrier to go to an expensive C state.
+ */
+
+/* 5 millisec sampling period */
+#define SAMPLING_PERIOD 5000000
+
+/* 4x experimental multiplier for IO intensive */
+#define IO_MULTIPLIER 4
+
+static inline int performance_multiplier(void)
+{
+ int mult = 1;
+ unsigned int factor, irq_count_delta;
+ struct menu_device *data = &__get_cpu_var(menu_devices);
+ s_time_t duration, now;
+
+ now = NOW();
+ duration = now - data->pf.time_stamp;
+
+ irq_count_delta = IO_MULTIPLIER *
+ (this_cpu(irq_count) - data->pf.last_irq_count);
+
+ if ( duration < SAMPLING_PERIOD){
+ mult += (data->pf.factor + irq_count_delta * (DECAY-1)) / DECAY;
+ }
+ else{
+ factor = irq_count_delta * SAMPLING_PERIOD / duration;
+ data->pf.factor = (data->pf.factor + factor * (DECAY-1)) / DECAY;
+ data->pf.time_stamp = now;
+ data->pf.last_irq_count = this_cpu(irq_count);
+ mult += data->pf.factor;
+ }
+
+ return mult;
+}
+
static unsigned int get_sleep_length_us(void)
{
- s_time_t us = (per_cpu(timer_deadline, smp_processor_id()) - NOW()) / 1000;
+ s_time_t us = DIV_ROUND_UP(this_cpu(timer_deadline) - NOW() , 1000);
/*
* while us < 0 or us > (u32)-1, return a large u32,
* choose (unsigned int)-2000 to avoid wrapping while added with exit
@@ -62,57 +190,86 @@ static int menu_select(struct acpi_proce
{
struct menu_device *data = &__get_cpu_var(menu_devices);
int i;
+ int multiplier;
- /* determine the expected residency time */
+ /* TBD: Change to 0 if C0(polling mode) support is added later*/
+ data->last_state_idx = CPUIDLE_DRIVER_STATE_START;
+ data->exit_us = 0;
+
+ /* determine the expected residency time, round up */
data->expected_us = get_sleep_length_us();
- /* Recalculate predicted_us based on prediction_history_pct */
- data->predicted_us *= PRED_HISTORY_PCT;
- data->predicted_us += (100 - PRED_HISTORY_PCT) *
- data->current_predicted_us;
- data->predicted_us /= 100;
+ data->bucket = which_bucket(data->expected_us);
+
+ multiplier = performance_multiplier();
+
+ /*
+ * if the correction factor is 0 (eg first time init or cpu hotplug
+ * etc), we actually want to start out with a unity factor.
+ */
+ if (data->correction_factor[data->bucket] == 0)
+ data->correction_factor[data->bucket] = RESOLUTION * DECAY;
+
+ /* Make sure to round up for half microseconds */
+ data->predicted_us = DIV_ROUND(
+ data->expected_us * data->correction_factor[data->bucket],
+ RESOLUTION * DECAY);
/* find the deepest idle state that satisfies our constraints */
- for ( i = 2; i < power->count; i++ )
+ for ( i = CPUIDLE_DRIVER_STATE_START + 1; i < power->count; i++ )
{
struct acpi_processor_cx *s = &power->states[i];
- if ( s->target_residency > data->expected_us + s->latency )
+ if (s->target_residency > data->predicted_us)
break;
- if ( s->target_residency > data->predicted_us )
+ if (s->latency * multiplier > data->predicted_us)
break;
/* TBD: we need to check the QoS requirement in future */
+ data->exit_us = s->latency;
+ data->last_state_idx = i;
}
- data->last_state_idx = i - 1;
- return i - 1;
+ return data->last_state_idx;
}
static void menu_reflect(struct acpi_processor_power *power)
{
struct menu_device *data = &__get_cpu_var(menu_devices);
- struct acpi_processor_cx *target = &power->states[data->last_state_idx];
- unsigned int last_residency;
+ unsigned int last_idle_us = power->last_residency;
unsigned int measured_us;
+ u64 new_factor;
- last_residency = power->last_residency;
- measured_us = last_residency + data->elapsed_us;
+ measured_us = last_idle_us;
- /* if wrapping, set to max uint (-1) */
- measured_us = data->elapsed_us <= measured_us ? measured_us : -1;
+ /*
+ * We correct for the exit latency; we are assuming here that the
+ * exit latency happens after the event that we're interested in.
+ */
+ if (measured_us > data->exit_us)
+ measured_us -= data->exit_us;
- /* Predict time remaining until next break event */
- data->current_predicted_us = max(measured_us, data->last_measured_us);
+ /* update our correction ratio */
- /* Distinguish between expected & non-expected events */
- if ( last_residency + BREAK_FUZZ
- < data->expected_us + target->latency )
- {
- data->last_measured_us = measured_us;
- data->elapsed_us = 0;
- }
+ new_factor = data->correction_factor[data->bucket]
+ * (DECAY - 1) / DECAY;
+
+ if (data->expected_us > 0 && measured_us < MAX_INTERESTING)
+ new_factor += RESOLUTION * measured_us / data->expected_us;
else
- data->elapsed_us = measured_us;
+ /*
+ * we were idle so long that we count it as a perfect
+ * prediction
+ */
+ new_factor += RESOLUTION;
+
+ /*
+ * We don't want 0 as factor; we always want at least
+ * a tiny bit of estimated time.
+ */
+ if (new_factor == 0)
+ new_factor = 1;
+
+ data->correction_factor[data->bucket] = new_factor;
}
static int menu_enable_device(struct acpi_processor_power *power)
diff -r 8f304c003af4 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -517,6 +517,8 @@ void irq_set_affinity(int irq, cpumask_t
cpus_copy(desc->pending_mask, mask);
}
+DEFINE_PER_CPU(unsigned int, irq_count);
+
asmlinkage void do_IRQ(struct cpu_user_regs *regs)
{
struct irqaction *action;
@@ -527,6 +529,8 @@ asmlinkage void do_IRQ(struct cpu_user_r
struct cpu_user_regs *old_regs = set_irq_regs(regs);
perfc_incr(irqs);
+
+ this_cpu(irq_count)++;
if (irq < 0) {
ack_APIC_irq();
diff -r 8f304c003af4 xen/include/asm-x86/irq.h
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -105,6 +105,8 @@ extern atomic_t irq_err_count;
extern atomic_t irq_err_count;
extern atomic_t irq_mis_count;
+DECLARE_PER_CPU(unsigned int, irq_count);
+
int pirq_shared(struct domain *d , int irq);
int map_domain_pirq(struct domain *d, int pirq, int irq, int type,
diff -r 8f304c003af4 xen/include/xen/cpuidle.h
--- a/xen/include/xen/cpuidle.h
+++ b/xen/include/xen/cpuidle.h
@@ -86,4 +86,6 @@ extern struct cpuidle_governor *cpuidle_
extern struct cpuidle_governor *cpuidle_current_governor;
void cpuidle_disable_deep_cstate(void);
+#define CPUIDLE_DRIVER_STATE_START 1
+
#endif /* _XEN_CPUIDLE_H */
diff -r 8f304c003af4 xen/include/xen/lib.h
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -44,6 +44,7 @@ do {
do { typeof(_a) _t = (_a); (_a) = (_b); (_b) = _t; } while ( 0 )
#define DIV_ROUND(x, y) (((x) + (y) / 2) / (y))
+#define DIV_ROUND_UP(x,y) (((x) + (y) - 1) / (y))
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]) + __must_be_array(x))
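To see how the irq-derived multiplier gates the selection loop in
menu_select() above, here is a rough standalone sketch; the C-state
table and the numbers are hypothetical, not from any real platform.

#include <stdio.h>

struct cx {
    const char *name;
    unsigned int latency_us;
    unsigned int target_residency_us;
};

int main(void)
{
    /* hypothetical C-state table */
    struct cx states[] = {
        { "C1",   3,   6 },
        { "C2",  20,  80 },
        { "C3", 100, 400 },
    };
    unsigned int predicted_us = 300;
    unsigned int mult;
    int chosen, i;

    /* mult is 1 plus the decayed 4x irq rate; a busy disk workload
     * easily pushes it well past 10 */
    for (mult = 1; mult <= 16; mult *= 4) {
        chosen = -1;
        for (i = 0; i < 3; i++) {
            if (states[i].target_residency_us > predicted_us)
                break;
            if (states[i].latency_us * mult > predicted_us)
                break;
            chosen = i;
        }
        printf("mult=%2u -> %s\n", mult,
               chosen >= 0 ? states[chosen].name : "stay shallow");
    }
    return 0;
}

With the same 300 us prediction, raising the multiplier from 1 to 16
demotes the choice from C2 to C1: the busier the CPU looks, the
shallower the state, independent of the timer-based estimate.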
* RE: [PATCH] cpuidle: fix the menu governor to enhance IO performance
From: Yu, Ke @ 2009-12-11 16:07 UTC
To: Keir Fraser, Wei, Gang; +Cc: Xen-Devel
Hi Keir,
Please use the attached version 2 patch. It has the following updates:
- rebased to the latest cset 20625, fixing the code conflict (mainly caused by cset 20611)
- removed hpet irqs from the I/O multiplier calculation, since the hpet is mainly for lapic timer broadcast and is not related to I/O requests
- refined the I/O multiplier by using the average interrupt interval, which is logically cleaner and maintains the same performance as v1 (a rough sketch of the averaging follows below)
Best Regards
Ke
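A rough sketch of the averaging mentioned in the last bullet, with
made-up sample numbers; the attached patch keeps this state per CPU in
struct perf_factor and refreshes it once per 5 ms sampling period.

#include <stdio.h>

#define DECAY 4

int main(void)
{
    /* history carried over from the previous sampling window */
    long long hist_duration_ns = 5000000; /* 5 ms */
    unsigned int hist_irq_sum  = 40;

    /* what happened since the last time stamp */
    long long delta_ns      = 2000000;    /* 2 ms */
    unsigned int delta_irqs = 25;

    /* decayed mix: 1 part history, DECAY-1 parts new observation */
    long long duration   = (hist_duration_ns + delta_ns * (DECAY - 1)) / DECAY;
    unsigned int irq_sum = (hist_irq_sum + delta_irqs * (DECAY - 1)) / DECAY;

    long long avg_us = irq_sum ? duration / irq_sum / 1000
                               : 1000000; /* quiet CPU: report 1 s */
    printf("avg interrupt interval ~%lld us\n", avg_us);
    return 0;
}

With these numbers the average interval comes out at ~98 us, so with
the 8x multiplier any C state with an exit latency above 12 us would
already be vetoed.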
[-- Attachment #2: cpuidle-io-v2.patch --]
[-- Type: application/octet-stream, Size: 13670 bytes --]
cpuidle: fix the menu governor to enhance IO performance
This is a revised version of Linux upstream commit 69d25870f20c4b2563304f2b79c5300dd60a067e:
"
cpuidle: fix the menu governor to boost IO performance
Fix the menu idle governor which balances power savings, energy efficiency
and performance impact.
The reason for a reworked governor is that there have been serious
performance issues reported with the existing code on Nehalem server
systems.
To show this I'm sure Andrew wants to see benchmark results:
(benchmark is "fio", "no cstates" is using "idle=poll")
              no cstates   current linux   new algorithm
  1 disk      107 Mb/s      85 Mb/s        105 Mb/s
  2 disks     215 Mb/s     123 Mb/s        209 Mb/s
  12 disks    590 Mb/s     320 Mb/s        585 Mb/s
In various power benchmark measurements, no degradation was found by our
measurement&diagnostics team. Obviously a small percentage more power was
used in the "fio" benchmark, due to the much higher performance.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
"
In the Xen version, most of the logic is similar, with one exception: Linux uses nr_iowait
and loadavg to track pending I/O requests, which are not visible to Xen, so Xen
uses the average irq interval to estimate the I/O pressure. This is not as accurate as the
Linux approach; a better one would be to convey the guest latency requirement to the
hypervisor via virtual C states, which can be a future enhancement.
The detailed algorithm description is in the code comments. With this new algorithm, fio
benchmark performance improves ~5% with 1 disk, and no power degradation is found in the
idle case.
Signed-off-by: Yu Ke <ke.yu@intel.com>
diff -r b928797213ac xen/arch/x86/acpi/cpuidle_menu.c
--- a/xen/arch/x86/acpi/cpuidle_menu.c
+++ b/xen/arch/x86/acpi/cpuidle_menu.c
@@ -30,22 +30,146 @@
#include <xen/acpi.h>
#include <xen/timer.h>
#include <xen/cpuidle.h>
+#include <asm/irq.h>
-#define BREAK_FUZZ 4 /* 4 us */
-#define PRED_HISTORY_PCT 50
-#define USEC_PER_SEC 1000000
+#define BUCKETS 6
+#define RESOLUTION 1024
+#define DECAY 4
+#define MAX_INTERESTING 50000
+
+/*
+ * Concepts and ideas behind the menu governor
+ *
+ * For the menu governor, there are 3 decision factors for picking a C
+ * state:
+ * 1) Energy break even point
+ * 2) Performance impact
+ * 3) Latency tolerance (TBD: from guest virtual C state)
+ * These three factors are treated independently.
+ *
+ * Energy break even point
+ * -----------------------
+ * C state entry and exit have an energy cost, and a certain amount of time in
+ * the C state is required to actually break even on this cost. CPUIDLE
+ * provides us this duration in the "target_residency" field. So all that we
+ * need is a good prediction of how long we'll be idle. Like the traditional
+ * menu governor, we start with the actual known "next timer event" time.
+ *
+ * Since there are other sources of wakeups (interrupts for example) than
+ * the next timer event, this estimation is rather optimistic. To get a
+ * more realistic estimate, a correction factor is applied to the estimate,
+ * that is based on historic behavior. For example, if in the past the actual
+ * duration always was 50% of the next timer tick, the correction factor will
+ * be 0.5.
+ *
+ * menu uses a running average for this correction factor, however it uses a
+ * set of factors, not just a single factor. This stems from the realization
+ * that the ratio is dependent on the order of magnitude of the expected
+ * duration; if we expect 500 milliseconds of idle time the likelihood of
+ * getting an interrupt very early is much higher than if we expect 50 micro
+ * seconds of idle time.
+ * For this reason we keep an array of 6 independent factors, that gets
+ * indexed based on the magnitude of the expected duration
+ *
+ * Limiting Performance Impact
+ * ---------------------------
+ * C states, especially those with large exit latencies, can have a real
+ * noticeable impact on workloads, which is not acceptable for most sysadmins,
+ * and in addition, less performance has a power price of its own.
+ *
+ * As a general rule of thumb, menu assumes that the following heuristic
+ * holds:
+ * The busier the system, the less impact of C states is acceptable
+ *
+ * This rule-of-thumb is implemented using the average interrupt interval:
+ * If the exit latency times the multiplier is longer than the average
+ * interrupt interval, the C state is not considered a candidate
+ * for selection due to a too high performance impact. So the smaller
+ * the average interrupt interval is, the smaller the C state latency
+ * should be, and thus the less likely a busy CPU will hit such a deep C state.
+ *
+ */
+
+struct perf_factor{
+ s_time_t time_stamp;
+ s_time_t duration;
+ unsigned int irq_count_stamp;
+ unsigned int irq_sum;
+};
struct menu_device
{
int last_state_idx;
unsigned int expected_us;
- unsigned int predicted_us;
- unsigned int current_predicted_us;
- unsigned int last_measured_us;
- unsigned int elapsed_us;
+ u64 predicted_us;
+ unsigned int measured_us;
+ unsigned int exit_us;
+ unsigned int bucket;
+ u64 correction_factor[BUCKETS];
+ struct perf_factor pf;
};
static DEFINE_PER_CPU(struct menu_device, menu_devices);
+
+static inline int which_bucket(unsigned int duration)
+{
+ int bucket = 0;
+
+ if (duration < 10)
+ return bucket;
+ if (duration < 100)
+ return bucket + 1;
+ if (duration < 1000)
+ return bucket + 2;
+ if (duration < 10000)
+ return bucket + 3;
+ if (duration < 100000)
+ return bucket + 4;
+ return bucket + 5;
+}
+
+/*
+ * Return the average interrupt interval to take I/O performance
+ * requirements into account. The smaller the average interrupt
+ * interval, the busier the I/O activity, and thus the higher
+ * the barrier to go to an expensive C state.
+ */
+
+/* 5 millisec sampling period */
+#define SAMPLING_PERIOD 5000000
+
+/* for I/O interrupts, we give an 8x multiplier compared to C state latency */
+#define IO_MULTIPLIER 8
+
+static inline s_time_t avg_intr_interval_us(void)
+{
+ struct menu_device *data = &__get_cpu_var(menu_devices);
+ s_time_t duration, now;
+ s_time_t avg_interval;
+ unsigned int irq_sum;
+
+ now = NOW();
+ duration = (data->pf.duration + (now - data->pf.time_stamp)
+ * (DECAY - 1)) / DECAY;
+
+ irq_sum = (data->pf.irq_sum + (this_cpu(irq_count) - data->pf.irq_count_stamp)
+ * (DECAY - 1)) / DECAY;
+
+ if (irq_sum == 0)
+ /* no irq recently, so return a big enough interval: 1 sec */
+ avg_interval = 1000000;
+ else
+ avg_interval = duration / irq_sum / 1000; /* in us */
+
+ if ( duration >= SAMPLING_PERIOD){
+ data->pf.time_stamp = now;
+ data->pf.duration = duration;
+ data->pf.irq_count_stamp = this_cpu(irq_count);
+ data->pf.irq_sum = irq_sum;
+ }
+
+ return avg_interval;
+}
static unsigned int get_sleep_length_us(void)
{
@@ -62,57 +186,86 @@ static int menu_select(struct acpi_proce
{
struct menu_device *data = &__get_cpu_var(menu_devices);
int i;
+ s_time_t io_interval;
- /* determine the expected residency time */
+ /* TBD: Change to 0 if C0(polling mode) support is added later*/
+ data->last_state_idx = CPUIDLE_DRIVER_STATE_START;
+ data->exit_us = 0;
+
+ /* determine the expected residency time, round up */
data->expected_us = get_sleep_length_us();
- /* Recalculate predicted_us based on prediction_history_pct */
- data->predicted_us *= PRED_HISTORY_PCT;
- data->predicted_us += (100 - PRED_HISTORY_PCT) *
- data->current_predicted_us;
- data->predicted_us /= 100;
+ data->bucket = which_bucket(data->expected_us);
+
+ io_interval = avg_intr_interval_us();
+
+ /*
+ * if the correction factor is 0 (eg first time init or cpu hotplug
+ * etc), we actually want to start out with a unity factor.
+ */
+ if (data->correction_factor[data->bucket] == 0)
+ data->correction_factor[data->bucket] = RESOLUTION * DECAY;
+
+ /* Make sure to round up for half microseconds */
+ data->predicted_us = DIV_ROUND(
+ data->expected_us * data->correction_factor[data->bucket],
+ RESOLUTION * DECAY);
/* find the deepest idle state that satisfies our constraints */
- for ( i = 2; i < power->count; i++ )
+ for ( i = CPUIDLE_DRIVER_STATE_START + 1; i < power->count; i++ )
{
struct acpi_processor_cx *s = &power->states[i];
- if ( s->target_residency > data->expected_us + s->latency )
+ if (s->target_residency > data->predicted_us)
break;
- if ( s->target_residency > data->predicted_us )
+ if (s->latency * IO_MULTIPLIER > io_interval)
break;
/* TBD: we need to check the QoS requirement in future */
+ data->exit_us = s->latency;
+ data->last_state_idx = i;
}
- data->last_state_idx = i - 1;
- return i - 1;
+ return data->last_state_idx;
}
static void menu_reflect(struct acpi_processor_power *power)
{
struct menu_device *data = &__get_cpu_var(menu_devices);
- struct acpi_processor_cx *target = &power->states[data->last_state_idx];
- unsigned int last_residency;
+ unsigned int last_idle_us = power->last_residency;
unsigned int measured_us;
+ u64 new_factor;
- last_residency = power->last_residency;
- measured_us = last_residency + data->elapsed_us;
+ measured_us = last_idle_us;
- /* if wrapping, set to max uint (-1) */
- measured_us = data->elapsed_us <= measured_us ? measured_us : -1;
+ /*
+ * We correct for the exit latency; we are assuming here that the
+ * exit latency happens after the event that we're interested in.
+ */
+ if (measured_us > data->exit_us)
+ measured_us -= data->exit_us;
- /* Predict time remaining until next break event */
- data->current_predicted_us = max(measured_us, data->last_measured_us);
+ /* update our correction ratio */
- /* Distinguish between expected & non-expected events */
- if ( last_residency + BREAK_FUZZ
- < data->expected_us + target->latency )
- {
- data->last_measured_us = measured_us;
- data->elapsed_us = 0;
- }
+ new_factor = data->correction_factor[data->bucket]
+ * (DECAY - 1) / DECAY;
+
+ if (data->expected_us > 0 && measured_us < MAX_INTERESTING)
+ new_factor += RESOLUTION * measured_us / data->expected_us;
else
- data->elapsed_us = measured_us;
+ /*
+ * we were idle so long that we count it as a perfect
+ * prediction
+ */
+ new_factor += RESOLUTION;
+
+ /*
+ * We don't want 0 as factor; we always want at least
+ * a tiny bit of estimated time.
+ */
+ if (new_factor == 0)
+ new_factor = 1;
+
+ data->correction_factor[data->bucket] = new_factor;
}
static int menu_enable_device(struct acpi_processor_power *power)
diff -r b928797213ac xen/arch/x86/hpet.c
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -211,6 +211,9 @@ static void hpet_interrupt_handler(int i
struct cpu_user_regs *regs)
{
struct hpet_event_channel *ch = (struct hpet_event_channel *)data;
+
+ this_cpu(irq_count)--;
+
if ( !ch->event_handler )
{
printk(XENLOG_WARNING "Spurious HPET timer interrupt on HPET timer %d\n", ch->idx);
@@ -692,6 +695,8 @@ int hpet_broadcast_is_available(void)
int hpet_legacy_irq_tick(void)
{
+ this_cpu(irq_count)--;
+
if ( !legacy_hpet_event.event_handler )
return 0;
legacy_hpet_event.event_handler(&legacy_hpet_event);
diff -r b928797213ac xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -517,6 +517,8 @@ void irq_set_affinity(int irq, cpumask_t
cpus_copy(desc->pending_mask, mask);
}
+DEFINE_PER_CPU(unsigned int, irq_count);
+
asmlinkage void do_IRQ(struct cpu_user_regs *regs)
{
struct irqaction *action;
@@ -527,6 +529,8 @@ asmlinkage void do_IRQ(struct cpu_user_r
struct cpu_user_regs *old_regs = set_irq_regs(regs);
perfc_incr(irqs);
+
+ this_cpu(irq_count)++;
if (irq < 0) {
ack_APIC_irq();
diff -r b928797213ac xen/include/asm-x86/irq.h
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -105,6 +105,8 @@ extern atomic_t irq_err_count;
extern atomic_t irq_err_count;
extern atomic_t irq_mis_count;
+DECLARE_PER_CPU(unsigned int, irq_count);
+
int pirq_shared(struct domain *d , int irq);
int map_domain_pirq(struct domain *d, int pirq, int irq, int type,
diff -r b928797213ac xen/include/xen/cpuidle.h
--- a/xen/include/xen/cpuidle.h
+++ b/xen/include/xen/cpuidle.h
@@ -86,4 +86,6 @@ extern struct cpuidle_governor *cpuidle_
extern struct cpuidle_governor *cpuidle_current_governor;
void cpuidle_disable_deep_cstate(void);
+#define CPUIDLE_DRIVER_STATE_START 1
+
#endif /* _XEN_CPUIDLE_H */
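For completeness, a minimal sketch of how the v2 selection rule plays
out across load levels; the exit latencies and intervals below are
hypothetical, and the target-residency check from menu_select() is
omitted to isolate the I/O term.

#include <stdio.h>

#define IO_MULTIPLIER 8

int main(void)
{
    /* hypothetical exit latencies for C1..C3 */
    unsigned int latency_us[] = { 3, 20, 100 };
    /* busy, moderate, and idle interrupt intervals */
    long long io_interval_us[] = { 98, 300, 1000000 };
    int i, j;

    for (j = 0; j < 3; j++) {
        int deepest = 0; /* 0 = nothing passed, stay at the shallowest */
        for (i = 0; i < 3; i++) {
            if ((long long)latency_us[i] * IO_MULTIPLIER > io_interval_us[j])
                break;
            deepest = i + 1;
        }
        printf("io_interval=%7lld us -> deepest allowed: C%d\n",
               io_interval_us[j], deepest);
    }
    return 0;
}

A CPU seeing an interrupt every ~100 us is pinned to C1, one seeing
~300 us intervals may reach C2, and a quiet CPU gets C3. This is also
why the hpet handlers decrement irq_count: the broadcast ticks would
otherwise make every deep-sleeping CPU look I/O busy.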