linux-pm.vger.kernel.org archive mirror
* [PATCH v2 0/3] Sapphire Rapids C0.x idle states support
@ 2023-03-10 12:21 Artem Bityutskiy
  2023-03-10 12:21 ` [PATCH v2 1/3] x86/mwait: Add support for idle via umwait Artem Bityutskiy
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Artem Bityutskiy @ 2023-03-10 12:21 UTC (permalink / raw)
  To: x86, Linux PM Mailing List; +Cc: Artem Bityutskiy

From: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>

This patch-set enables C0.2 idle state support on Sapphire Rapids Xeon
(hereafter SPR). The new idle state is added between the POLL and C1 states.

This patch-set is based on the "linux-next" branch of the "linux-pm" tree
(base commit ID at the end of this letter).

Changelog
---------

* v2
  - Do not mix 'raw_local_irq_enable()' and 'local_irq_disable()'. I could not
    verify this directly, but I believe it addresses the '.noinstr.text' warning.
  - Minor kerneldoc comment fix.

Motivation
----------

According to my measurements, the SPR C0.2 idle state is 5-20% more
energy-efficient than the Linux POLL state, and its latency is close to POLL
latency.

When a CPU rests in C0.2, the power it saves increases the power budget of the
busy CPUs. This way, C0.2 may improve turbo boost of the busy CPUs and thus
improve performance.

Latency and power impact
------------------------

I compared POLL to C0.2 using the 'wult' tool (https://github.com/intel/wult).
Wult measures C-state exit latency and several other metrics, of which
socket-level AC power and RAPL CPU package power are the most interesting in
this case.

I collected 1,000,000 datapoints with wult and measured 4 configurations:
1. POLL + LFM
2. POLL + HFM
3. C0.2 + LFM
4. C0.2 + HFM

* In "POLL" experiments, all C-states except for POLL were disabled.
* In "C0.2" experiments, all C-states except for C0.2 were disabled.
* In "LFM" experiments, frequency of all CPUs was locked to 800MHz (low frequency
  mode on my SPR system).
* In "HFM" experiments, frequency of all CPUs was locked to 2GHz (high
  frequency mode on my SPR system).

Here are the measurement results. The numbers are the percent change from
POLL to C0.2. The formula was:

% change of a value = [(C0.2 value - POLL value) / POLL value] * 100%
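
For example (with illustrative numbers, not actual measurements): if the POLL
median IR were 500ns and the C0.2 median IR were 610ns, the change would be
[(610 - 500) / 500] * 100% = +22%. Positive latency numbers thus mean C0.2 is
slower than POLL; negative power numbers mean C0.2 consumes less power.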

----|-----------|-----------|----------|-----------
    | Median IR | 99th % IR | AC Power | RAPL power
----|-----------|-----------|----------|-----------
HFM | 22%       | 11%       | -13%     | -19%
LFM | 23%       | 6%        | -5%      | -18%
----|-----------|-----------|----------|-----------

* IR stands for interrupt latency. The table gives the median and the 99th
  percentile. Wult measures IR as the TSC value read in the interrupt handler
  minus the TSC value at the moment the interrupt fired.
* AC power is socket-level AC power, measured with a Yokogawa WT310 power
  meter.
* RAPL power is the CPU package power, measured using the 'turbostat' tool.

Conclusion: C0.2 has higher latency compared to POLL, but it comes with
substantial energy savings.

Cyclictest measurements
-----------------------

I used 'cyclictest' to measure average timer latency for timers armed 100us and
1.5ms ahead.

The executed commands:
# Arming timers 100us ahead.
cyclictest --default-system --nsecs -a 1 --loops 10000000 --distance 100
# Arming timers 1.5ms ahead.
cyclictest --default-system --nsecs -a 1 --loops 10000000 --distance 15000

The numbers below are percent change for the average timer latency.

----|-----------|-------
    | 100us     | 1.5ms
----|-----------|-------
HFM | 0.1%      | -0.1%
LFM | -0.2%     | -0.2%

Conclusion: C0.2 has a very small latency impact, as per 'cyclictest'.

On wult vs cyclictest latency data:
- Latency measured with 'wult' is in the range of [0.2, 1] microseconds,
  depending on the configuration (CPU frequency, etc). The number includes HW
  latency and some amount of SW overhead.
- Latency measured with 'cyclictest' is in the tens of microseconds, and it is
  dominated by the software overhead: the time to run the HW interrupt
  handler, exit idle, switch the context, reach the user-space handler, and
  so on.

This explains the significant difference in latency percent change between
'wult' and 'cyclictest'.

Hackbench measurements
----------------------

I ran the 'hackbench' benchmark using the following commands:

# 4 groups, 200 tasks.
hackbench -s 128 -l 100000000 -g4 -f 25 -P
# 8 groups, 400 tasks.
hackbench -s 128 -l 100000000 -g8 -f 25 -P

My system has 224 CPUs, so the first command did not use all of the CPUs,
while the second one did. However, in both cases CPU power reached TDP.

I did not lock the CPU frequency, so all frequencies were allowed on all CPUs.
I tested the following 2 configurations for both the 4 and 8 group runs (4
experiments in total):
- POLL: only POLL state was enabled, all other C-states were disabled.
- POLL+C0.2: only POLL and C0.2 were enabled, all other C-states were disabled.

I ran hackbench 3 times and compared the hackbench "score" (which is basically
the execution time) between the POLL and POLL+C0.2 runs.

The table below contains the hackbench score % change for the 4-group runs.
C0.2 residency in these runs was about 20%.

------|--------------
Run # | Time change
------|--------------
1     |  -6.5%
2     |  -12.3%
3     |  -7.1%

The table below contains the hackbench score % change for the 8-group runs.
C0.2 residency in these runs was about 10%.

------|---------------
Run # | Time change
------|---------------
1     |  -1.6%
2     |  -0.6%
3     |  -0.9%

Conclusion: even with small C0.2 residency (~10%), hackbench shows some
performance improvement. With larger C0.2 residency the improvement is more
pronounced.

Q&A
---

1. Can C0.2 be disabled?

C0.2 can be disabled via sysfs or with the following kernel boot option:

  intel_idle.states_off=2
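
The sysfs method uses the per-state 'disable' file. Assuming C0.2 is
enumerated as state1 (right after POLL, which is state0), disabling it on
CPU0 looks like this (and similarly for every other CPU):

  echo 1 > /sys/devices/system/cpu/cpu0/cpuidle/state1/disable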

2. Why C0.2 and not C0.1?

I measured both C0.1 and C0.2 and did not notice a clear latency advantage for
C0.1, but I did notice that C0.2 saves more power.

But if users want to try using C0.1 instead of C0.2, they can do this:

echo 0 > /sys/devices/system/cpu/umwait_control/enable_c02

This will make sure that C0.2 requests from 'intel_idle' are automatically
converted to C0.1 requests.
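
The current setting can be verified by reading the same file back:

echo 0/1 means C0.2 is disabled/enabled:
cat /sys/devices/system/cpu/umwait_control/enable_c02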

3. How did you verify that the system enters C0.2?

I used 'perf' to read the corresponding PMU counters:

perf stat -e CPU_CLK_UNHALTED.C01,CPU_CLK_UNHALTED.C02,cycles -a sleep 1
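
The C0.2 residency can then be estimated as CPU_CLK_UNHALTED.C02 divided by
cycles: both count unhalted core cycles, so the ratio is roughly the fraction
of unhalted cycles spent in C0.2.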

Artem Bityutskiy (3):
  x86/mwait: Add support for idle via umwait
  x86/umwait: Increase tpause and umwait quanta
  intel_idle: add C0.2 state for Sapphire Rapids Xeon

 arch/x86/include/asm/mwait.h | 63 ++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/umwait.c |  4 +--
 drivers/idle/intel_idle.c    | 58 ++++++++++++++++++++++++++++++++-
 3 files changed, 122 insertions(+), 3 deletions(-)


base-commit: 06401c1b98b0d0ba33789b770c3c0083deaa652f

-- 
2.38.1

* [PATCH v2 1/3] x86/mwait: Add support for idle via umwait
  2023-03-10 12:21 [PATCH v2 0/3] Sapphire Rapids C0.x idle states support Artem Bityutskiy
@ 2023-03-10 12:21 ` Artem Bityutskiy
  2023-03-10 12:21 ` [PATCH v2 2/3] x86/umwait: Increase tpause and umwait quanta Artem Bityutskiy
  2023-03-10 12:21 ` [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon Artem Bityutskiy
  2 siblings, 0 replies; 11+ messages in thread
From: Artem Bityutskiy @ 2023-03-10 12:21 UTC (permalink / raw)
  To: x86, Linux PM Mailing List; +Cc: Artem Bityutskiy

From: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>

On Intel platforms, C-states are requested using the 'monitor'/'mwait'
instruction pair, as implemented in 'mwait_idle_with_hints()'. This
mechanism allows entering C1 and deeper C-states.

Sapphire Rapids Xeon supports new idle states - C0.1 and C0.2 (hereafter
C0.x). These idle states have lower latency compared to C1, and can be
requested with either the 'tpause' or the 'umwait' instruction.

Linux already uses the 'tpause' instruction in delay functions like
'udelay()'. This patch adds support for the 'umwait' and 'umonitor'
instructions.

The 'umwait' and 'tpause' instructions are very similar - both send the CPU to
C0.x and have the same break-out rules. But unlike 'tpause', 'umwait' works
together with 'umonitor' and exits C0.x when the monitored memory address is
modified (the same idea as with 'monitor/mwait').

This patch implements the 'umwait_idle()' function, which works very
similarly to existing 'mwait_idle_with_hints()', but requests C0.x. The
intention is to use it from the 'intel_idle' driver.

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
---
 arch/x86/include/asm/mwait.h | 63 ++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 778df05f8539..a8612de3212a 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -141,4 +141,67 @@ static inline void __tpause(u32 ecx, u32 edx, u32 eax)
 	#endif
 }
 
+#ifdef CONFIG_X86_64
+/*
+ * Monitor a memory address at 'rcx' using the 'umonitor' instruction.
+ */
+static inline void __umonitor(const void *rcx)
+{
+	/* "umonitor %rcx" */
+#ifdef CONFIG_AS_TPAUSE
+	asm volatile("umonitor %%rcx\n"
+		     :
+		     : "c"(rcx));
+#else
+	asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf1\t\n"
+		     :
+		     : "c"(rcx));
+#endif
+}
+
+/*
+ * Same as '__tpause()', but uses the 'umwait' instruction. It is very
+ * similar to 'tpause', but also breaks out if the data at the address
+ * monitored with 'umonitor' is modified.
+ */
+static inline void __umwait(u32 ecx, u32 edx, u32 eax)
+{
+	/* "umwait %ecx, %edx, %eax;" */
+#ifdef CONFIG_AS_TPAUSE
+	asm volatile("umwait %%ecx\n"
+		     :
+		     : "c"(ecx), "d"(edx), "a"(eax));
+#else
+	asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf1\t\n"
+		     :
+		     : "c"(ecx), "d"(edx), "a"(eax));
+#endif
+}
+
+/*
+ * Enter C0.1 or C0.2 and stay there until an event happens (an interrupt or
+ * 'need_resched()' becoming true), or the deadline is reached. The deadline
+ * is the absolute TSC value at which to exit the idle state. However, if the
+ * deadline exceeds the global limit in the IA32_UMWAIT_CONTROL register, the
+ * global limit prevails, and the idle state is exited earlier.
+ */
+static inline void umwait_idle(u64 deadline, u32 state)
+{
+	if (!current_set_polling_and_test()) {
+		u32 eax, edx;
+
+		eax = lower_32_bits(deadline);
+		edx = upper_32_bits(deadline);
+
+		__umonitor(&current_thread_info()->flags);
+		if (!need_resched())
+			__umwait(state, edx, eax);
+	}
+	current_clr_polling();
+}
+#else
+#define umwait_idle(deadline, state) \
+		WARN_ONCE(1, "umwait CPU instruction is not supported")
+#endif /* CONFIG_X86_64 */
+
 #endif /* _ASM_X86_MWAIT_H */
-- 
2.38.1


* [PATCH v2 2/3] x86/umwait: Increase tpause and umwait quanta
  2023-03-10 12:21 [PATCH v2 0/3] Sapphire Rapids C0.x idle states support Artem Bityutskiy
  2023-03-10 12:21 ` [PATCH v2 1/3] x86/mwait: Add support for idle via umwait Artem Bityutskiy
@ 2023-03-10 12:21 ` Artem Bityutskiy
  2023-03-10 12:21 ` [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon Artem Bityutskiy
  2 siblings, 0 replies; 11+ messages in thread
From: Artem Bityutskiy @ 2023-03-10 12:21 UTC (permalink / raw)
  To: x86, Linux PM Mailing List; +Cc: Artem Bityutskiy

From: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>

== Background ==

The 'umwait' and 'tpause' instructions both put the CPU in a low power
state while they wait.  The amount of time they wait is influenced by
an explicit deadline value passed in registers and an implicit value
written to a shared MSR (MSR_IA32_UMWAIT_CONTROL).

Existing 'tpause' users (udelay()) can tolerate a wide range of
MSR_IA32_UMWAIT_CONTROL MSR values.  The explicit deadline trumps the
MSR for short delays.  Longer delays will see extra wakeups, but no
functional issues.

== Problem ==

Extra wakeups mean extra power.  That translates into worse idle power
when 'umwait' gets used for idle.

== Solution ==

Increase MSR_IA32_UMWAIT_CONTROL by a factor of 100 to decrease idle power
when using 'umwait'.  This makes 'tpause' rely on its explicit deadline more
often, which reduces the number of wakeups and saves power during long delays.
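
For scale (assuming a 2 GHz TSC, i.e. one TSC-quantum is roughly 0.5 ns): the
old limit of 100,000 TSC-quanta is about 50 microseconds, while the new limit
of 10,000,000 TSC-quanta is about 5 milliseconds.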

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
---
 arch/x86/kernel/cpu/umwait.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/umwait.c b/arch/x86/kernel/cpu/umwait.c
index ec8064c0ae03..17c23173da0f 100644
--- a/arch/x86/kernel/cpu/umwait.c
+++ b/arch/x86/kernel/cpu/umwait.c
@@ -14,9 +14,9 @@
 
 /*
  * Cache IA32_UMWAIT_CONTROL MSR. This is a systemwide control. By default,
- * umwait max time is 100000 in TSC-quanta and C0.2 is enabled
+ * umwait max time is 10,000,000 in TSC-quanta and C0.2 is enabled.
  */
-static u32 umwait_control_cached = UMWAIT_CTRL_VAL(100000, UMWAIT_C02_ENABLE);
+static u32 umwait_control_cached = UMWAIT_CTRL_VAL(10000000, UMWAIT_C02_ENABLE);
 
 /*
  * Cache the original IA32_UMWAIT_CONTROL MSR value which is configured by
-- 
2.38.1


* [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon
  2023-03-10 12:21 [PATCH v2 0/3] Sapphire Rapids C0.x idle states support Artem Bityutskiy
  2023-03-10 12:21 ` [PATCH v2 1/3] x86/mwait: Add support for idle via umwait Artem Bityutskiy
  2023-03-10 12:21 ` [PATCH v2 2/3] x86/umwait: Increase tpause and umwait quanta Artem Bityutskiy
@ 2023-03-10 12:21 ` Artem Bityutskiy
  2023-03-20 14:50   ` Peter Zijlstra
  2 siblings, 1 reply; 11+ messages in thread
From: Artem Bityutskiy @ 2023-03-10 12:21 UTC (permalink / raw)
  To: x86, Linux PM Mailing List; +Cc: Artem Bityutskiy

From: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>

Add Sapphire Rapids Xeon C0.2 state support. This state has lower exit
latency compared to C1, and saves energy compared to POLL (in the range of
5-20%).

This patch also improves performance (e.g., as measured by 'hackbench'),
because idle CPU power savings in C0.2 increase the busy CPU power budget and
therefore improve turbo boost of the busy CPU.

Suggested-by: Len Brown <len.brown@intel.com>
Suggested-by: Arjan Van De Ven <arjan.van.de.ven@intel.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
---
 drivers/idle/intel_idle.c | 58 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 57 insertions(+), 1 deletion(-)

diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
index 938c17f25d94..0d0e45de610e 100644
--- a/drivers/idle/intel_idle.c
+++ b/drivers/idle/intel_idle.c
@@ -51,11 +51,13 @@
 #include <linux/notifier.h>
 #include <linux/cpu.h>
 #include <linux/moduleparam.h>
+#include <linux/units.h>
 #include <asm/cpu_device_id.h>
 #include <asm/intel-family.h>
 #include <asm/nospec-branch.h>
 #include <asm/mwait.h>
 #include <asm/msr.h>
+#include <asm/tsc.h>
 #include <asm/fpu/api.h>
 
 #define INTEL_IDLE_VERSION "0.5.1"
@@ -73,6 +75,8 @@ static struct cpuidle_device __percpu *intel_idle_cpuidle_devices;
 
 static unsigned long auto_demotion_disable_flags;
 
+static u64 umwait_limit;
+
 static enum {
 	C1E_PROMOTION_PRESERVE,
 	C1E_PROMOTION_ENABLE,
@@ -225,6 +229,27 @@ static __cpuidle int intel_idle_s2idle(struct cpuidle_device *dev,
 	return 0;
 }
 
+/**
+ * intel_idle_umwait_irq - Request C0.x using the 'umwait' instruction.
+ * @dev: cpuidle device of the target CPU.
+ * @drv: cpuidle driver (assumed to point to intel_idle_driver).
+ * @index: Target idle state index.
+ *
+ * Request C0.1 or C0.2 using 'umwait' instruction with interrupts enabled.
+ */
+static __cpuidle int intel_idle_umwait_irq(struct cpuidle_device *dev,
+					   struct cpuidle_driver *drv,
+					   int index)
+{
+	u32 state = flg2MWAIT(drv->states[index].flags);
+
+	raw_local_irq_enable();
+	umwait_idle(rdtsc() + umwait_limit, state);
+	raw_local_irq_disable();
+
+	return index;
+}
+
 /*
  * States are indexed by the cstate number,
  * which is also the index into the MWAIT hint array.
@@ -968,6 +993,13 @@ static struct cpuidle_state adl_n_cstates[] __initdata = {
 };
 
 static struct cpuidle_state spr_cstates[] __initdata = {
+	{
+		.name = "C0.2",
+		.desc = "UMWAIT C0.2",
+		.flags = MWAIT2flg(TPAUSE_C02_STATE) | CPUIDLE_FLAG_IRQ_ENABLE,
+		.exit_latency_ns = 100,
+		.target_residency_ns = 100,
+		.enter = &intel_idle_umwait_irq, },
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -1894,7 +1926,8 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
 		/* Structure copy. */
 		drv->states[drv->state_count] = cpuidle_state_table[cstate];
 
-		if ((cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IRQ_ENABLE) || force_irq_on) {
+		if (cpuidle_state_table[cstate].enter == intel_idle &&
+		    ((cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IRQ_ENABLE) || force_irq_on)) {
 			printk("intel_idle: forced intel_idle_irq for state %d\n", cstate);
 			drv->states[drv->state_count].enter = intel_idle_irq;
 		}
@@ -1926,6 +1959,28 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
 	}
 }
 
+/**
+ * umwait_limit_init - initialize time limit value for 'umwait'.
+ *
+ * C0.1 and C0.2 (later C0.x) idle states are requested via the 'umwait'
+ * instruction. The 'umwait' instruction requires the "deadline" - the TSC
+ * counter value to break out of C0.x (unless it broke out because of an
+ * interrupt or some other event).
+ *
+ * The deadline is specified as an absolute TSC value, and it is calculated as
+ * current TSC value + 'umwait_limit'. This function initializes the
+ * 'umwait_limit' variable to count of cycles per tick. The motivation is:
+ *   * the tick is not disabled for shallow states like C0.x, so idle will
+ *     not last longer than a tick anyway
+ *   * limit idle time to give cpuidle a chance to re-evaluate its C-state
+ *     selection decision and possibly select a deeper C-state.
+ */
+static void __init umwait_limit_init(void)
+{
+	umwait_limit = (u64)TICK_NSEC * tsc_khz;
+	do_div(umwait_limit, MICRO);
+}
+
 /**
  * intel_idle_cpuidle_driver_init - Create the list of available idle states.
  * @drv: cpuidle driver structure to initialize.
@@ -1933,6 +1988,7 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
 static void __init intel_idle_cpuidle_driver_init(struct cpuidle_driver *drv)
 {
 	cpuidle_poll_state_init(drv);
+	umwait_limit_init();
 
 	if (disabled_states_mask & BIT(0))
 		drv->states[0].flags |= CPUIDLE_FLAG_OFF;
-- 
2.38.1


* Re: [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon
  2023-03-10 12:21 ` [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon Artem Bityutskiy
@ 2023-03-20 14:50   ` Peter Zijlstra
  2023-03-20 18:27     ` Andy Lutomirski
  0 siblings, 1 reply; 11+ messages in thread
From: Peter Zijlstra @ 2023-03-20 14:50 UTC (permalink / raw)
  To: Artem Bityutskiy; +Cc: x86, Linux PM Mailing List, Andy Lutomirski

On Fri, Mar 10, 2023 at 02:21:10PM +0200, Artem Bityutskiy wrote:
> +/**
> + * umwait_limit_init - initialize time limit value for 'umwait'.
> + *
> + * C0.1 and C0.2 (later C0.x) idle states are requested via the 'umwait'
> + * instruction. The 'umwait' instruction requires the "deadline" - the TSC
> + * counter value to break out of C0.x (unless it broke out because of an
> + * interrupt or some other event).
> + *
> + * The deadline is specified as an absolute TSC value, and it is calculated as
> + * current TSC value + 'umwait_limit'. This function initializes the
> + * 'umwait_limit' variable to count of cycles per tick. The motivation is:
> + *   * the tick is not disabled for shallow states like C0.x, so idle will
> + *     not last longer than a tick anyway
> + *   * limit idle time to give cpuidle a chance to re-evaluate its C-state
> + *     selection decision and possibly select a deeper C-state.
> + */
> +static void __init umwait_limit_init(void)
> +{
> +	umwait_limit = (u64)TICK_NSEC * tsc_khz;
> +	do_div(umwait_limit, MICRO);
> +}

Would it not make sense to put this limit in the MSR instead? By
randomly increasing the MSR limit you also change userspace behaviour vs
NOHZ_FULL.

That was part of the reason why Andy insisted on having the MSR low.

* Re: [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon
  2023-03-20 14:50   ` Peter Zijlstra
@ 2023-03-20 18:27     ` Andy Lutomirski
  2023-03-20 20:29       ` Andy Lutomirski
                         ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Andy Lutomirski @ 2023-03-20 18:27 UTC (permalink / raw)
  To: Peter Zijlstra (Intel), Artem Bityutskiy
  Cc: the arch/x86 maintainers, Linux PM Mailing List, luto

On Mon, Mar 20, 2023, at 7:50 AM, Peter Zijlstra wrote:
> On Fri, Mar 10, 2023 at 02:21:10PM +0200, Artem Bityutskiy wrote:
>> +/**
>> + * umwait_limit_init - initialize time limit value for 'umwait'.
>> + *
>> + * C0.1 and C0.2 (later C0.x) idle states are requested via the 'umwait'
>> + * instruction. The 'umwait' instruction requires the "deadline" - the TSC
>> + * counter value to break out of C0.x (unless it broke out because of an
>> + * interrupt or some other event).
>> + *
>> + * The deadline is specified as an absolute TSC value, and it is calculated as
>> + * current TSC value + 'umwait_limit'. This function initializes the
>> + * 'umwait_limit' variable to count of cycles per tick. The motivation is:
>> + *   * the tick is not disabled for shallow states like C0.x, so idle will
>> + *     not last longer than a tick anyway
>> + *   * limit idle time to give cpuidle a chance to re-evaluate its C-state
>> + *     selection decision and possibly select a deeper C-state.
>> + */
>> +static void __init umwait_limit_init(void)
>> +{
>> +	umwait_limit = (u64)TICK_NSEC * tsc_khz;
>> +	do_div(umwait_limit, MICRO);
>> +}
>
> Would it not make sense to put this limit in the MSR instead? By
> randomly increasing the MSR limit you also change userspace behaviour vs
> NOHZ_FULL.
>
> That was part of the reason why Andy insisted on having the MSR low.

This is all busted.

UMWAIT has a U for *user mode*.  We have the MSR set to a small value because USER waits are a big can of worms, and long user waits don't actually behave in any particularly intelligent manner unless that core is dedicated to just one user task, and no virt is involved, and the user code involved is extremely careful.

But now UMWAIT got extended in a way to make it useful for the kernel, but it's controlled by the same MSR.  And this is busted.  What we want is for CPL0 UMWAIT to ignore the MSR or use a different MSR (for virt, sigh, except that this whole mechanism is presumably still useless on virt).  Or for a different instruction to be used from the kernel, maybe spelled MWAIT.

Can we please get some hardware folks to stop randomly adding features and start thinking about the fact that real users involve a kernel, in virt and bare metal, and user code, generally running in a preemptive kernel, sometimes under virt, and to FIGURE OUT WHAT THESE FEATURES SHOULD DO IN THESE CONTEXTS!

In the mean time, I assume that this stuff is baked into a CPU coming soon to real users, and there is no correct way to program it.  But we could set a small limit and eat a small power penalty if C0.2 transitions are really that fast.

Also, this series needs to be tested on virt.  Because UMWAIT, if it works at all on virt, is going to have all manner of odd consequences due to the fact that the hypervisor hasn't the faintest clue what's going on because there's no feedback.  For all that UIPI is nasty and half-baked, at least it tries to notify the next privilege level up as to what's going on.  Explicit wakeups virtualize much better than cacheline monitors.

* Re: [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon
  2023-03-20 18:27     ` Andy Lutomirski
@ 2023-03-20 20:29       ` Andy Lutomirski
  2023-03-20 20:32       ` Andy Lutomirski
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 11+ messages in thread
From: Andy Lutomirski @ 2023-03-20 20:29 UTC (permalink / raw)
  To: Peter Zijlstra (Intel), Artem Bityutskiy
  Cc: the arch/x86 maintainers, Linux PM Mailing List, luto



On Mon, Mar 20, 2023, at 11:27 AM, Andy Lutomirski wrote:
> On Mon, Mar 20, 2023, at 7:50 AM, Peter Zijlstra wrote:
>> On Fri, Mar 10, 2023 at 02:21:10PM +0200, Artem Bityutskiy wrote:
>>> +/**
>>> + * umwait_limit_init - initialize time limit value for 'umwait'.
>>> + *
>>> + * C0.1 and C0.2 (later C0.x) idle states are requested via the 'umwait'
>>> + * instruction. The 'umwait' instruction requires the "deadline" - the TSC
>>> + * counter value to break out of C0.x (unless it broke out because of an
>>> + * interrupt or some other event).
>>> + *
>>> + * The deadline is specified as an absolute TSC value, and it is calculated as
>>> + * current TSC value + 'umwait_limit'. This function initializes the
>>> + * 'umwait_limit' variable to count of cycles per tick. The motivation is:
>>> + *   * the tick is not disabled for shallow states like C0.x so, so idle will
>>> + *     not last longer than a tick anyway
>>> + *   * limit idle time to give cpuidle a chance to re-evaluate its C-state
>>> + *     selection decision and possibly select a deeper C-state.
>>> + */
>>> +static void __init umwait_limit_init(void)
>>> +{
>>> +	umwait_limit = (u64)TICK_NSEC * tsc_khz;
>>> +	do_div(umwait_limit, MICRO);
>>> +}
>>
>> Would it not make sense to put this limit in the MSR instead? By
>> randomly increasing the MSR limit you also change userspace behaviour vs
>> NOHZ_FULL.
>>
>> That was part of the reason why Andy insisted on having the MSR low.
>
> This is all busted.
>
> UMWAIT has a U for *user mode*.  We have the MSR set to a small value 
> because USER waits are a big can of worms, and long user waits don't 
> actually behave in any particularly intelligent manner unless that core 
> is dedicated to just one user task, and no virt is involved, and the 
> user code involved is extremely careful.
>
> But now UMWAIT got extended in a way to make it useful for the kernel, 
> but it's controlled by the same MSR.  And this is busted.  What we want 
> is for CPL0 UMWAIT to ignore the MSR or use a different MSR (for virt, 
> sigh, except that this whole mechanism is presumably still useless on 
> virt).  Or for a different instruction to be used from the kernel, 
> maybe spelled MWAIT.
>
> Can we please get some hardware folks to stop randomly adding features 
> and start thinking about the fact that real users involve a kernel, in 
> virt and bare metal, and user code, generally running in a preemptive 
> kernel, sometimes under virt, and to FIGURE OUT WHAT THESE FEATURES 
> SHOULD DO IN THESE CONTEXTS!
>
> In the mean time, I assume that this stuff is baked into a CPU coming 
> soon to real users, and there is no correct way to program it.  But we 
> could set a small limit and eat a small power penalty if C0.2 
> transitions are really that fast.
>
> Also, this series needs to be tested on virt.  Because UMWAIT, if it 
> works at all on virt, is going to have all manner of odd consequences 
> due to the fact that the hypervisor hasn't the faintest clue what's 
> going on because there's no feedback.  For all that UIPI is nasty and 
> half-baked, at least it tries to notify the next privilege level up as 
> to what's going on.  Explicit wakeups virtualize much better than 
> cacheline monitors.

At the very least, we need to know whether increasing the UMWAIT limit has a real benefit.  Because UMWAIT is really just a fancy busy wait, and it will still work with the low limit.  What happens if we just drop that part of this patch?

* Re: [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon
  2023-03-20 18:27     ` Andy Lutomirski
  2023-03-20 20:29       ` Andy Lutomirski
@ 2023-03-20 20:32       ` Andy Lutomirski
  2023-03-22 10:18       ` Peter Zijlstra
  2023-03-29  7:32       ` Artem Bityutskiy
  3 siblings, 0 replies; 11+ messages in thread
From: Andy Lutomirski @ 2023-03-20 20:32 UTC (permalink / raw)
  To: Peter Zijlstra (Intel), Artem Bityutskiy
  Cc: the arch/x86 maintainers, Linux PM Mailing List, luto

On Mon, Mar 20, 2023, at 11:27 AM, Andy Lutomirski wrote:

> Also, this series needs to be tested on virt.  Because UMWAIT, if it 
> works at all on virt, is going to have all manner of odd consequences 
> due to the fact that the hypervisor hasn't the faintest clue what's 
> going on because there's no feedback.  For all that UIPI is nasty and 
> half-baked, at least it tries to notify the next privilege level up as 
> to what's going on.  Explicit wakeups virtualize much better than 
> cacheline monitors.

Sorry to keep replying to myself.  -ETOOLITTLESLEEP.

This needs more than testing on virt.  It needs explicit documentation and handling of virt so we don't end up using UMWAIT on virt and doing something utterly daft like busy-waiting instead of properly going to sleep and not noticing because few people are actually testing on virt on a CPU that has this ability right now.

(Also, there's a surprising ability to thoroughly break idle without anyone reporting it for an impressively long time.  The system still serves cute cat photos, so it doesn't end up on the dashboard!)

* Re: [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon
  2023-03-20 18:27     ` Andy Lutomirski
  2023-03-20 20:29       ` Andy Lutomirski
  2023-03-20 20:32       ` Andy Lutomirski
@ 2023-03-22 10:18       ` Peter Zijlstra
  2023-03-29  7:32       ` Artem Bityutskiy
  3 siblings, 0 replies; 11+ messages in thread
From: Peter Zijlstra @ 2023-03-22 10:18 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Artem Bityutskiy, the arch/x86 maintainers, Linux PM Mailing List, luto

On Mon, Mar 20, 2023 at 11:27:54AM -0700, Andy Lutomirski wrote:

> This is all busted.

Well, yes.

> UMWAIT has a U for *user mode*.  We have the MSR set to a small value
> because USER waits are a big can of worms, and long user waits don't
> actually behave in any particularly intelligent manner unless that
> core is dedicated to just one user task, and no virt is involved, and
> the user code involved is extremely careful.

Idem for virt. [U]MWAIT only really works for virt when the CPU is
dedicated to the one (vcpu) task.

> But now UMWAIT got extended in a way to make it useful for the kernel,
> but it's controlled by the same MSR.  And this is busted.  What we
> want is for CPL0 UMWAIT to ignore the MSR or use a different MSR (for
> virt, sigh, except that this whole mechanism is presumably still
> useless on virt).  Or for a different instruction to be used from the
> kernel, maybe spelled MWAIT.

Yes, CPL0 usage should not be subject to the same limit. I'm not sure if
there's a good argument to have a different limit on virt vs random
other userspace.

> Also, this series needs to be tested on virt.  Because UMWAIT, if it
> works at all on virt, is going to have all manner of odd consequences
> due to the fact that the hypervisor hasn't the faintest clue what's
> going on because there's no feedback.  For all that UIPI is nasty and
> half-baked, at least it tries to notify the next privilege level up as
> to what's going on.  Explicit wakeups virtualize much better than
> cacheline monitors.

Virt is supposedly a big use case for UMWAIT because VMMs
(rightfully) restrict MWAIT. At the same time, you *REALLY* do not want
your vcpu task doing long UMWAITs when there's other vcpu threads
waiting to do real work, so the MSR value *SHOULD* be really low.

Increasing this value randomly is bad; increasing it beyond 1 tick is
abysmal.

Unless there are dedicated vcpu:cpu mappings, in which case KVM should
already be exposing MWAIT anyway.

* Re: [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon
  2023-03-20 18:27     ` Andy Lutomirski
                         ` (2 preceding siblings ...)
  2023-03-22 10:18       ` Peter Zijlstra
@ 2023-03-29  7:32       ` Artem Bityutskiy
  2023-04-10 17:24         ` Andy Lutomirski
  3 siblings, 1 reply; 11+ messages in thread
From: Artem Bityutskiy @ 2023-03-29  7:32 UTC (permalink / raw)
  To: Andy Lutomirski, Peter Zijlstra (Intel)
  Cc: the arch/x86 maintainers, Linux PM Mailing List, luto

There is a lot of feedback. Let me summarize a bit.

1. C0.x time limit is controlled by MSR 0xE1 - IA32_UMWAIT_CONTROL[31:2]. This
limit applies to both CPL0 and CPL3. Your feedback is that this MSR should be
ignored in CPL0, or there should be a different MSR for CPL3.

Interesting point. I am discussing this with HW folks internally now and trying
to figure it out.


2. Peter Zijlstra asked earlier why the C0.x states are not available via
'mwait'.

Also good point. Similarly, I am now discussing this with HW engineers and
trying to figure it out.


3. What happens if you do not increase the global limit in
IA32_UMWAIT_CONTROL[31:2]? Maybe just drop that patch.

I am taking this approach now, and am measuring and benchmarking the system.


4. Test this in a virtual environment.

Will do.


5. Then there were several references to virtualization, and this is the part
of your feedback I did not understand. I admit I do not know much about
virtualization.

Below are a few questions. I apologize in advance if they are naive; please
bear with me.


Question #1.

Userspace applications can do many strange things. For example, just busy-loop
on a variable waiting for it to change.

Not social behavior. It may be a good idea for special apps with a dedicated
CPU, as you pointed out (e.g., DPDK or other latency-sensitive apps), but a
bad idea for a general app.

However, we can't control apps in general. If they want to busy-loop, they
will. If someone buys a VM, they may decide that they paid for it and can do
whatever they want.

Now take this sort of anti-social app. Replace the busy-loop with umwait or
tpause. You get the same result, but it saves energy. So it is an
optimization, a good thing.

What am I missing?

Maybe you implied that umwait should be designed in a way that the hypervisor
could take over the core when the app or guest kernel uses it. Then the
hypervisor could do something else meanwhile.

But I think it would increase the C0.x latency observed by umwait users. That
would make umwait less useful for those few apps that do have a good reason
for using umwait (like DPDK).


Question #2.

Why can't we just set this global IA32_UMWAIT_CONTROL[31:2] limit to 0, which
means "forever", "no limit"?

Any user of tpause or umwait must have a loop checking for the "end of wait"
condition. Inside this loop, there is a umwait or tpause (as an optimization
over plain busy-looping).

Both tpause and umwait break out on interrupts. The scheduler will preempt the
user task when its time-slice is over or there is something more important to
run. Then, when the task starts again, it will continue waiting in its loop,
doing tpause or umwait inside the loop, roughly as in the sketch below.
What is the problem I am missing?

Question #3:

You wrote:

> Also, this series needs to be tested on virt.  Because UMWAIT, if it works at
> all on virt, is going to have all manner of odd concequences due to the fact
> that the hypervisor hasn't the faintest clue what's going on because there's
> no feedback.

What feedback would the hypervisor need? And what would it do with it?

Is your expectation that the hypervisor will do something else on the CPU,
like run another VM, while the original VM is umwait'ing? If so, the umwait
latency will be way longer than sub-1us...

Thanks in advance!



* Re: [PATCH v2 3/3] intel_idle: add C0.2 state for Sapphire Rapids Xeon
  2023-03-29  7:32       ` Artem Bityutskiy
@ 2023-04-10 17:24         ` Andy Lutomirski
  0 siblings, 0 replies; 11+ messages in thread
From: Andy Lutomirski @ 2023-04-10 17:24 UTC (permalink / raw)
  To: Artem Bityutskiy, Peter Zijlstra (Intel)
  Cc: the arch/x86 maintainers, Linux PM Mailing List, luto



On Wed, Mar 29, 2023, at 12:32 AM, Artem Bityutskiy wrote:
> There is a lot of feedback. Let me summarize a bit.
>
> 1. C0.x time limit is controlled by MSR 0xE1 - IA32_UMWAIT_CONTROL[31:2]. This
> limit applies to both CPL0 and CPL3. Your feedback is that this MSR should be
> ignored in CPL0, or there should be a different MSR for CPL3.
>
> Interesting point. I am discussing this with HW folks internally now and trying
> to figure it out.
>
>
> 2. Peter Zijlstra asked earlier why the C0.x states are not available via 'mwait'.
>
> Also good point. Similarly, I am now discussing this with HW engineers and
> trying to figure it out.
>
>
> 3. What happens if you do not increase the global limit in
> IA32_UMWAIT_CONTROL[31:2]? Maybe just drop that patch.
>
> I am taking this approach now, and am measuring and benchmarking the system.
>
>
> 4. Test this in a virtual environment.
>
> Will do.
>
>
> 5. Then there were several references to virtualization, and this is the part of
> your feedback I did not understand. I admit I do not know much about
> virtualization.
>
> Below are a few questions. I apologize in advance if they are naive; please
> bear with me.
>
>
> Question #1.
>
> Userspace applications can do many strange things. For example, just busy-loop
> on a variable waiting for it to change.
>
> Not social behavior. It may be a good idea for special apps with a dedicated
> CPU, as you pointed out (e.g., DPDK or other latency-sensitive apps), but a
> bad idea for a general app.
>
> However, we can't control apps in general. If they want to busy-loop, they
> will. If someone buys a VM, they may decide that they paid for it and can do
> whatever they want.
>
> Now take this sort of anti-social app. Replace the busy-loop with umwait or
> tpause. You get the same result, but it saves energy. So it is an
> optimization, a good thing.
>
> What am I missing?

You're not missing anything extremely critical, but there are some less critical issues.

Dredging up an old email:

https://lore.kernel.org/all/CALCETrVJsCAWYSnUE+Ju_VmZfZBUBwUq-uFjV9=Vy+wddtJVCw@mail.gmail.com/

First, UMWAIT doesn't have a wakeup mechanism that can notify a supervisor (or hypervisor) of wakeups.  So if a task is UMWAITing and gets interrupted, the scheduler cannot tell when it should schedule it back in.  UIPI can do this (poorly), but UMWAIT doesn't even try.

So, with that caveat, UMWAIT is, as you are noting, a busy-loop that happens to be somewhat more power efficient than a plain loop or a bunch of REP NOPs.  And this is just fine for the top of the stack of supervisor things -- a kernel on bare metal, a hypervisor that is not itself nested, etc.  It's *also* fine for a well-designed assisted polling loop, paravirt style -- if a thread knows that the thread (vCPU, etc) that will wake it is running, then polling briefly may be a very good idea.  But it should be brief, and it should coordinate with a real wait/wake mechanism.

On top of all this, UMWAIT has the somewhat unfortunate effect that it sets CF according to whether the MSR deadline elapsed.  So if I do:

UMONITOR
UMWAIT (deadline = far future)

and the MSR is set to a large value, I can do this many, many times (possibly infinitely) without ever seeing CF=1.  But if the MSR is set to a low value, I'll get CF=1 fairly often.

So setting the MSR to a low value prevents anyone from deluding themselves that UMWAIT is anything other than a busy-wait that can be interrupted based on a TSC timeout or an interrupt or whatever on a very regular basis.
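
In code, the sequence above would look roughly like this (a sketch using the
WAITPKG intrinsics; CF comes back as the return value of _umwait):

	unsigned int word;	/* the monitored variable */
	unsigned char cf;

	_umonitor(&word);
	/* Far-future deadline: cf is 1 only when the wait was cut short
	   by the OS limit in IA32_UMWAIT_CONTROL. */
	cf = _umwait(0, __rdtsc() + (1ULL << 40));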

--Andy
