* [PATCH v4 0/4] Sapphire Rapids C0.x idle states support
@ 2023-07-10  9:30 Artem Bityutskiy
  2023-07-10  9:30 ` [PATCH v4 1/4] x86/umwait: use 'IS_ENABLED()' Artem Bityutskiy
                   ` (5 more replies)
  0 siblings, 6 replies; 17+ messages in thread
From: Artem Bityutskiy @ 2023-07-10  9:30 UTC (permalink / raw)
  To: x86, Rafael J. Wysocki
  Cc: Linux PM Mailing List, Arjan van de Ven, Artem Bityutskiy,
	Thomas Gleixner

From: Artem Bityutskiy <>


Idle states reduce power consumption when a CPU has no work to do. The
shallowest CPU idle state is "POLL". It has the lowest wake-up latency, but
saves little power. The next idle state on Intel platforms is "C1". It has
higher latency, but saves more power than "POLL".

Sapphire Rapids Xeons add new C0.1 and C0.2 (collectively, C0.x) idle states which
conceptually sit between "POLL" and "C1". These provide a very attractive
midpoint: near-POLL wake-up latency and power consumption halfway between
"POLL" and "C1".

In other words, the expectation is that most latency-sensitive users will
prefer C0.x over POLL.

Enable C0.2 idle state support on Sapphire Rapids Xeons (hereafter SPR) by
adding it between POLL and C1.

Base commit

Based on the "linux-next" branch of "linux-pm" git tree.

base-commit: bd9bb08847da3b1eba2ea8cebf514d9287e7f4fb


* v4:
  - Address issues pointed out by Thomas Gleixner.
    . mwait.h: use 'IS_ENABLED()' instead of '#ifdef'.
    . mwait.h: use '__always_inline'.
    . mwait.h: use an inline stub instead of a macro for the "!CONFIG_X86_64" case.
    . mwait.h: use proper comments on '#endif' and '#else'.
    . mwait.h: tested with llvm/clang.
    . Use imperative form (removed "this patch").
  - intel_idle: rename 'intel_idle_hlt_irq_on()' for consistency.
* v3
  - Dropped the 'x86/umwait: Increase tpause and umwait quanta' patch, as
    suggested by Andy Lutomirski.
  - Followed Peter Zijlstra's suggestion and removed explicit 'umwait'
    deadline. Rely on the global implicit deadline instead.
  - Rebased on top of Arjan's patches.
  - C0.2 was tested in a VM by Arjan van de Ven.
  - Re-measured on 2S and 4S Sapphire Rapids Xeon.
* v2
  - Do not mix 'raw_local_irq_enable()' and 'local_irq_disable()'. I failed to
    directly verify it, but I believe it'll address the '.noinstr.text' warning.
  - Minor kerneldoc commentary fix.

C0.2 vs POLL latency and power

I compared POLL to C0.2 using the 'wult' tool, which measures idle state
latency.

* In "POLL" experiments, all C-states except for POLL were disabled.
* In "C0.2" experiments, all C-states except for POLL and C0.2 were disabled.

Here are the measurement results. The numbers are the percent change from POLL
to C0.2.

 Median IR | 99th % IR | AC Power | RAPL power
 24%       | 12%       | -13%     | -18%

* IR stands for interrupt latency. The table provides the median and 99th
  percentile. Wult measures it as the delay between the moment a timer
  interrupt fires to the moment the CPU reaches the interrupt handler.
* AC Power is the wall socket AC power.
* RAPL power is the CPU package power, measured using the 'turbostat' tool.

Hackbench measurements

I ran the 'hackbench' benchmark using the following commands:

# 4 groups, 200 threads
hackbench -s 128 -l 100000000 -g4 -f 25 -P
# 8 groups, 400 threads.
hackbench -s 128 -l 100000000 -g8 -f 25 -P

My SPR system has 224 CPUs, so the first command did not use all CPUs, the
second command used all of them. However, in both cases CPU power reached TDP.

I ran hackbench 5 times for every configuration and compared the hackbench
"score" values.

In the case of 4 groups, C0.2 improved the score by about 4%; in the case of 8
groups, by about 0.6%.


1. Can C0.2 be disabled?

C0.2 can be disabled via sysfs and with the following kernel boot option:


2. Why C0.2, not C0.1?

I measured both C0.1 and C0.2. I did not notice a clear latency advantage for
C0.1, but I did notice that C0.2 saves more power.

But if users want to try using C0.1 instead of C0.2, they can do this:

echo 0 > /sys/devices/system/cpu/umwait_control/enable_c02

This will make sure that C0.2 requests from 'intel_idle' are automatically
converted to C0.1 requests.

3. How did you verify that the system enters C0.2?

I used 'perf' to read the corresponding PMU counters:

perf stat -e CPU_CLK_UNHALTED.C01,CPU_CLK_UNHALTED.C02,cycles -a sleep 1

4. How to change the global 'umwait' deadline?

Via '/sys/devices/system/cpu/umwait_control/max_time'

Artem Bityutskiy (4):
  x86/umwait: use 'IS_ENABLED()'
  x86/mwait: Add support for idle via umwait
  intel_idle: rename 'intel_idle_hlt_irq_on()'
  intel_idle: add C0.2 state for Sapphire Rapids Xeon

 arch/x86/include/asm/mwait.h | 85 ++++++++++++++++++++++++++++++++----
 drivers/idle/intel_idle.c    | 52 +++++++++++++++++++---
 2 files changed, 123 insertions(+), 14 deletions(-)



end of thread, other threads:[~2023-09-13 12:55 UTC | newest]

Thread overview: 17+ messages
2023-07-10  9:30 [PATCH v4 0/4] Sapphire Rapids C0.x idle states support Artem Bityutskiy
2023-07-10  9:30 ` [PATCH v4 1/4] x86/umwait: use 'IS_ENABLED()' Artem Bityutskiy
2023-07-10  9:30 ` [PATCH v4 2/4] x86/mwait: Add support for idle via umwait Artem Bityutskiy
2023-07-10  9:30 ` [PATCH v4 3/4] intel_idle: rename 'intel_idle_hlt_irq_on()' Artem Bityutskiy
2023-07-14 15:34   ` Rafael J. Wysocki
2023-07-14 15:39     ` Arjan van de Ven
2023-07-14 18:11     ` Artem Bityutskiy
2023-07-14 21:01     ` Peter Zijlstra
2023-07-14 21:02       ` Arjan van de Ven
2023-07-14 21:12         ` Peter Zijlstra
2023-07-10  9:31 ` [PATCH v4 4/4] intel_idle: add C0.2 state for Sapphire Rapids Xeon Artem Bityutskiy
2023-07-20 18:35 ` [PATCH v4 0/4] Sapphire Rapids C0.x idle states support Rafael J. Wysocki
2023-08-28 16:43 ` Artem Bityutskiy
2023-09-13 11:37   ` Artem Bityutskiy
2023-09-13 12:34     ` Rafael J. Wysocki
2023-09-13 12:49       ` Artem Bityutskiy
2023-09-13 12:55         ` Rafael J. Wysocki
