From: Julia Lawall <julia.lawall@inria.fr>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Julia Lawall <julia.lawall@inria.fr>,
Francisco Jerez <currojerez@riseup.net>,
Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>,
Len Brown <lenb@kernel.org>,
Viresh Kumar <viresh.kumar@linaro.org>,
Linux PM <linux-pm@vger.kernel.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>
Subject: Re: cpufreq: intel_pstate: map utilization into the pstate range
Date: Thu, 30 Dec 2021 19:44:06 +0100 (CET) [thread overview]
Message-ID: <alpine.DEB.2.22.394.2112301942360.15550@hadrien> (raw)
In-Reply-To: <CAJZ5v0haa5QWvTUUg+wwSHvuWyk8pic1N0kox=E1ZKNrHSFuzw@mail.gmail.com>
[-- Attachment #1: Type: text/plain, Size: 3647 bytes --]
On Thu, 30 Dec 2021, Rafael J. Wysocki wrote:
> On Thu, Dec 30, 2021 at 7:21 PM Julia Lawall <julia.lawall@inria.fr> wrote:
> >
> >
> >
> > On Thu, 30 Dec 2021, Rafael J. Wysocki wrote:
> >
> > > On Thu, Dec 30, 2021 at 6:54 PM Julia Lawall <julia.lawall@inria.fr> wrote:
> > > >
> > > > > > The effect is the same. But that approach is indeed simpler than patching
> > > > > > the kernel.
> > > > >
> > > > > It is also applicable when intel_pstate runs in the active mode.
> > > > >
> > > > > As for the results that you have reported, it looks like the package
> > > > > power on these systems is dominated by package voltage and going from
> > > > > P-state 20 to P-state 21 causes that voltage to increase significantly
> > > > > (the observed RAM energy usage pattern is consistent with that). This
> > > > > means that running at P-states above 20 is only really justified if
> > > > > there is a strict performance requirement that can't be met otherwise.
> > > > >
> > > > > Can you please check what value is there in the base_frequency sysfs
> > > > > attribute under cpuX/cpufreq/?
> > > >
> > > > 2100000, which should be pstate 21
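[Editorial sketch, not part of the original mail: the arithmetic behind "2100000 should be pstate 21", assuming the usual intel_pstate convention that one P-state step corresponds to 100 MHz (100000 kHz in cpufreq sysfs units).]

```shell
# Convert a cpufreq frequency in kHz to an Intel P-state number.
# Assumption: one P-state step = 100 MHz = 100000 kHz.
khz_to_pstate() {
    echo $(( $1 / 100000 ))
}

# On a live system the input would come from sysfs, e.g.:
#   base_khz=$(cat /sys/devices/system/cpu/cpu0/cpufreq/base_frequency)
khz_to_pstate 2100000    # prints 21, matching the reported base_frequency
```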
> > > >
> > > > >
> > > > > I'm guessing that the package voltage level for P-states 10 and 20 is
> > > > > the same, so the power difference between them is not significant
> > > > > relative to the difference between P-state 20 and 21 and if increasing
> > > > > the P-state causes some extra idle time to appear in the workload
> > > > > (even though there is not enough of it to prevent the overall
> > > > > utilization from increasing), then the overall power draw when running
> > > > > at P-state 10 may be greater than for P-state 20.
> > > >
> > > > My impression is that the package voltage level for P-states 10 to 20 is
> > > > high enough that increasing the frequency has little impact. But the code
> > > > runs twice as fast, which reduces the execution time a lot, saving energy.
> > > >
> > > > My first experiment had only one running thread. I also tried running 32
> > > > spinning threads for 10 seconds, i.e. using up one package and leaving
> > > > the other idle. In this case, instead of staying around 600J for pstates
> > > > 10-20, the package energy rises from 743J to 946J. But there is still a
> > > > gap between 20 and 21, with 21 being 1392J.
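[Editorial sketch, not part of the original mail: one way the 32-spinning-threads workload described above could be reproduced. The CPU numbering is an assumption taken from the turbostat output attached below, where package 0 holds CPUs 0-15 and 32-47; requires util-linux taskset.]

```shell
# Pin one busy-loop process to each hardware thread of package 0 for
# 10 seconds, leaving package 1 idle.
for cpu in $(seq 0 15) $(seq 32 47); do
    taskset -c "$cpu" timeout 10 sh -c 'while :; do :; done' &
done
wait
```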
> > > >
> > > > > You can check if there is any C-state residency difference between
> > > > > these two cases by running the workload under turbostat in each of
> > > > > them.
> > > >
> > > > The C1 and C6 cases (CPU%c1 and CPU%c6) are about the same between 20 and
> > > > 21, whether with 1 thread or with 32 threads.
> > >
> > > I meant to compare P-state 10 and P-state 20.
> > >
> > > 20 and 21 are really close as far as the performance is concerned, so
> > > I wouldn't expect to see any significant C-state residency difference
> > > between them.
> >
> > There's also no difference between 10 and 20. This seems normal, because
> > the same cores are either fully used or fully idle in both cases. The
> > idle ones are almost always in C6.
>
> The turbostat output sent by you previously shows that the CPUs doing
> the work are only about 15-or-less percent busy, though, and you get
> quite a bit of C-state residency on them. I'm assuming that this is
> for 1 running thread.
>
> Can you please run the 32 spinning threads workload (i.e. on one
> package) and with P-state locked to 10 and then to 20 under turbostat
> and send me the turbostat output for both runs?
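[Editorial sketch, not part of the original mail: one way such a run might be set up. Assumptions: intel_pstate in passive mode so the cpufreq scaling_{min,max}_freq knobs are honored, a P-state step of 100 MHz, and a hypothetical ./spin_workload binary standing in for the spinning-threads program; requires root.]

```shell
PSTATE=10                          # lock to P-state 10; repeat with 20
FREQ_KHZ=$(( PSTATE * 100000 ))    # P-state number -> frequency in kHz

# Clamp every cpufreq policy to the chosen frequency.
for policy in /sys/devices/system/cpu/cpufreq/policy*; do
    echo "$FREQ_KHZ" > "$policy/scaling_min_freq"
    echo "$FREQ_KHZ" > "$policy/scaling_max_freq"
done

# Run the workload under turbostat and save the per-CPU counters.
turbostat --out "spin_pstate_${PSTATE}.turbo" ./spin_workload
```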
Attached.
Pstate 10: spin_minmax_10_dahu-9_5.15.0freq_schedutil_11.turbo
Pstate 20: spin_minmax_20_dahu-9_5.15.0freq_schedutil_11.turbo
julia
[-- Attachment #2: Type: text/plain, Size: 4843 bytes --]
10.041290 sec
Package Core CPU Avg_MHz Busy% Bzy_MHz TSC_MHz IRQ SMI POLL C1 C1E C6 POLL% C1% C1E% C6% CPU%c1 CPU%c6 CoreTmp PkgTmp Pkg%pc2 Pkg%pc6 Pkg_J RAM_J PKG_% RAM_%
- - - 16 1.57 1000 2101 4971 0 1 21 227 2512 0.00 0.00 0.02 98.69 1.86 96.56 36 36 49.15 0.25 580.08 318.91 0.00 0.00
0 0 0 2 0.18 1000 2095 105 0 0 0 0 355 0.00 0.00 0.00 99.84 1.90 97.92 32 36 0.29 0.25 300.17 153.07 0.00 0.00
0 0 32 0 0.02 1001 2095 22 0 0 0 1 27 0.00 0.00 0.01 99.98 2.06
0 1 4 0 0.01 1000 2095 22 0 0 0 2 25 0.00 0.00 0.01 99.98 0.12 99.87 34
0 1 36 0 0.01 1000 2095 23 0 0 0 1 25 0.00 0.00 0.01 99.98 0.12
0 2 8 0 0.02 1000 2095 47 0 0 0 2 48 0.00 0.00 0.01 99.97 0.25 99.73 34
0 2 40 0 0.01 1000 2095 27 0 0 0 1 23 0.00 0.00 0.01 99.99 0.26
0 3 12 0 0.01 1000 2095 24 0 0 0 1 25 0.00 0.00 0.00 99.99 0.13 99.86 33
0 3 44 0 0.01 1000 2095 21 0 0 0 1 21 0.00 0.00 0.00 99.99 0.13
0 4 14 0 0.01 1000 2095 23 0 0 0 0 26 0.00 0.00 0.00 99.99 0.13 99.85 35
0 4 46 0 0.01 1000 2095 21 0 0 0 0 22 0.00 0.00 0.00 99.99 0.14
0 5 10 991 99.31 1000 2095 2523 0 0 0 1 18 0.00 0.00 0.01 0.69 0.08 0.61 34
0 5 42 0 0.01 1000 2095 20 0 0 0 0 21 0.00 0.00 0.00 99.99 99.38
0 6 6 0 0.01 1000 2095 32 0 0 0 0 32 0.00 0.00 0.00 99.99 0.18 99.81 33
0 6 38 0 0.01 1000 2095 32 0 0 0 0 32 0.00 0.00 0.00 99.99 0.17
0 7 2 0 0.01 1000 2095 27 0 0 0 0 27 0.00 0.00 0.00 99.99 0.14 99.85 36
0 7 34 0 0.01 1000 2095 27 0 0 0 0 26 0.00 0.00 0.00 99.99 0.14
0 8 16 0 0.01 1000 2095 24 0 0 0 0 25 0.00 0.00 0.00 99.99 0.13 99.86 34
0 8 48 0 0.01 1000 2095 23 0 0 0 0 23 0.00 0.00 0.00 99.99 0.13
0 9 20 1 0.13 1000 2095 561 0 1 1 200 362 0.00 0.00 0.91 98.98 2.48 97.39 33
0 9 52 0 0.02 1001 2095 36 0 0 0 0 36 0.00 0.00 0.00 99.99 2.58
0 10 24 0 0.01 1000 2095 23 0 0 0 1 23 0.00 0.00 0.01 99.98 0.13 99.86 34
0 10 56 0 0.01 1000 2095 22 0 0 0 1 23 0.00 0.00 0.00 99.99 0.13
0 11 28 0 0.01 1000 2095 23 0 0 0 1 23 0.00 0.00 0.00 99.99 0.14 99.85 34
0 11 60 0 0.01 1000 2095 25 0 0 0 0 25 0.00 0.00 0.00 99.99 0.14
0 12 30 0 0.01 1000 2095 26 0 0 0 0 27 0.00 0.00 0.00 99.99 0.13 99.85 35
0 12 62 0 0.01 1000 2095 19 0 0 0 0 19 0.00 0.00 0.00 99.99 0.14
0 13 26 0 0.01 1000 2095 20 0 0 0 0 22 0.00 0.00 0.00 99.99 0.13 99.85 34
0 13 58 0 0.01 1000 2095 23 0 0 0 0 24 0.00 0.00 0.00 99.99 0.13
0 14 22 0 0.02 1000 2095 27 0 0 0 0 28 0.00 0.00 0.00 99.99 0.15 99.84 33
0 14 54 0 0.01 1000 2095 27 0 0 0 0 26 0.00 0.00 0.00 99.99 0.15
0 15 18 0 0.01 1000 2095 21 0 0 0 0 22 0.00 0.00 0.00 99.99 0.12 99.87 35
0 15 50 0 0.01 1000 2095 22 0 0 0 0 21 0.00 0.00 0.00 99.99 0.12
1 0 1 0 0.02 1000 2095 22 0 0 20 0 25 0.00 0.09 0.00 99.90 0.22 99.76 28 30 97.98 0.25 279.91 165.85 0.00 0.00
1 0 33 0 0.01 1000 2095 28 0 0 0 0 28 0.00 0.00 0.00 99.99 0.22
1 1 5 0 0.02 1000 2095 42 0 0 0 0 41 0.00 0.00 0.00 99.98 0.17 99.80 29
1 1 37 0 0.01 1000 2095 22 0 0 0 0 22 0.00 0.00 0.00 99.99 0.19
1 2 9 0 0.02 1000 2095 38 0 0 0 1 38 0.00 0.00 0.01 99.98 0.23 99.75 28
1 2 41 0 0.02 1000 2095 38 0 0 0 1 36 0.00 0.00 0.00 99.99 0.24
1 3 13 1 0.08 1000 2095 134 0 0 0 2 133 0.00 0.00 0.00 99.93 0.49 99.43 28
1 3 45 0 0.02 1001 2095 27 0 0 0 0 28 0.00 0.00 0.00 99.99 0.55
1 4 15 0 0.01 1000 2095 30 0 0 0 0 32 0.00 0.00 0.00 99.99 0.15 99.84 29
1 4 47 0 0.01 1000 2095 19 0 0 0 0 19 0.00 0.00 0.00 99.99 0.15
1 5 11 0 0.02 1000 2095 32 0 0 0 0 36 0.00 0.00 0.00 99.98 0.26 99.72 27
1 5 43 0 0.02 1000 2095 56 0 0 0 2 54 0.00 0.00 0.00 99.98 0.26
1 6 7 0 0.01 1000 2095 22 0 0 0 0 20 0.00 0.00 0.00 99.99 0.10 99.89 29
1 6 39 0 0.01 1000 2095 22 0 0 0 0 21 0.00 0.00 0.00 99.99 0.10
1 7 3 0 0.01 1000 2095 23 0 0 0 0 22 0.00 0.00 0.00 99.99 0.17 99.82 30
1 7 35 0 0.03 1000 2095 29 0 0 0 0 31 0.00 0.00 0.00 99.98 0.15
1 8 17 0 0.02 1000 2095 21 0 0 0 0 27 0.00 0.00 0.00 99.99 0.13 99.85 28
1 8 49 0 0.01 1000 2095 16 0 0 0 0 16 0.00 0.00 0.00 99.99 0.14
1 9 21 0 0.01 1000 2095 18 0 0 0 1 17 0.00 0.00 0.01 99.98 0.21 99.78 27
1 9 53 1 0.06 1000 2095 32 0 0 0 2 31 0.00 0.00 0.01 99.93 0.16
1 10 25 0 0.01 1000 2095 20 0 0 0 1 21 0.00 0.00 0.00 99.99 0.31 99.68 29
1 10 57 1 0.08 1000 2095 50 0 0 0 2 56 0.00 0.00 0.00 99.92 0.24
1 11 29 0 0.01 1000 2095 19 0 0 0 0 17 0.00 0.00 0.00 99.99 0.17 99.82 28
1 11 61 0 0.02 1000 2095 37 0 0 0 0 37 0.00 0.00 0.00 99.98 0.16
1 12 31 0 0.01 1000 2095 17 0 0 0 0 19 0.00 0.00 0.00 99.99 0.23 99.76 28
1 12 63 0 0.04 1000 2095 46 0 0 0 0 49 0.00 0.00 0.00 99.97 0.20
1 13 27 0 0.01 1000 2095 16 0 0 0 0 15 0.00 0.00 0.00 99.68 0.52 99.47 29
1 13 59 0 0.04 1000 2095 45 0 0 0 1 40 0.00 0.00 0.01 99.95 0.18
1 14 23 0 0.01 1000 2095 31 0 0 0 0 18 0.00 0.00 0.00 99.99 0.20 99.79 28
1 14 55 0 0.03 1000 2095 48 0 0 0 1 42 0.00 0.00 0.00 99.97 0.18
1 15 19 0 0.01 1000 2095 31 0 0 0 0 22 0.00 0.00 0.00 99.99 0.19 99.80 29
1 15 51 1 0.07 1000 2095 22 0 0 0 0 17 0.00 0.00 0.00 99.94 0.13
[-- Attachment #3: Type: text/plain, Size: 4832 bytes --]
10.041491 sec
Package Core CPU Avg_MHz Busy% Bzy_MHz TSC_MHz IRQ SMI POLL C1 C1E C6 POLL% C1% C1E% C6% CPU%c1 CPU%c6 CoreTmp PkgTmp Pkg%pc2 Pkg%pc6 Pkg_J RAM_J PKG_% RAM_%
- - - 16 1.57 1000 2097 4274 0 1 20 19 1863 0.00 0.01 0.00 98.49 1.80 96.63 38 38 49.14 0.07 579.96 320.76 0.00 0.00
0 0 0 2 0.18 1000 2095 97 0 0 0 0 332 0.00 0.00 0.00 99.84 1.80 98.02 34 38 0.37 0.13 300.26 154.74 0.00 0.00
0 0 32 0 0.02 1001 2095 17 0 0 0 0 19 0.00 0.00 0.00 99.99 1.96
0 1 4 0 0.02 1000 2095 39 0 0 0 0 39 0.00 0.00 0.00 99.98 0.20 99.78 36
0 1 36 0 0.01 1000 2095 20 0 0 0 0 19 0.00 0.00 0.00 99.99 0.21
0 2 8 0 0.02 1000 2095 32 0 0 0 0 36 0.00 0.00 0.00 99.98 0.23 99.74 35
0 2 40 0 0.01 1000 2095 27 0 0 0 0 27 0.00 0.00 0.00 99.99 0.24
0 3 12 0 0.01 1000 2095 22 0 0 0 0 24 0.00 0.00 0.00 99.99 0.16 99.83 34
0 3 44 0 0.01 1000 2095 18 0 0 0 0 18 0.00 0.00 0.00 99.99 0.16
0 4 14 0 0.01 1000 2095 23 0 0 0 0 24 0.00 0.00 0.00 99.99 0.13 99.86 36
0 4 46 0 0.01 1000 2095 24 0 0 0 0 23 0.00 0.00 0.00 99.99 0.13
0 5 10 991 99.35 1000 2095 2522 0 0 0 1 17 0.00 0.00 0.01 0.64 0.08 0.57 35
0 5 42 0 0.01 1000 2095 18 0 0 0 0 18 0.00 0.00 0.00 100.00 99.42
0 6 6 0 0.01 1000 2095 36 0 0 2 1 33 0.00 0.08 0.00 99.91 0.26 99.73 34
0 6 38 0 0.01 1000 2095 18 0 0 0 1 18 0.00 0.00 0.00 99.99 0.26
0 7 2 0 0.01 1000 2095 21 0 0 0 0 20 0.00 0.00 0.00 99.99 0.12 99.87 38
0 7 34 0 0.01 1000 2095 21 0 0 0 0 21 0.00 0.00 0.00 99.99 0.12
0 8 16 0 0.01 1000 2095 18 0 0 0 0 20 0.00 0.00 0.00 99.99 0.10 99.89 35
0 8 48 0 0.01 1000 2095 18 0 0 0 0 18 0.00 0.00 0.00 99.99 0.10
0 9 20 0 0.01 1000 2095 26 0 0 0 0 19 0.00 0.00 0.00 99.72 0.44 99.55 35
0 9 52 0 0.01 1000 2095 29 0 0 0 0 29 0.00 0.00 0.00 99.99 0.17
0 10 24 0 0.01 1000 2095 21 0 0 0 0 21 0.00 0.00 0.00 99.99 0.13 99.86 35
0 10 56 0 0.01 1000 2095 21 0 0 0 0 21 0.00 0.00 0.00 99.99 0.13
0 11 28 0 0.01 1000 2095 23 0 0 0 0 24 0.00 0.00 0.00 99.99 0.14 99.85 35
0 11 60 0 0.01 1000 2095 27 0 0 0 0 22 0.00 0.00 0.00 99.99 0.14
0 12 30 0 0.01 1000 2095 20 0 0 0 0 21 0.00 0.00 0.00 99.99 0.12 99.87 36
0 12 62 0 0.01 1000 2095 23 0 0 0 0 20 0.00 0.00 0.00 99.99 0.12
0 13 26 0 0.01 1000 2095 20 0 0 0 0 20 0.00 0.00 0.00 99.99 0.17 99.82 35
0 13 58 0 0.01 1000 2095 20 0 0 0 0 20 0.00 0.00 0.00 99.99 0.17
0 14 22 0 0.01 1000 2095 25 0 0 0 0 23 0.00 0.00 0.00 99.99 0.17 99.82 35
0 14 54 0 0.01 1001 2095 18 0 0 0 0 18 0.00 0.00 0.00 99.99 0.17
0 15 18 0 0.01 1000 2095 12 0 0 0 1 10 0.00 0.00 0.01 99.99 0.06 99.93 37
0 15 50 0 0.01 1000 2095 8 0 0 0 1 8 0.00 0.00 0.00 99.99 0.06
1 0 1 0 0.01 1000 2095 27 0 1 4 0 9 0.00 0.08 0.00 99.91 0.13 99.86 28 31 97.98 0.00 279.71 166.02 0.00 0.00
1 0 33 0 0.01 1000 2095 10 0 0 0 0 9 0.00 0.00 0.00 100.00 0.14
1 1 5 0 0.02 1000 2095 28 0 0 0 0 30 0.00 0.00 0.00 99.98 0.25 99.73 29
1 1 37 0 0.03 1000 2095 37 0 0 0 0 36 0.00 0.00 0.00 99.97 0.24
1 2 9 0 0.01 1000 2095 23 0 0 0 0 23 0.00 0.00 0.00 99.99 0.11 99.88 28
1 2 41 0 0.01 1000 2095 25 0 0 0 0 24 0.00 0.00 0.00 99.99 0.11
1 3 13 1 0.07 1000 2095 126 0 0 0 0 121 0.00 0.00 0.00 99.94 0.43 99.50 28
1 3 45 0 0.01 1001 2095 19 0 0 0 0 18 0.00 0.00 0.00 99.99 0.49
1 4 15 0 0.01 1000 2095 14 0 0 0 0 12 0.00 0.00 0.00 99.99 0.10 99.89 29
1 4 47 0 0.01 1000 2095 27 0 0 0 0 19 0.00 0.00 0.00 99.99 0.10
1 5 11 0 0.01 1000 2095 29 0 0 0 0 27 0.00 0.00 0.00 99.99 0.47 99.51 27
1 5 43 0 0.01 1000 2095 39 0 0 8 1 31 0.00 0.28 0.04 99.67 0.47
1 6 7 0 0.01 1000 2095 23 0 0 0 1 21 0.00 0.00 0.01 99.98 0.11 99.89 29
1 6 39 0 0.01 1000 2095 24 0 0 0 1 21 0.00 0.00 0.01 99.99 0.11
1 7 3 0 0.04 1000 2095 41 0 0 0 3 30 0.00 0.00 0.01 99.95 0.14 99.82 30
1 7 35 0 0.01 1000 2095 7 0 0 0 1 7 0.00 0.00 0.00 99.99 0.17
1 8 17 0 0.03 1000 2095 43 0 0 3 1 43 0.00 0.12 0.00 99.85 0.31 99.66 29
1 8 49 0 0.01 1000 2095 25 0 0 0 0 18 0.00 0.00 0.00 99.99 0.33
1 9 21 0 0.01 1000 2095 13 0 0 0 0 13 0.00 0.00 0.00 99.99 0.29 99.71 28
1 9 53 1 0.07 1000 2095 46 0 0 0 2 46 0.00 0.00 0.02 99.92 0.23
1 10 25 0 0.01 1000 2095 23 0 0 0 0 21 0.00 0.00 0.00 99.99 0.11 99.88 29
1 10 57 0 0.02 1000 2095 21 0 0 0 0 20 0.00 0.00 0.00 99.99 0.10
1 11 29 0 0.02 1000 2095 42 0 0 0 0 40 0.00 0.00 0.00 99.98 0.34 99.64 28
1 11 61 1 0.07 1000 2095 43 0 0 0 0 49 0.00 0.00 0.00 99.94 0.29
1 12 31 0 0.01 1000 2095 15 0 0 0 0 11 0.00 0.00 0.00 99.99 0.19 99.80 28
1 12 63 0 0.03 1000 2095 46 0 0 0 2 38 0.00 0.00 0.01 99.97 0.17
1 13 27 0 0.01 1000 2095 18 0 0 0 0 11 0.00 0.00 0.00 99.99 0.17 99.82 29
1 13 59 0 0.03 1000 2095 43 0 0 0 0 33 0.00 0.00 0.00 99.97 0.15
1 14 23 0 0.01 1000 2095 19 0 0 0 0 13 0.00 0.00 0.00 99.99 0.18 99.81 28
1 14 55 0 0.03 1000 2095 47 0 0 2 1 35 0.00 0.00 0.01 99.97 0.17
1 15 19 0 0.01 1000 2095 20 0 0 1 1 10 0.00 0.00 0.00 99.99 0.17 99.82 29
1 15 51 1 0.07 1000 2095 27 0 0 0 0 22 0.00 0.00 0.00 99.94 0.11
Thread overview: 53+ messages
2021-12-13 22:52 cpufreq: intel_pstate: map utilization into the pstate range Julia Lawall
2021-12-17 18:36 ` Rafael J. Wysocki
2021-12-17 19:32 ` Julia Lawall
2021-12-17 20:36 ` Francisco Jerez
2021-12-17 22:51 ` Julia Lawall
2021-12-18 0:04 ` Francisco Jerez
2021-12-18 6:12 ` Julia Lawall
2021-12-18 10:19 ` Francisco Jerez
2021-12-18 11:07 ` Julia Lawall
2021-12-18 22:12 ` Francisco Jerez
2021-12-19 6:42 ` Julia Lawall
2021-12-19 14:19 ` Rafael J. Wysocki
2021-12-19 14:30 ` Rafael J. Wysocki
2021-12-19 21:47 ` Julia Lawall
2021-12-19 22:10 ` Francisco Jerez
2021-12-19 22:41 ` Julia Lawall
2021-12-19 23:31 ` Francisco Jerez
2021-12-21 17:04 ` Rafael J. Wysocki
2021-12-21 23:56 ` Francisco Jerez
2021-12-22 14:54 ` Rafael J. Wysocki
2021-12-24 11:08 ` Julia Lawall
2021-12-28 16:58 ` Julia Lawall
2021-12-28 17:40 ` Rafael J. Wysocki
2021-12-28 17:46 ` Julia Lawall
2021-12-28 18:06 ` Rafael J. Wysocki
2021-12-28 18:16 ` Julia Lawall
2021-12-29 9:13 ` Julia Lawall
2021-12-30 17:03 ` Rafael J. Wysocki
2021-12-30 17:54 ` Julia Lawall
2021-12-30 17:58 ` Rafael J. Wysocki
2021-12-30 18:20 ` Julia Lawall
2021-12-30 18:37 ` Rafael J. Wysocki
2021-12-30 18:44 ` Julia Lawall [this message]
2022-01-03 15:50 ` Rafael J. Wysocki
2022-01-03 16:41 ` Julia Lawall
2022-01-03 18:23 ` Julia Lawall
2022-01-03 19:58 ` Rafael J. Wysocki
2022-01-03 20:51 ` Julia Lawall
2022-01-04 14:09 ` Rafael J. Wysocki
2022-01-04 15:49 ` Julia Lawall
2022-01-04 19:22 ` Rafael J. Wysocki
2022-01-05 20:19 ` Julia Lawall
2022-01-05 23:46 ` Francisco Jerez
2022-01-06 19:49 ` Julia Lawall
2022-01-06 20:28 ` Srinivas Pandruvada
2022-01-06 20:43 ` Julia Lawall
2022-01-06 21:55 ` srinivas pandruvada
2022-01-06 21:58 ` Julia Lawall
2022-01-05 0:38 ` Francisco Jerez
2021-12-19 14:14 ` Rafael J. Wysocki
2021-12-19 17:03 ` Julia Lawall
2021-12-19 22:30 ` Francisco Jerez
2021-12-21 18:10 ` Rafael J. Wysocki