From: Phil Auld <pauld@redhat.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ingo Molnar <mingo@kernel.org>,
	Mel Gorman <mgorman@techsingularity.net>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
	Quentin Perret <quentin.perret@arm.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Morten Rasmussen <Morten.Rasmussen@arm.com>,
	Hillf Danton <hdanton@sina.com>, Parth Shah <parth@linux.ibm.com>,
	Rik van Riel <riel@surriel.com>
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
Date: Fri, 25 Oct 2019 09:33:26 -0400
Message-ID: <20191025133325.GA2421@pauld.bos.csb>
In-Reply-To: <CAKfTPtB0VruWXq+wGgvNOMFJvvZQiZyi2AgBoJP3Uaeduu2Lqg@mail.gmail.com>


Hi Vincent,


On Thu, Oct 24, 2019 at 04:59:05PM +0200 Vincent Guittot wrote:
> On Thu, 24 Oct 2019 at 15:47, Phil Auld <pauld@redhat.com> wrote:
> >
> > On Thu, Oct 24, 2019 at 08:38:44AM -0400 Phil Auld wrote:
> > > Hi Vincent,
> > >
> > > On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> > > > On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <mingo@kernel.org> wrote:
> > > > >
> 
> [...]
> 
> > > > > A full run on Mel Gorman's magic scalability test-suite would be super
> > > > > useful ...
> > > > >
> > > > > Anyway, please be on the lookout for such performance regression reports.
> > > >
> > > > Yes I monitor the regressions on the mailing list
> > >
> > >
> > > Our kernel perf tests show good results across the board for v4.
> > >
> > > The issue we hit on the 8-node system is fixed. Thanks!
> > >
> > > As we didn't see the fairness issue I don't expect the results to be
> > > that different on v4a (with the followup patch) but those tests are
> > > queued up now and we'll see what they look like.
> > >
> >
> > Initial results with fix patch (v4a) show that the outlier issues on
> > the 8-node system have returned.  Median time for 152 and 156 threads
> > (160 cpu system) goes up significantly and worst case goes from 340
> > and 250 to 550 sec. for both. And doubles from 150 to 300 for 144
> 
> For v3, you had a x4 slow down IIRC.
> 

Sorry, that was a confusing change of data point :)

 
That 4x was the normal-versus-group result for v3, i.e. the usual
view of this test case's data.

The numbers above are the group-vs-group difference between
v4 and v4a.

The comparable data points are: for v4 there was no performance
difference between group and normal at 152 threads, and a 35%
drop-off from normal to group at 156.

With v4a there was a 100% drop (2x slowdown) from normal to group at 152
and close to that at 156 (~75-80% drop-off).

So, yes, not as severe as v3, but significantly off from v4.

> 
> > threads. These look more like the results from v3.
> 
> OK. For v3, we were not sure that your UC triggers the slow path but
> it seems that we have the confirmation now.
> The problem happens only for this  8 node 160 cores system, isn't it ?

Yes. It only shows up now on this 8-node system.

> 
> The fix favors the local group so your UC seems to prefer spreading
> tasks at wake up
> If you have any traces that you can share, this could help to
> understand what's going on. I will try to reproduce the problem on my
> system

I'm not actually sure the fix here is causing this. Looking at the data
more closely, I see similar imbalances on v4, v4a, and v3.

When you say slow versus fast wakeup paths, what do you mean? I'm still
learning my way around this code.

This particular test is specifically designed to highlight the imbalance
caused by the use of group-scheduler-defined load and averages. The threads
are mostly CPU bound but join up every time step, so if each thread
more or less gets its own CPU (we run with fewer threads than CPUs) they
all finish the timestep at about the same time. If threads are stuck
sharing CPUs then those finish later and the whole computation is slowed
down. In addition to the NAS benchmark threads there are 2 stress CPU
burners. These are either run in their own cgroups (thus having full "load")
or all in the same cgroup with the benchmark, thus all having tiny "loads".
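
To be concrete, the cgroup side of the setup is roughly this (a simplified
Python sketch assuming cgroup v1 with the cpu controller mounted at
/sys/fs/cgroup/cpu; the real harness and cgroup names differ):

import os

CG_ROOT = "/sys/fs/cgroup/cpu"   # assumption: cgroup v1 cpu controller

def make_cgroup(name):
    path = os.path.join(CG_ROOT, name)
    os.makedirs(path, exist_ok=True)
    return path

def move_pid(cg_path, pid):
    # classic cgroup v1 interface: write the pid into the tasks file
    with open(os.path.join(cg_path, "tasks"), "w") as f:
        f.write(str(pid))

def setup(mode, bench_pids, burner_pids):
    if mode == "GROUP":
        # benchmark in its own cgroup, each burner alone in its own,
        # so each burner group carries a full "load"
        bench_cg = make_cgroup("bench")
        for pid in bench_pids:
            move_pid(bench_cg, pid)
        for i, pid in enumerate(burner_pids):
            move_pid(make_cgroup("burner%d" % i), pid)
    else:
        # NORMAL: everything shares one cgroup, so per-task loads are tiny
        cg = make_cgroup("all")
        for pid in bench_pids + burner_pids:
            move_pid(cg, pid)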

On this system there are 20 CPUs per node. We track the average number of
benchmark threads running in each node. For a balanced case no node should
be much over 20, and indeed in the normal case (everyone in one cgroup)
we see pretty nice balance. In the cgroup case we are still seeing numbers
much higher than 20.
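
Roughly how those per-node averages get produced (an illustrative Python
sketch, not the real post-processing, and it assumes nodes are blocks of
20 consecutive CPUs, which matches this box):

import glob, time

CPUS_PER_NODE = 20
NODES = 8

def sample(pids):
    # count how many benchmark threads last ran on each node
    counts = [0] * NODES
    for pid in pids:
        for stat in glob.glob("/proc/%d/task/*/stat" % pid):
            try:
                with open(stat) as f:
                    # comm may contain spaces, so split after the last ')'
                    fields = f.read().rsplit(")", 1)[1].split()
            except OSError:
                continue                 # thread exited under us
            cpu = int(fields[36])        # /proc/*/stat field 39: last CPU
            counts[cpu // CPUS_PER_NODE] += 1
    return counts

def per_node_average(pids, samples=60, interval=1.0):
    totals = [0] * NODES
    for _ in range(samples):
        for node, n in enumerate(sample(pids)):
            totals[node] += n
        time.sleep(interval)
    return [t / samples for t in totals]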

Here are some eye charts:

These are the GROUP numbers from that machine on the v1 series (I don't have the
NORMAL lines handy for this one):
lu.C.x_152_GROUP_1 Average   18.08  18.17  19.58  19.29  19.25  17.50  21.46  18.67
lu.C.x_152_GROUP_2 Average   17.12  17.48  17.88  17.62  19.57  17.31  23.00  22.02
lu.C.x_152_GROUP_3 Average   17.82  17.97  18.12  18.18  24.55  22.18  16.97  16.21
lu.C.x_152_GROUP_4 Average   18.47  19.08  18.50  18.66  21.45  25.00  15.47  15.37
lu.C.x_152_GROUP_5 Average   20.46  20.71  27.38  24.75  17.06  16.65  12.81  12.19

lu.C.x_156_GROUP_1 Average   18.70  18.80  20.25  19.50  20.45  20.30  19.55  18.45
lu.C.x_156_GROUP_2 Average   19.29  19.90  17.71  18.10  20.76  21.57  19.81  18.86
lu.C.x_156_GROUP_3 Average   25.09  29.19  21.83  21.33  18.67  18.57  11.03  10.29
lu.C.x_156_GROUP_4 Average   18.60  19.10  19.20  18.70  20.30  20.00  19.70  20.40
lu.C.x_156_GROUP_5 Average   18.58  18.95  18.63  18.1   17.32  19.37  23.92  21.08

There are a couple of runs that did not balance well, but the overall results were good.

This is v4:
lu.C.x_152_GROUP_1   Average    18.80  19.25  21.95  21.25  17.55  17.25  17.85  18.10
lu.C.x_152_GROUP_2   Average    20.57  20.62  19.76  17.76  18.95  18.33  18.52  17.48
lu.C.x_152_GROUP_3   Average    15.39  12.22  13.96  12.19  25.51  28.91  21.88  21.94
lu.C.x_152_GROUP_4   Average    20.30  19.75  20.75  19.45  18.15  17.80  18.15  17.65
lu.C.x_152_GROUP_5   Average    15.13  12.21  13.63  11.39  25.42  30.21  21.55  22.46
lu.C.x_152_NORMAL_1  Average    17.00  16.88  19.52  18.28  19.24  19.08  21.08  20.92
lu.C.x_152_NORMAL_2  Average    18.61  16.56  18.56  17.00  20.56  20.28  20.00  20.44
lu.C.x_152_NORMAL_3  Average    19.27  19.77  21.23  20.86  18.00  17.68  17.73  17.45
lu.C.x_152_NORMAL_4  Average    20.24  19.33  21.33  21.10  17.33  18.43  17.57  16.67
lu.C.x_152_NORMAL_5  Average    21.27  20.36  20.86  19.36  17.50  17.77  17.32  17.55

lu.C.x_156_GROUP_1   Average    18.60  18.68  21.16  23.40  18.96  19.72  17.76  17.72
lu.C.x_156_GROUP_2   Average    22.76  21.71  20.55  21.32  18.18  16.42  17.58  17.47
lu.C.x_156_GROUP_3   Average    13.62  11.52  15.54  15.58  25.42  28.54  23.22  22.56
lu.C.x_156_GROUP_4   Average    17.73  18.14  21.95  21.82  19.73  19.68  18.55  18.41
lu.C.x_156_GROUP_5   Average    15.32  15.14  17.30  17.11  23.59  25.75  20.77  21.02
lu.C.x_156_NORMAL_1  Average    19.06  18.72  19.56  18.72  19.72  21.28  19.44  19.50
lu.C.x_156_NORMAL_2  Average    20.25  19.86  22.61  23.18  18.32  17.93  16.39  17.46
lu.C.x_156_NORMAL_3  Average    18.84  17.88  19.24  17.76  21.04  20.64  20.16  20.44
lu.C.x_156_NORMAL_4  Average    20.67  19.44  20.74  22.15  18.89  18.85  18.00  17.26
lu.C.x_156_NORMAL_5  Average    20.12  19.65  24.12  24.15  17.40  16.62  17.10  16.83

This one is better overall, but there are some mid-20s and 152_GROUP_5 is pretty bad.


This is v4a:
lu.C.x_152_GROUP_1   Average    28.64  34.49  23.60  24.48  10.35  11.99  8.36  10.09
lu.C.x_152_GROUP_2   Average    17.36  17.33  15.48  13.12  24.90  24.43  18.55  20.83
lu.C.x_152_GROUP_3   Average    20.00  19.92  20.21  21.33  18.50  18.50  16.50  17.04
lu.C.x_152_GROUP_4   Average    18.07  17.87  18.40  17.87  23.07  22.73  17.60  16.40
lu.C.x_152_GROUP_5   Average    25.50  24.69  21.48  21.46  16.85  16.00  14.06  11.96
lu.C.x_152_NORMAL_1  Average    22.27  20.77  20.60  19.83  16.73  17.53  15.83  18.43
lu.C.x_152_NORMAL_2  Average    19.83  20.81  23.06  21.97  17.28  16.92  15.83  16.31
lu.C.x_152_NORMAL_3  Average    17.85  19.31  18.85  19.08  19.00  19.31  19.08  19.54
lu.C.x_152_NORMAL_4  Average    18.87  18.13  19.00  20.27  18.20  18.67  19.73  19.13
lu.C.x_152_NORMAL_5  Average    18.16  18.63  18.11  17.00  19.79  20.63  19.47  20.21

lu.C.x_156_GROUP_1   Average    24.96  26.15  21.78  21.48  18.52  19.11  12.98  11.02
lu.C.x_156_GROUP_2   Average    18.69  19.00  18.65  18.42  20.50  20.46  19.85  20.42
lu.C.x_156_GROUP_3   Average    24.32  23.79  20.82  20.95  16.63  16.61  18.47  14.42
lu.C.x_156_GROUP_4   Average    18.27  18.34  14.88  16.07  27.00  21.93  20.56  18.95
lu.C.x_156_GROUP_5   Average    19.18  20.99  33.43  29.57  15.63  15.54  12.13  9.53
lu.C.x_156_NORMAL_1  Average    21.60  23.37  20.11  19.60  17.11  17.83  18.17  18.20
lu.C.x_156_NORMAL_2  Average    21.00  20.54  19.88  18.79  17.62  18.67  19.29  20.21
lu.C.x_156_NORMAL_3  Average    19.50  19.94  20.12  18.62  19.88  19.50  19.00  19.44
lu.C.x_156_NORMAL_4  Average    20.62  19.72  20.03  22.17  18.21  18.55  18.45  18.24
lu.C.x_156_NORMAL_5  Average    19.64  19.86  21.46  22.43  17.21  17.89  18.96  18.54


This shows much more imbalance in the GROUP case. There are some single digits
and some 30s.

For comparison, here are some numbers from my 4-node (80 CPU) system:

v4
lu.C.x_76_GROUP_1.ps.numa.hist   Average    19.58  17.67  18.25  20.50
lu.C.x_76_GROUP_2.ps.numa.hist   Average    19.08  19.17  17.67  20.08
lu.C.x_76_GROUP_3.ps.numa.hist   Average    19.42  18.58  18.42  19.58
lu.C.x_76_NORMAL_1.ps.numa.hist  Average    20.50  17.33  19.08  19.08
lu.C.x_76_NORMAL_2.ps.numa.hist  Average    19.45  18.73  19.27  18.55


v4a
lu.C.x_76_GROUP_1.ps.numa.hist   Average    19.46  19.15  18.62  18.77
lu.C.x_76_GROUP_2.ps.numa.hist   Average    19.00  18.58  17.75  20.67
lu.C.x_76_GROUP_3.ps.numa.hist   Average    19.08  17.08  20.08  19.77
lu.C.x_76_NORMAL_1.ps.numa.hist  Average    18.67  18.93  18.60  19.80
lu.C.x_76_NORMAL_2.ps.numa.hist  Average    19.08  18.67  18.58  19.67

Nicely balanced in both kernels, and normal and group are basically the
same.

There's still something between v1 and v4 on that 8-node system that is
illustrating the original problem. On our other test systems this series
really works nicely to solve the problem. And even if we can't get to the
bottom of this, it's a significant improvement.


Here is v3 for the 8-node system:
lu.C.x_152_GROUP_1  Average    17.52  16.86  17.90  18.52  20.00  19.00  22.00  20.19
lu.C.x_152_GROUP_2  Average    15.70  15.04  15.65  15.72  23.30  28.98  20.09  17.52
lu.C.x_152_GROUP_3  Average    27.72  32.79  22.89  22.62  11.01  12.90  12.14  9.93
lu.C.x_152_GROUP_4  Average    18.13  18.87  18.40  17.87  18.80  19.93  20.40  19.60
lu.C.x_152_GROUP_5  Average    24.14  26.46  20.92  21.43  14.70  16.05  15.14  13.16
lu.C.x_152_NORMAL_1 Average    21.03  22.43  20.27  19.97  18.37  18.80  16.27  14.87
lu.C.x_152_NORMAL_2 Average    19.24  18.29  18.41  17.41  19.71  19.00  20.29  19.65
lu.C.x_152_NORMAL_3 Average    19.43  20.00  19.05  20.24  18.76  17.38  18.52  18.62
lu.C.x_152_NORMAL_4 Average    17.19  18.25  17.81  18.69  20.44  19.75  20.12  19.75
lu.C.x_152_NORMAL_5 Average    19.25  19.56  19.12  19.56  19.38  19.38  18.12  17.62

lu.C.x_156_GROUP_1  Average    18.62  19.31  18.38  18.77  19.88  21.35  19.35  20.35
lu.C.x_156_GROUP_2  Average    15.58  12.72  14.96  14.83  20.59  19.35  29.75  28.22
lu.C.x_156_GROUP_3  Average    20.05  18.74  19.63  18.32  20.26  20.89  19.53  18.58
lu.C.x_156_GROUP_4  Average    14.77  11.42  13.01  10.09  27.05  33.52  23.16  22.98
lu.C.x_156_GROUP_5  Average    14.94  11.45  12.77  10.52  28.01  33.88  22.37  22.05
lu.C.x_156_NORMAL_1 Average    20.00  20.58  18.47  18.68  19.47  19.74  19.42  19.63
lu.C.x_156_NORMAL_2 Average    18.52  18.48  18.83  18.43  20.57  20.48  20.61  20.09
lu.C.x_156_NORMAL_3 Average    20.27  20.00  20.05  21.18  19.55  19.00  18.59  17.36
lu.C.x_156_NORMAL_4 Average    19.65  19.60  20.25  20.75  19.35  20.10  19.00  17.30
lu.C.x_156_NORMAL_5 Average    19.79  19.67  20.62  22.42  18.42  18.00  17.67  19.42


I'll try to find pre-patch results for this 8-node system. Just to keep things
together for reference, here is the 4-node system before this rework series:

lu.C.x_76_GROUP_1  Average    15.84  24.06  23.37  12.73
lu.C.x_76_GROUP_2  Average    15.29  22.78  22.49  15.45
lu.C.x_76_GROUP_3  Average    13.45  23.90  22.97  15.68
lu.C.x_76_NORMAL_1 Average    18.31  19.54  19.54  18.62
lu.C.x_76_NORMAL_2 Average    19.73  19.18  19.45  17.64

This produced a 4.5x slowdown for the group runs versus the nicely balanced
normal runs.



I can try to get traces, but this is not my system so it may take a little
while. I've found that the existing tracepoints don't give enough information
to see what is happening in this problem, but the visualization in kernelshark
does show the problem pretty well. Do you want just the existing sched
tracepoints, or should I update some of the trace_printks I used in the
earlier traces?
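
For reference, by the existing sched tracepoints I mean something along
these lines, captured via tracefs (a rough sketch with example paths and a
guessed event list; for the kernelshark views the capture would normally go
through trace-cmd record instead):

import time

TRACEFS = "/sys/kernel/tracing"          # or /sys/kernel/debug/tracing
EVENTS = ["sched:sched_switch", "sched:sched_wakeup",
          "sched:sched_migrate_task"]

def write(name, value):
    with open("%s/%s" % (TRACEFS, name), "w") as f:
        f.write(value)

def capture(seconds, outfile):
    write("trace", "")                   # writing to 'trace' clears the buffer
    write("set_event", "\n".join(EVENTS))
    write("tracing_on", "1")
    time.sleep(seconds)                  # let the benchmark run
    write("tracing_on", "0")
    with open("%s/trace" % TRACEFS) as src, open(outfile, "w") as dst:
        dst.write(src.read())

capture(30, "sched_trace.txt")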



Cheers,
Phil  


> 
> >
> > We're re-running the test to get more samples.
> 
> Thanks
> Vincent
> 
> >
> >
> > Other tests and systems were still fine.
> >
> >
> > Cheers,
> > Phil
> >
> >
> > > Numbers for my specific testcase (the cgroup imbalance) are basically
> > > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > > series solves that issue.
> > >
> > >
> > > Cheers,
> > > Phil
> > >
> > >
> > > >
> > > > >
> > > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > > >
> > > > >   kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > > >   kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > > >   kernel/sched/fair.c:     * XXX illustrate
> > > > >   kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > > >   kernel/sched/fair.c: * can also include other factors [XXX].
> > > > >   kernel/sched/fair.c: * [XXX expand on:
> > > > >   kernel/sched/fair.c: * [XXX more?]
> > > > >   kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > > >   kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
> > > > >   kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */
> > > > >
> > > >
> > > > I will have a look :-)
> > > >
> > > > > :-)
> > > > >
> > > > > Thanks,
> > > > >
> > > > >         Ingo
> > >
> > > --
> > >
> >
> > --
> >

-- 

