From: Phil Auld <pauld@redhat.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Hillf Danton <hdanton@sina.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 00/13] Reconcile NUMA balancing decisions with the load balancer v6
Date: Mon, 9 Mar 2020 15:12:34 -0400
Message-ID: <20200309191233.GG10065@pauld.bos.csb>
In-Reply-To: <20200224095223.13361-1-mgorman@techsingularity.net>

Hi Mel,

On Mon, Feb 24, 2020 at 09:52:10AM +0000 Mel Gorman wrote:
> The only differences in V6 are due to Vincent's latest patch series.
> 
> This is V5 which includes the latest versions of Vincent's patches
> addressing review feedback. Patches 4-9 are Vincent's work plus one
> important performance fix. Vincent's patches were retested and, while
> not presented in detail, the results were mostly an improvement.
> 
> Changelog since V5:
> o Import Vincent's latest patch set
> 
> Changelog since V4:
> o The V4 posting contained the wrong versions of the patches entirely
>   and is useless.
> 
> Changelog since V3:
> o Remove stray tab						(Valentin)
> o Update comment about allowing a move when src is imbalanced	(Hillf)
> o Updated "sched/pelt: Add a new runnable average signal"	(Vincent)
> 
> Changelog since V2:
> o Rebase on top of Vincent's series again
> o Fix a missed rcu_read_unlock
> o Reduce overhead of tracepoint
> 
> Changelog since V1:
> o Rebase on top of Vincent's series and rework
> 
> Note: The baseline for this series is tip/sched/core as of February
> 	12th rebased on top of v5.6-rc1. The series includes patches from
> 	Vincent as I needed to add a fix and build on top of it. Vincent's
> 	series on its own introduces performance regressions for *some*
> 	but not *all* machines, so the regression is easily missed. This
> 	series overall is close to performance-neutral with some gains
> 	depending on the machine. However, the end result does less work
> 	on NUMA balancing, and the fact that both the NUMA balancer and
> 	the load balancer use similar logic makes it much easier to
> 	understand.
> 
> The NUMA balancer makes placement decisions on tasks that only partially
> take the load balancer into account, and vice versa, but there are
> inconsistencies. This can result in placement decisions that override
> each other, leading to unnecessary migrations of both tasks and pages.
> This series reconciles many of the decisions -- partially Vincent's work,
> with some fixes and optimisations on top to merge our two series.
> 
> The first patch is unrelated. It's picked up by tip but was not present in
> the tree at the time of the fork. I'm including it here because I tested
> with it.
> 
> The second and third patches are tracing only and were needed to get
> sensible data out of ftrace with respect to task placement for NUMA
> balancing. The NUMA balancer is *far* easier to analyse with these
> patches, and the data informed how the series should be developed.
> 
> Patches 4-5 are Vincent's and use very similar code patterns and logic
> between the NUMA balancer and the load balancer. Patch 6 is a fix to
> Vincent's work that is necessary to avoid serious imbalances being
> introduced by the NUMA balancer. Patches 7-9 are also Vincent's and,
> while I have not reviewed them closely myself, others have.
> 
> The rest of the series is a mix of optimisations and improvements, one
> of which stops the NUMA balancer fighting with itself.
> 
> Note that this is not necessarily a universal performance win although
> performance results are generally ok (small gains/losses depending on
> the machine and workload). However, task migrations, page migrations,
> variability and overall overhead are generally reduced.
> 
> The main reference workload I used was specjbb running one JVM per node,
> which would typically be expected to split evenly across nodes. It's an
> interesting workload because the number of "warehouses" is not linearly
> related to the number of running tasks due to the creation of GC threads
> and other interfering activity. The mmtests configuration used is
> jvm-specjbb2005-multi with two runs -- one with ftrace enabling the
> relevant scheduler tracepoints.
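> 
> For reference, the runs were driven with mmtests along these lines (exact
> config path and helper script from memory, so treat the commands as
> approximate; the ftrace run used a variant of the same configuration):
> 
>   $ cd mmtests
>   $ ./run-mmtests.sh --config configs/config-jvm-specjbb2005-multi baseline-v3r1
>   $ ./run-mmtests.sh --config configs/config-jvm-specjbb2005-multi stopsearch-v6
>   $ cd work/log && ../../compare-kernels.sh    # generates the Hmean tables below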
> 
> An example of the headline performance of the series is below and the
> tested kernels are
> 
> baseline-v3r1	Patches 1-3 for the tracing
> loadavg-v3	Patches 1-5 (Add half of Vincent's work)
> lbidle-v6	Patches 1-6 Vincent's work with a fix on top
> classify-v6	Patches 1-9 Rest of Vincent's work
> stopsearch-v6	All patches
> 
>                                5.6.0-rc1              5.6.0-rc1              5.6.0-rc1              5.6.0-rc1              5.6.0-rc1
>                              baseline-v3             loadavg-v3              lbidle-v3            classify-v6          stopsearch-v6
> Hmean     tput-1     43593.49 (   0.00%)    41616.85 (  -4.53%)    43657.25 (   0.15%)    38110.46 * -12.58%*    42213.29 (  -3.17%)
> Hmean     tput-2     95692.84 (   0.00%)    93196.89 *  -2.61%*    92287.78 *  -3.56%*    89077.29 (  -6.91%)    96474.49 *   0.82%*
> Hmean     tput-3    143813.12 (   0.00%)   134447.05 *  -6.51%*   134587.84 *  -6.41%*   133706.98 (  -7.03%)   144279.90 (   0.32%)
> Hmean     tput-4    190702.67 (   0.00%)   176533.79 *  -7.43%*   182278.42 *  -4.42%*   181405.19 (  -4.88%)   189948.10 (  -0.40%)
> Hmean     tput-5    230242.39 (   0.00%)   209059.51 *  -9.20%*   223219.06 (  -3.05%)   227188.16 (  -1.33%)   225220.39 (  -2.18%)
> Hmean     tput-6    274868.74 (   0.00%)   246470.42 * -10.33%*   258387.09 *  -6.00%*   264252.76 (  -3.86%)   271429.49 (  -1.25%)
> Hmean     tput-7    312281.15 (   0.00%)   284564.06 *  -8.88%*   296446.00 *  -5.07%*   302682.72 (  -3.07%)   309187.26 (  -0.99%)
> Hmean     tput-8    347261.31 (   0.00%)   332019.39 *  -4.39%*   331202.25 *  -4.62%*   339469.52 (  -2.24%)   345504.60 (  -0.51%)
> Hmean     tput-9    387336.25 (   0.00%)   352219.62 *  -9.07%*   370222.03 *  -4.42%*   367077.01 (  -5.23%)   381610.17 (  -1.48%)
> Hmean     tput-10   421586.76 (   0.00%)   397304.22 (  -5.76%)   405458.01 (  -3.83%)   416689.66 (  -1.16%)   415549.97 (  -1.43%)
> Hmean     tput-11   459422.43 (   0.00%)   398023.51 * -13.36%*   441999.08 (  -3.79%)   449912.39 (  -2.07%)   454458.04 (  -1.08%)
> Hmean     tput-12   499087.97 (   0.00%)   400914.35 * -19.67%*   475755.59 (  -4.68%)   493678.32 (  -1.08%)   493936.79 (  -1.03%)
> Hmean     tput-13   536335.59 (   0.00%)   406101.41 * -24.28%*   514858.97 (  -4.00%)   528496.01 (  -1.46%)   530662.68 (  -1.06%)
> Hmean     tput-14   571542.75 (   0.00%)   478797.13 * -16.23%*   551716.00 (  -3.47%)   553771.29 (  -3.11%)   565915.55 (  -0.98%)
> Hmean     tput-15   601412.81 (   0.00%)   534776.98 * -11.08%*   580105.28 (  -3.54%)   597513.89 (  -0.65%)   596192.34 (  -0.87%)
> Hmean     tput-16   629817.55 (   0.00%)   407294.29 * -35.33%*   615606.40 (  -2.26%)   630044.12 (   0.04%)   627806.13 (  -0.32%)
> Hmean     tput-17   667025.18 (   0.00%)   457416.34 * -31.42%*   626074.81 (  -6.14%)   659706.41 (  -1.10%)   658350.40 (  -1.30%)
> Hmean     tput-18   688148.21 (   0.00%)   518534.45 * -24.65%*   663161.87 (  -3.63%)   675616.08 (  -1.82%)   682224.35 (  -0.86%)
> Hmean     tput-19   705092.87 (   0.00%)   466530.37 * -33.83%*   689430.29 (  -2.22%)   691050.89 (  -1.99%)   705532.41 (   0.06%)
> Hmean     tput-20   711481.44 (   0.00%)   564355.80 * -20.68%*   692170.67 (  -2.71%)   717866.36 (   0.90%)   716243.50 (   0.67%)
> Hmean     tput-21   739790.92 (   0.00%)   508542.10 * -31.26%*   712348.91 (  -3.71%)   724666.68 (  -2.04%)   723361.87 (  -2.22%)
> Hmean     tput-22   730593.57 (   0.00%)   540881.37 ( -25.97%)   709794.02 (  -2.85%)   727177.54 (  -0.47%)   721353.36 (  -1.26%)
> Hmean     tput-23   738401.59 (   0.00%)   561474.46 * -23.96%*   702869.93 (  -4.81%)   720954.73 (  -2.36%)   720813.53 (  -2.38%)
> Hmean     tput-24   731301.95 (   0.00%)   582929.73 * -20.29%*   704337.59 (  -3.69%)   717204.03 *  -1.93%*   714131.38 *  -2.35%*
> Hmean     tput-25   734414.40 (   0.00%)   591635.13 ( -19.44%)   702334.30 (  -4.37%)   720272.39 (  -1.93%)   714245.12 (  -2.75%)
> Hmean     tput-26   724774.17 (   0.00%)   701310.59 (  -3.24%)   700771.85 (  -3.31%)   718084.92 (  -0.92%)   712988.02 (  -1.63%)
> Hmean     tput-27   713484.55 (   0.00%)   632795.43 ( -11.31%)   692213.36 (  -2.98%)   710432.96 (  -0.43%)   703087.86 (  -1.46%)
> Hmean     tput-28   723111.86 (   0.00%)   697438.61 (  -3.55%)   695934.68 (  -3.76%)   708413.26 (  -2.03%)   703449.60 (  -2.72%)
> Hmean     tput-29   714690.69 (   0.00%)   675820.16 (  -5.44%)   689400.90 (  -3.54%)   698436.85 (  -2.27%)   699981.24 (  -2.06%)
> Hmean     tput-30   711106.03 (   0.00%)   699748.68 (  -1.60%)   688439.96 (  -3.19%)   698258.70 (  -1.81%)   691636.96 (  -2.74%)
> Hmean     tput-31   701632.39 (   0.00%)   698807.56 (  -0.40%)   682588.20 (  -2.71%)   696608.99 (  -0.72%)   691015.36 (  -1.51%)
> Hmean     tput-32   703479.77 (   0.00%)   679020.34 (  -3.48%)   674057.11 *  -4.18%*   690706.86 (  -1.82%)   684958.62 (  -2.63%)
> Hmean     tput-33   691594.71 (   0.00%)   686583.04 (  -0.72%)   673382.64 (  -2.63%)   687319.97 (  -0.62%)   683367.65 (  -1.19%)
> Hmean     tput-34   693435.51 (   0.00%)   685137.16 (  -1.20%)   674883.97 (  -2.68%)   684897.97 (  -1.23%)   674923.39 (  -2.67%)
> Hmean     tput-35   688036.06 (   0.00%)   682612.92 (  -0.79%)   668159.93 (  -2.89%)   679301.53 (  -1.27%)   678117.69 (  -1.44%)
> Hmean     tput-36   678957.95 (   0.00%)   670160.33 (  -1.30%)   662395.36 (  -2.44%)   672165.17 (  -1.00%)   668512.57 (  -1.54%)
> Hmean     tput-37   679748.70 (   0.00%)   675428.41 (  -0.64%)   666970.33 (  -1.88%)   674127.70 (  -0.83%)   667644.78 (  -1.78%)
> Hmean     tput-38   669969.62 (   0.00%)   670976.06 (   0.15%)   660499.74 (  -1.41%)   670848.38 (   0.13%)   666646.89 (  -0.50%)
> Hmean     tput-39   669291.41 (   0.00%)   665367.66 (  -0.59%)   649337.71 (  -2.98%)   659685.61 (  -1.44%)   658818.08 (  -1.56%)
> Hmean     tput-40   668074.80 (   0.00%)   672478.06 (   0.66%)   661273.87 (  -1.02%)   665147.36 (  -0.44%)   660279.43 (  -1.17%)
> 
> Note the regression with the first two patches of Vincent's work
> (loadavg-v3), followed by lbidle-v3 which mostly restores the performance,
> with the final version keeping things close to performance-neutral (a mix
> of gains and losses within the noise). This is not universal: a different
> 2-socket machine with fewer cores and older CPUs showed no difference.
> EPYC 1, EPYC 2 and a 4-socket Intel box were all affected by the
> regression but, again, the full series is mostly performance-neutral for
> specjbb while doing less NUMA balancing work.
> 
> While not presented here, the full series also shows that the throughput
> measured by each JVM is less variable.
> 
> The high-level NUMA stats from /proc/vmstat look like this
> 
>                                       5.6.0-rc1      5.6.0-rc1      5.6.0-rc1      5.6.0-rc1      5.6.0-rc1
>                                     baseline-v3     loadavg-v3      lbidle-v3    classify-v3  stopsearch-v3
> Ops NUMA alloc hit                    878062.00      882981.00      957762.00      961630.00      880821.00
> Ops NUMA alloc miss                        0.00           0.00           0.00           0.00           0.00
> Ops NUMA interleave hit               225582.00      237785.00      242554.00      234671.00      234818.00
> Ops NUMA alloc local                  764932.00      763850.00      835939.00      843950.00      763005.00
> Ops NUMA base-page range updates     2517600.00     3707398.00     2889034.00     2442203.00     3303790.00
> Ops NUMA PTE updates                 1754720.00     1672198.00     1569610.00     1356763.00     1591662.00
> Ops NUMA PMD updates                    1490.00        3975.00        2577.00        2120.00        3344.00
> Ops NUMA hint faults                 1678620.00     1586860.00     1475303.00     1285152.00     1512208.00
> Ops NUMA hint local faults %         1461203.00     1389234.00     1181790.00     1085517.00     1411194.00
> Ops NUMA hint local percent               87.05          87.55          80.10          84.47          93.32
> Ops NUMA pages migrated                69473.00       62504.00      121893.00       80802.00       46266.00
> Ops AutoNUMA cost                       8412.04        7961.44        7399.05        6444.39        7585.05
> 
> Overall, the local hint percentage is slightly better but, crucially,
> it is achieved with far fewer page migrations.
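> 
> The raw counters behind those "Ops" lines come straight from /proc/vmstat
> (numa_hint_faults, numa_hint_faults_local, numa_pages_migrated and so on);
> the values above are roughly the per-run deltas of:
> 
>   $ grep -E 'numa|pgmigrate' /proc/vmstat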
> 
> A separate run gathered information from ftrace and analysed it
> offline. This is based on an earlier version of the series but the changes
> are not significant enough to warrant a rerun as there are no changes in
> the NUMA balancing optimisations.
> 
>                                              5.6.0-rc1       5.6.0-rc1
>                                            baseline-v2   stopsearch-v2
> Ops Migrate failed no CPU                      1871.00          689.00
> Ops Migrate failed move to   idle                 0.00            0.00
> Ops Migrate failed swap task fail               872.00          568.00
> Ops Task Migrated swapped                      6702.00         3344.00
> Ops Task Migrated swapped local NID               0.00            0.00
> Ops Task Migrated swapped within group         1094.00          124.00
> Ops Task Migrated idle CPU                    14409.00        14610.00
> Ops Task Migrated idle CPU local NID              0.00            0.00
> Ops Task Migrate retry                         2355.00         1074.00
> Ops Task Migrate retry success                    0.00            0.00
> Ops Task Migrate retry failed                  2355.00         1074.00
> Ops Load Balance cross NUMA                 1248401.00      1261853.00
> 
> "Migrate failed no CPU" is the times when NUMA balancing did not
> find a suitable page on a preferred node. This is increased because
> the series avoids making decisions that the LB would override.
> 
> "Migrate failed swap task fail" is when migrate_swap fails and it
> can fail for a lot of reasons.
> 
> "Task Migrated swapped" is lower which would would be a concern but in
> this test, locality was higher unlike the test with tracing disabled.
> This event triggers when two tasks are swapped to keep load neutral or
> improved from the perspective of the load balancer. The series attempts
> to swap tasks that both move to their preferred node.
> 
> "Task Migrated idle CPU" is similar and while the the series does try to
> avoid NUMA Balancer and LB fighting each other, it also continues to
> obey overall CPU load balancer.
> 
> "Task Migrate retry failed" happens when NUMA balancing makes multiple
> attempts to place a task on a preferred node. It is slightly reduced here
> but it would generally be expected to happen to maintain CPU load balance.
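> 
> For anyone who wants to gather similar data, the long-standing NUMA
> scheduler tracepoints can be enabled from tracefs like this (the series
> adds further tracepoints on top, so this is indicative rather than the
> exact set used for the table above):
> 
>   $ cd /sys/kernel/debug/tracing              # or /sys/kernel/tracing
>   $ echo 1 > events/sched/sched_move_numa/enable
>   $ echo 1 > events/sched/sched_swap_numa/enable
>   $ echo 1 > events/sched/sched_stick_numa/enable
>   $ cat trace_pipe > sched-numa-events.txt    # capture for offline analysis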
> 
> A variety of other workloads were evaluated and appear to be mostly
> neutral or improved. netperf running on localhost shows gains of 1-8%
> depending on the machine. hackbench is a mixed bag -- small regressions
> of around 1-2% on one machine depending on the group count, but up to a
> 15% gain on another machine. dbench looks generally ok with very small
> performance gains. pgbench looks ok with small gains and losses, much of
> which is within the noise. schbench (a Facebook workload that is sensitive
> to wakeup latencies) is mostly good. The autonuma benchmark also generally
> looks good; most differences are within the noise but with higher locality
> success rates and fewer page migrations. Other long-lived workloads are
> still running but I'm not expecting many surprises.
> 
>  include/linux/sched.h        |  31 ++-
>  include/trace/events/sched.h |  49 ++--
>  kernel/sched/core.c          |  13 -
>  kernel/sched/debug.c         |  17 +-
>  kernel/sched/fair.c          | 626 ++++++++++++++++++++++++++++---------------
>  kernel/sched/pelt.c          |  59 ++--
>  kernel/sched/sched.h         |  42 ++-
>  7 files changed, 535 insertions(+), 302 deletions(-)
> 
> -- 
> 2.16.4
> 

Our Perf team has been comparing tip/sched/core (v5.6-rc3 + the lb_numa
series) with upstream v5.6-rc3 and has noticed some regressions.

Here's a summary of what Jirka Hladky reported to me:

---
We see the following problems when comparing 5.6.0_rc3.tip_lb_numa+-5 against
5.6.0-0.rc3.1.elrdy:

  • performance drops of 20-40% across almost all benchmarks, especially
    for low and medium thread counts, and especially on 4-NUMA-node and
    8-NUMA-node servers
  • 2-NUMA-node servers are affected as well, but the performance drop is
    less significant (10-20%)
  • servers with just one NUMA node are NOT affected
  • we see big load imbalances between the different NUMA nodes
---

The actual data reports are on an intranet web page so they are harder to
share. I can create PDFs or screenshots but I didn't want to just blast
those to the list. I'd be happy to send some directly if you are interested.

Some data I can easily include in text form shows the imbalances across the
NUMA nodes. This is for the NAS sp.C.x benchmark because it was the easiest
to pull and view as text. The regressions can be seen in other tests
as well.

For example:

5.6.0_rc3.tip_lb_numa+
sp.C.x_008_02  - CPU load average across the individual NUMA nodes 
(timestep is 5 seconds)
# NUMA | AVR | Utilization over time in percentage
  0    | 5   |  12  9  3  0  0 11  8  0  1  3  5 17  9  5  0  0  0 11  3
  1    | 16  |  20 21 10 10  2  6  9 12 11  9  9 23 24 23 24 24 24 19 20
  2    | 21  |  19 23 26 22 22 23 25 20 25 34 38 17 13 13 13 13 13 27 13
  3    | 15  |  19 23 20 21 21 15 15 20 20 18 10 10  9  9  9  9  9  9 11
  4    | 19  |  13 14 15 22 23 20 19 20 17 12 15 15 25 25 24 24 24 14 24
  5    | 3   |   0  2 11  6 20  8  0  0  0  0  0  0  0  0  0  0  0  0  9
  6    | 0   |   0  0  0  5  0  0  0  0  0  0  1  0  0  0  0  0  0  0  0
  7    | 4   |   0  0  0  0  0  0  4 11  9  0  0  0  0  5 12 12 12  3  0

5.6.0-0.rc3.1.elrdy
sp.C.x_008_01  - CPU load average across the individual NUMA nodes 
(timestep is 5 seconds)
# NUMA | AVR | Utilization over time in percentage
  0    | 13  |   6  8 10 10 11 10 18 13 20 17 14 15
  1    | 11  |  10 10 11 11  9 16 12 14  9 11 11 10
  2    | 17  |  25 19 16 11 13 12 11 16 17 22 22 16
  3    | 21  |  21 22 22 23 21 23 23 21 21 17 22 21
  4    | 14  |  20 23 11 12 15 18 12 10  9 13 12 18
  5    | 4   |   0  0  8 10  7  0  8  2  0  0  8  2
  6    | 1   |   0  5  1  0  0  0  0  0  0  1  0  0
  7    | 7   |   7  3 10 10 10 11  3  8 10  4  0  5
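
In case it helps with reproducing, a quick way to get a similar per-node
view is to group mpstat's per-CPU output by the sysfs NUMA topology. This
is only a rough sketch (mpstat is from sysstat; one 5-second sample), not
the script Jirka's team actually used:

  $ for d in /sys/devices/system/node/node[0-9]*/cpu[0-9]*; do
        echo "${d##*/cpu} $(basename $(dirname "$d"))"
    done > cpu2node.map
  $ mpstat -P ALL 5 1 | awk '
        NR == FNR { node[$1] = $2; next }              # cpu -> node map
        /^Average/ && $2 ~ /^[0-9]+$/ {                # per-CPU average lines
            busy[node[$2]] += 100 - $NF                # busy% = 100 - %idle
            cnt[node[$2]]++
        }
        END { for (nd in cnt) printf "%-6s %.0f%%\n", nd, busy[nd]/cnt[nd] }
    ' cpu2node.map -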


mops/s sp_C_x
kernel      		threads	8 	16 	32 	48 	56 	64
5.6.0-0.rc3.1.elrdy 	mean 	22819.8 39955.3 34301.6 31086.8 30316.2 30089.2
			max 	23911.4 47185.6 37995.9 33742.6 33048.0 30853.4
			min 	20799.3 36518.0 31459.4 27559.9 28565.7 29306.3
			stdev 	1096.7 	3965.3 	2350.2 	2134.7 	1871.1 	612.1
			first_q 22780.4 37263.7 32081.7 29955.8 28609.8 29632.0
			median 	22936.7 37577.6 34866.0 32020.8 29299.9 29906.3
			third_q 23671.0 41231.7 35105.1 32154.7 32057.8 30748.0
5.6.0_rc3.tip_lb_numa 	mean 	12111.1 24712.6 30147.8 32560.7 31040.4 28219.4
			max 	17772.9 28934.4 33828.3 33659.3 32275.3 30434.9
			min 	9869.9 	18407.9 25092.7 31512.9 29964.3 25652.8
			stdev 	2868.4 	3673.6 	2877.6 	832.2 	765.8 	1800.6
			first_q 10763.4 23346.1 29608.5 31827.2 30609.4 27008.8
			median 	10855.0 25415.4 30804.4 32462.1 31061.8 27992.6
			third_q 11294.5 27459.2 31405.0 33341.8 31291.2 30007.9
Comparison (%)		mean 	-47 	-38 	-12 	5 	2 	-6
			median 	-53 	-32 	-12 	1 	6 	-6


On 5.6.0-rc3.tip-lb_numa+ we see:

  • BIG fluctuation in runtime
  • NAS running up to 2x slower than on 5.6.0-0.rc3.1.elrdy

$ grep "Time in seconds" *log
sp.C.x.defaultRun.008threads.loop01.log: Time in seconds = 125.73
sp.C.x.defaultRun.008threads.loop02.log: Time in seconds = 87.54
sp.C.x.defaultRun.008threads.loop03.log: Time in seconds = 86.93
sp.C.x.defaultRun.008threads.loop04.log: Time in seconds = 165.98
sp.C.x.defaultRun.008threads.loop05.log: Time in seconds = 114.78

On the other hand, runtime on 5.6.0-0.rc3.1.elrdy is stable:
$ grep "Time in seconds" *log
sp.C.x.defaultRun.008threads.loop01.log: Time in seconds = 59.83
sp.C.x.defaultRun.008threads.loop02.log: Time in seconds = 67.72
sp.C.x.defaultRun.008threads.loop03.log: Time in seconds = 63.62
sp.C.x.defaultRun.008threads.loop04.log: Time in seconds = 55.01
sp.C.x.defaultRun.008threads.loop05.log: Time in seconds = 65.20
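
For what it's worth, the spread is easy to summarise with a quick awk pass
over the same logs (just a convenience one-liner, not part of the benchmark
itself):

  $ grep "Time in seconds" *log | awk '{ t=$NF; s+=t; ss+=t*t; n++ }
        END { m=s/n; printf "n=%d mean=%.1f stdev=%.1f\n", n, m, sqrt(ss/n-m*m) }'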

It looks like things are moving around a lot but not getting balanced
as well across the NUMA nodes. I have a couple of nice heat maps that
show this if you want to see them.


Thanks,
Phil

-- 

