linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/3] sched/fair: NOHZ cleanups and misfit improvement
@ 2019-02-11 17:59 Valentin Schneider
  2019-02-11 17:59 ` [PATCH v2 1/3] sched/fair: Comment some nohz_balancer_kick() kick conditions Valentin Schneider
                   ` (2 more replies)
From: Valentin Schneider @ 2019-02-11 17:59 UTC
  To: linux-kernel
  Cc: mingo, peterz, vincent.guittot, morten.rasmussen, Dietmar.Eggemann

In

  commit 5fbdfae5221a ("sched/fair: Kick nohz balance if rq->misfit_task_load")

a trigger for nohz kicks was added; it is required to offload misfit tasks
from LITTLE to big CPUs. However, those kicks could be issued far more
frequently than strictly needed.

This patch-set tunes down unneeded nohz kicks.

- Patch 1 adds some more comments to nohz_balancer_kick()
- Patches [2-3] tweak the nohz kick conditions for asymmetric systems

* Changes since v1

  - Patches 1-3 from v1 are in tip/sched/core and are thus not included;
    tip HEAD is 1b5500d73466 ("sched/fair: Remove unused 'sd' parameter from select_idle_smt()")
  - Patch 1 from v2 is new (Peter)
  - Patch 3 from v2 (5 from v1) now shuffles conditions to avoid a goto (Peter)

* nohz_balancer_kick() shuffling impact

  The ASYM_PACKING loop used to be towards the end of nohz_balancer_kick(),
  and the LLC condition was higher up. Since the LLC condition is very
  often true, we were probably avoiding the loop most of the time on systems
  that use ASYM_PACKING. However, I don't have such a system at hand, and I'm
  not sure hacking up a kernel to enable ASYM_PACKING on a system that doesn't
  need it would produce relevant results.

  I ran 20 iterations of

    'hackbench -g 1 -l 100000'

  on a 2-socket Xeon E5 (40 logical cores, no ASYM_PACKING), but the differences
  (hackbench duration & nohz_balancer_kick() ftrace profiles) are in the noise.

--------------------------------------------------------------------------------
* Testing
** kick_ilb() hits
  The patch-set causes a large reduction in calls to kick_ilb() (and thus in
  subsequent rescheduling interrupts & useless nohz balance calls) in most
  scenarios.
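
  For reference, one way to gather per-CPU hit counts like the ones below is
  ftrace's function profiler (a sketch; it assumes CONFIG_FUNCTION_PROFILER
  and that kick_ilb() shows up in available_filter_functions, i.e. hasn't
  been inlined away; the exact method used for these numbers isn't spelled
  out here):

    cd /sys/kernel/debug/tracing
    echo kick_ilb > set_ftrace_filter     # profile only kick_ilb()
    echo 1 > function_profile_enabled     # start counting
    # ... run the workload ...
    echo 0 > function_profile_enabled     # stop counting
    grep kick_ilb trace_stat/function*    # one functionN file per CPU N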

  The "best case" one is running NR_BIG_CPUS big tasks, which I tested with
  4 50% periodic tasks running for 5 seconds on my HiKey960 (4x4 big.LITTLE):

  | CPU | hits (baseline) | hits (patchset) |
  |-----+-----------------+-----------------|
  |   0 |              31 |              41 |
  |   1 |              21 |               3 |
  |   2 |              35 |               2 |
  |   3 |               9 |               4 |
  |-----+-----------------+-----------------|
  |   4 |             170 |               4 |
  |   5 |             573 |               4 |
  |   6 |             544 |               4 |
  |   7 |             579 |               4 |

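  (One way to generate a 50%-duty periodic task is sketched below; this is a
  hypothetical reproducer in plain C, as the actual workload generator used
  isn't named in this thread.)

    #include <time.h>

    /* Spin for roughly @ns nanoseconds. */
    static void busy_for_ns(long ns)
    {
            struct timespec start, now;

            clock_gettime(CLOCK_MONOTONIC, &start);
            do {
                    clock_gettime(CLOCK_MONOTONIC, &now);
            } while ((now.tv_sec - start.tv_sec) * 1000000000L +
                     (now.tv_nsec - start.tv_nsec) < ns);
    }

    int main(void)
    {
            /* 16ms period, 8ms busy -> ~50% duty cycle */
            struct timespec idle = { 0, 8 * 1000 * 1000 };
            int i;

            for (i = 0; i < 5000 / 16; i++) {       /* ~5 seconds */
                    busy_for_ns(8 * 1000 * 1000);   /* 8ms busy */
                    nanosleep(&idle, NULL);         /* 8ms idle */
            }
            return 0;
    }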

  A somewhat less idealized scenario with NR_CPUS-1 big tasks still shows
  some improvements (seven 100% tasks running for 5 seconds on my HiKey960):

  | CPU | hits (baseline) | hits (patchset) |
  |-----+-----------------+-----------------|
  |   0 |              14 |             122 |
  |   1 |              47 |             162 |
  |   2 |              11 |             156 |
  |   3 |               9 |               3 |
  |-----+-----------------+-----------------|
  |   4 |              53 |               6 |
  |   5 |             276 |              13 |
  |   6 |             312 |               7 |
  |   7 |             250 |              11 |

  I was surprised to see such an increase in calls to kick_ilb() from LITTLE
  CPUs ([0-3]), but after a bit of investigation it turns out that the big
  CPUs would always run nohz_balancer_kick() a jiffy before the LITTLEs, so
  the LITTLEs would always bail out because nohz.next_balance had just been
  updated before they called nohz_balancer_kick(). IOW,

      time_before(now, nohz.next_balance)

  would always be true on CPUs [0-3] during my workload. Quieting the kicks
  issued by the big CPUs allowed the LITTLEs to execute nohz_balancer_kick()
  past that condition, explaining the higher number of kicks issued from LITTLE
  CPUs.
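
  To illustrate, the early bail-out in question looks roughly like this
  (condensed from the nohz_balancer_kick() code touched by this series, not
  a verbatim quote):

    static void nohz_balancer_kick(struct rq *rq)
    {
            unsigned long now = jiffies;
            unsigned int flags = 0;
            ...
            /*
             * nohz.next_balance is pushed forward by the nohz balance that
             * follows a kick, so CPUs running this a jiffy later (the
             * LITTLEs here) take this exit and never reach the kick
             * conditions below.
             */
            if (time_before(now, nohz.next_balance))
                    goto out;
            ...
    }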
  
** misfit behaviour
  For good measure, I also ran the usual misfit tests [1], which showed no
  particular change.

[1]: https://github.com/ARM-software/lisa/blob/next/lisa/tests/kernel/scheduler/misfit.py

Valentin Schneider (3):
  sched/fair: Comment some nohz_balancer_kick() kick conditions
  sched/fair: Tune down misfit nohz kicks
  sched/fair: Skip LLC nohz logic for asymmetric systems

 kernel/sched/fair.c | 84 +++++++++++++++++++++++++++++++++------------
 1 file changed, 63 insertions(+), 21 deletions(-)

--
2.20.1



* [PATCH v2 1/3] sched/fair: Comment some nohz_balancer_kick() kick conditions
  2019-02-11 17:59 [PATCH v2 0/3] sched/fair: NOHZ cleanups and misfit improvement Valentin Schneider
@ 2019-02-11 17:59 ` Valentin Schneider
  2019-03-09 14:36   ` [tip:sched/urgent] " tip-bot for Valentin Schneider
  2019-03-19 11:12   ` tip-bot for Valentin Schneider
  2019-02-11 17:59 ` [PATCH v2 2/3] sched/fair: Tune down misfit nohz kicks Valentin Schneider
  2019-02-11 17:59 ` [PATCH v2 3/3] sched/fair: Skip LLC nohz logic for asymmetric systems Valentin Schneider
From: Valentin Schneider @ 2019-02-11 17:59 UTC
  To: linux-kernel
  Cc: mingo, peterz, vincent.guittot, morten.rasmussen, Dietmar.Eggemann

We now have a comment explaining the first sched_domain based NOHZ kick,
so might as well comment them all.

While at it, unwrap a line that fits under 80 characters.

Co-authored-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/sched/fair.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 17fcd15400e1..0f32e9830ec3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9612,8 +9612,12 @@ static noinline void nohz_balancer_kick(struct rq *rq)
 
 	sd = rcu_dereference(rq->sd);
 	if (sd) {
-		if ((rq->cfs.h_nr_running >= 1) &&
-		    check_cpu_capacity(rq, sd)) {
+		/*
+		 * If there's a CFS task and the current CPU has reduced
+		 * capacity; kick the ILB to see if there's a better CPU to run
+		 * on.
+		 */
+		if (rq->cfs.h_nr_running >= 1 && check_cpu_capacity(rq, sd)) {
 			flags = NOHZ_KICK_MASK;
 			goto unlock;
 		}
@@ -9621,6 +9625,11 @@ static noinline void nohz_balancer_kick(struct rq *rq)
 
 	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
 	if (sd) {
+		/*
+		 * When ASYM_PACKING; see if there's a more preferred CPU
+		 * currently idle; in which case, kick the ILB to move tasks
+		 * around.
+		 */
 		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
 			if (sched_asym_prefer(i, cpu)) {
 				flags = NOHZ_KICK_MASK;
--
2.20.1



* [PATCH v2 2/3] sched/fair: Tune down misfit nohz kicks
  2019-02-11 17:59 [PATCH v2 0/3] sched/fair: NOHZ cleanups and misfit improvement Valentin Schneider
  2019-02-11 17:59 ` [PATCH v2 1/3] sched/fair: Comment some nohz_balancer_kick() kick conditions Valentin Schneider
@ 2019-02-11 17:59 ` Valentin Schneider
  2019-03-09 14:37   ` [tip:sched/urgent] sched/fair: Tune down misfit NOHZ kicks tip-bot for Valentin Schneider
  2019-03-19 11:13   ` tip-bot for Valentin Schneider
  2019-02-11 17:59 ` [PATCH v2 3/3] sched/fair: Skip LLC nohz logic for asymmetric systems Valentin Schneider
From: Valentin Schneider @ 2019-02-11 17:59 UTC
  To: linux-kernel
  Cc: mingo, peterz, vincent.guittot, morten.rasmussen, Dietmar.Eggemann

In

  commit 3b1baa6496e6 ("sched/fair: Add 'group_misfit_task' load-balance type")

we set rq->misfit_task_load whenever the current running task has a
utilization greater than 80% of rq->cpu_capacity. A non-zero value in
this field enables misfit load balancing.
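
For reference, the marking boils down to something like this (a sketch
reconstructed from the description above, not the verbatim kernel code; the
~80% threshold corresponds to a 1280/1024 capacity margin):

  /* A task fits a CPU if its utilization is below ~80% of its capacity. */
  static inline int task_fits_capacity(struct task_struct *p, long capacity)
  {
          return capacity * 1024 > task_util_est(p) * 1280;
  }

  static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
  {
          if (!p || task_fits_capacity(p, capacity_of(task_cpu(p)))) {
                  rq->misfit_task_load = 0;
                  return;
          }

          rq->misfit_task_load = task_util_est(p);
  }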

However, if the task being looked at is already running on a CPU of
highest capacity, there's nothing more we can do for it. We can
currently spot this in update_sd_pick_busiest(), which prevents us
from selecting a sched_group of group_type == group_misfit_task as the
busiest group, but we don't do any of that in nohz_balancer_kick().

This means that we could repeatedly kick nohz CPUs when there is no
load-balance improvement to be had.

Introduce a check_misfit_status() helper that returns true iff there
is a CPU in the system that could give more CPU capacity to a rq's
misfit task - IOW, there exists a CPU of higher capacity_orig or the
rq's CPU is severely pressured by rt/IRQ.
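
As a concrete illustration (with hypothetical capacities): on a big.LITTLE
system where capacity_orig is 462 on the LITTLEs and 1024 on the bigs, a
misfit task on a LITTLE rq satisfies capacity_orig < rd->max_cpu_capacity
and triggers a kick, whereas the same task on a big rq only triggers one if
rt/IRQ pressure makes check_cpu_capacity() fire.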

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/sched/fair.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0f32e9830ec3..f9f11e140ba3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8058,6 +8058,18 @@ check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
 				(rq->cpu_capacity_orig * 100));
 }
 
+/*
+ * Check whether a rq has a misfit task and if it looks like we can actually
+ * help that task: we can migrate the task to a CPU of higher capacity, or
+ * the task's current CPU is heavily pressured.
+ */
+static inline int check_misfit_status(struct rq *rq, struct sched_domain *sd)
+{
+	return rq->misfit_task_load &&
+		(rq->cpu_capacity_orig < rq->rd->max_cpu_capacity ||
+		 check_cpu_capacity(rq, sd));
+}
+
 /*
  * Group imbalance indicates (and tries to solve) the problem where balancing
  * groups is inadequate due to ->cpus_allowed constraints.
@@ -9585,7 +9597,7 @@ static noinline void nohz_balancer_kick(struct rq *rq)
 	if (time_before(now, nohz.next_balance))
 		goto out;
 
-	if (rq->nr_running >= 2 || rq->misfit_task_load) {
+	if (rq->nr_running >= 2) {
 		flags = NOHZ_KICK_MASK;
 		goto out;
 	}
@@ -9623,6 +9635,18 @@ static noinline void nohz_balancer_kick(struct rq *rq)
 		}
 	}
 
+	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
+	if (sd) {
+		/*
+		 * When ASYM_CPUCAPACITY; see if there's a higher capacity CPU
+		 * to run the misfit task on.
+		 */
+		if (check_misfit_status(rq, sd)) {
+			flags = NOHZ_KICK_MASK;
+			goto unlock;
+		}
+	}
+
 	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
 	if (sd) {
 		/*
-- 
2.20.1



* [PATCH v2 3/3] sched/fair: Skip LLC nohz logic for asymmetric systems
  2019-02-11 17:59 [PATCH v2 0/3] sched/fair: NOHZ cleanups and misfit improvement Valentin Schneider
  2019-02-11 17:59 ` [PATCH v2 1/3] sched/fair: Comment some nohz_balancer_kick() kick conditions Valentin Schneider
  2019-02-11 17:59 ` [PATCH v2 2/3] sched/fair: Tune down misfit nohz kicks Valentin Schneider
@ 2019-02-11 17:59 ` Valentin Schneider
  2019-03-09 14:38   ` [tip:sched/urgent] sched/fair: Skip LLC NOHZ " tip-bot for Valentin Schneider
  2019-03-19 11:13   ` tip-bot for Valentin Schneider
From: Valentin Schneider @ 2019-02-11 17:59 UTC
  To: linux-kernel
  Cc: mingo, peterz, vincent.guittot, morten.rasmussen, Dietmar.Eggemann

The LLC nohz condition will become true as soon as >=2 CPUs in a
single LLC domain are busy. On big.LITTLE systems, this translates to
two or more CPUs of a "cluster" (big or LITTLE) being busy.

Issuing a nohz kick in these conditions isn't desirable for asymmetric
systems: if the busy CPUs can provide enough compute capacity to the
running tasks, then we can leave the nohz CPUs in peace.

Skip the LLC nohz condition for asymmetric systems, and rely on
nr_running & capacity checks to trigger nohz kicks when the system
actually needs them.
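
For reference, the resulting order of kick conditions (a condensed sketch of
the reshuffled nohz_balancer_kick() in the diff below):

  /*
   * 1. rq->nr_running >= 2                        -> kick
   * 2. rq->sd: CFS task + reduced CPU capacity    -> kick
   * 3. sd_asym_packing: more-preferred idle CPU   -> kick
   * 4. sd_asym_cpucapacity: helpable misfit task  -> kick,
   *    else bail out entirely (LLC logic skipped)
   * 5. sd_llc_shared: nr_busy_cpus > 1            -> kick
   */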

Suggested-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/sched/fair.c | 65 ++++++++++++++++++++++++++-------------------
 1 file changed, 37 insertions(+), 28 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f9f11e140ba3..a34e1610e21a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9603,24 +9603,6 @@ static noinline void nohz_balancer_kick(struct rq *rq)
 	}
 
 	rcu_read_lock();
-	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
-	if (sds) {
-		/*
-		 * If there is an imbalance between LLC domains (IOW we could
-		 * increase the overall cache use), we need some less-loaded LLC
-		 * domain to pull some load. Likewise, we may need to spread
-		 * load within the current LLC domain (e.g. packed SMT cores but
-		 * other CPUs are idle). We can't really know from here how busy
-		 * the others are - so just get a nohz balance going if it looks
-		 * like this LLC domain has tasks we could move.
-		 */
-		nr_busy = atomic_read(&sds->nr_busy_cpus);
-		if (nr_busy > 1) {
-			flags = NOHZ_KICK_MASK;
-			goto unlock;
-		}
-
-	}
 
 	sd = rcu_dereference(rq->sd);
 	if (sd) {
@@ -9635,6 +9617,21 @@ static noinline void nohz_balancer_kick(struct rq *rq)
 		}
 	}
 
+	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
+	if (sd) {
+		/*
+		 * When ASYM_PACKING; see if there's a more preferred CPU
+		 * currently idle; in which case, kick the ILB to move tasks
+		 * around.
+		 */
+		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
+			if (sched_asym_prefer(i, cpu)) {
+				flags = NOHZ_KICK_MASK;
+				goto unlock;
+			}
+		}
+	}
+
 	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
 	if (sd) {
 		/*
@@ -9645,20 +9642,32 @@ static noinline void nohz_balancer_kick(struct rq *rq)
 			flags = NOHZ_KICK_MASK;
 			goto unlock;
 		}
+
+		/*
+		 * For asymmetric systems, we do not want to nicely balance
+		 * cache use, instead we want to embrace asymmetry and only
+		 * ensure tasks have enough CPU capacity.
+		 *
+		 * Skip the LLC logic because it's not relevant in that case.
+		 */
+		goto unlock;
 	}
 
-	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
-	if (sd) {
+	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
+	if (sds) {
 		/*
-		 * When ASYM_PACKING; see if there's a more preferred CPU
-		 * currently idle; in which case, kick the ILB to move tasks
-		 * around.
+		 * If there is an imbalance between LLC domains (IOW we could
+		 * increase the overall cache use), we need some less-loaded LLC
+		 * domain to pull some load. Likewise, we may need to spread
+		 * load within the current LLC domain (e.g. packed SMT cores but
+		 * other CPUs are idle). We can't really know from here how busy
+		 * the others are - so just get a nohz balance going if it looks
+		 * like this LLC domain has tasks we could move.
 		 */
-		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
-			if (sched_asym_prefer(i, cpu)) {
-				flags = NOHZ_KICK_MASK;
-				goto unlock;
-			}
+		nr_busy = atomic_read(&sds->nr_busy_cpus);
+		if (nr_busy > 1) {
+			flags = NOHZ_KICK_MASK;
+			goto unlock;
 		}
 	}
 unlock:
-- 
2.20.1



* [tip:sched/urgent] sched/fair: Comment some nohz_balancer_kick() kick conditions
  2019-02-11 17:59 ` [PATCH v2 1/3] sched/fair: Comment some nohz_balancer_kick() kick conditions Valentin Schneider
@ 2019-03-09 14:36   ` tip-bot for Valentin Schneider
  2019-03-19 11:12   ` tip-bot for Valentin Schneider
From: tip-bot for Valentin Schneider @ 2019-03-09 14:36 UTC
  To: linux-tip-commits
  Cc: mingo, peterz, tglx, valentin.schneider, luto, dave.hansen,
	linux-kernel, torvalds, hpa, bp, riel

Commit-ID:  66856ff8e727e7da7da44a634f314b236510f419
Gitweb:     https://git.kernel.org/tip/66856ff8e727e7da7da44a634f314b236510f419
Author:     Valentin Schneider <valentin.schneider@arm.com>
AuthorDate: Mon, 11 Feb 2019 17:59:44 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 9 Mar 2019 14:03:52 +0100

sched/fair: Comment some nohz_balancer_kick() kick conditions

We now have a comment explaining the first sched_domain based NOHZ kick,
so might as well comment them all.

While at it, unwrap a line that fits under 80 characters.

Co-authored-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dietmar.Eggemann@arm.com
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: morten.rasmussen@arm.com
Cc: vincent.guittot@linaro.org
Link: https://lkml.kernel.org/r/20190211175946.4961-2-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8213ff6e365d..e6f7d39d4d45 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9612,8 +9612,12 @@ static void nohz_balancer_kick(struct rq *rq)
 
 	sd = rcu_dereference(rq->sd);
 	if (sd) {
-		if ((rq->cfs.h_nr_running >= 1) &&
-		    check_cpu_capacity(rq, sd)) {
+		/*
+		 * If there's a CFS task and the current CPU has reduced
+		 * capacity; kick the ILB to see if there's a better CPU to run
+		 * on.
+		 */
+		if (rq->cfs.h_nr_running >= 1 && check_cpu_capacity(rq, sd)) {
 			flags = NOHZ_KICK_MASK;
 			goto unlock;
 		}
@@ -9621,6 +9625,11 @@ static void nohz_balancer_kick(struct rq *rq)
 
 	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
 	if (sd) {
+		/*
+		 * When ASYM_PACKING; see if there's a more preferred CPU
+		 * currently idle; in which case, kick the ILB to move tasks
+		 * around.
+		 */
 		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
 			if (sched_asym_prefer(i, cpu)) {
 				flags = NOHZ_KICK_MASK;


* [tip:sched/urgent] sched/fair: Tune down misfit NOHZ kicks
  2019-02-11 17:59 ` [PATCH v2 2/3] sched/fair: Tune down misfit nohz kicks Valentin Schneider
@ 2019-03-09 14:37   ` tip-bot for Valentin Schneider
  2019-03-19 11:13   ` tip-bot for Valentin Schneider
From: tip-bot for Valentin Schneider @ 2019-03-09 14:37 UTC
  To: linux-tip-commits
  Cc: mingo, linux-kernel, tglx, riel, dave.hansen, peterz, hpa,
	valentin.schneider, bp, luto, torvalds

Commit-ID:  88a2f7ffd19245a75143a8180f75c0972cff0350
Gitweb:     https://git.kernel.org/tip/88a2f7ffd19245a75143a8180f75c0972cff0350
Author:     Valentin Schneider <valentin.schneider@arm.com>
AuthorDate: Mon, 11 Feb 2019 17:59:45 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 9 Mar 2019 14:03:53 +0100

sched/fair: Tune down misfit NOHZ kicks

In this commit:

  3b1baa6496e6 ("sched/fair: Add 'group_misfit_task' load-balance type")

we set rq->misfit_task_load whenever the current running task has a
utilization greater than 80% of rq->cpu_capacity. A non-zero value in
this field enables misfit load balancing.

However, if the task being looked at is already running on a CPU of
highest capacity, there's nothing more we can do for it. We can
currently spot this in update_sd_pick_busiest(), which prevents us
from selecting a sched_group of group_type == group_misfit_task as the
busiest group, but we don't do any of that in nohz_balancer_kick().

This means that we could repeatedly kick NOHZ CPUs when there is no
load-balance improvement to be had.

Introduce a check_misfit_status() helper that returns true iff there
is a CPU in the system that could give more CPU capacity to a rq's
misfit task - IOW, there exists a CPU of higher capacity_orig or the
rq's CPU is severely pressured by rt/IRQ.

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dietmar.Eggemann@arm.com
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: morten.rasmussen@arm.com
Cc: vincent.guittot@linaro.org
Link: https://lkml.kernel.org/r/20190211175946.4961-3-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e6f7d39d4d45..f0d2f8a352bf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8058,6 +8058,18 @@ check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
 				(rq->cpu_capacity_orig * 100));
 }
 
+/*
+ * Check whether a rq has a misfit task and if it looks like we can actually
+ * help that task: we can migrate the task to a CPU of higher capacity, or
+ * the task's current CPU is heavily pressured.
+ */
+static inline int check_misfit_status(struct rq *rq, struct sched_domain *sd)
+{
+	return rq->misfit_task_load &&
+		(rq->cpu_capacity_orig < rq->rd->max_cpu_capacity ||
+		 check_cpu_capacity(rq, sd));
+}
+
 /*
  * Group imbalance indicates (and tries to solve) the problem where balancing
  * groups is inadequate due to ->cpus_allowed constraints.
@@ -9585,7 +9597,7 @@ static void nohz_balancer_kick(struct rq *rq)
 	if (time_before(now, nohz.next_balance))
 		goto out;
 
-	if (rq->nr_running >= 2 || rq->misfit_task_load) {
+	if (rq->nr_running >= 2) {
 		flags = NOHZ_KICK_MASK;
 		goto out;
 	}
@@ -9623,6 +9635,18 @@ static void nohz_balancer_kick(struct rq *rq)
 		}
 	}
 
+	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
+	if (sd) {
+		/*
+		 * When ASYM_CPUCAPACITY; see if there's a higher capacity CPU
+		 * to run the misfit task on.
+		 */
+		if (check_misfit_status(rq, sd)) {
+			flags = NOHZ_KICK_MASK;
+			goto unlock;
+		}
+	}
+
 	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
 	if (sd) {
 		/*


* [tip:sched/urgent] sched/fair: Skip LLC NOHZ logic for asymmetric systems
  2019-02-11 17:59 ` [PATCH v2 3/3] sched/fair: Skip LLC nohz logic for asymmetric systems Valentin Schneider
@ 2019-03-09 14:38   ` tip-bot for Valentin Schneider
  2019-03-19 11:13   ` tip-bot for Valentin Schneider
From: tip-bot for Valentin Schneider @ 2019-03-09 14:38 UTC
  To: linux-tip-commits
  Cc: peterz, morten.rasmussen, hpa, linux-kernel, mingo, bp, riel,
	torvalds, dave.hansen, luto, valentin.schneider, tglx

Commit-ID:  ce28d2e53cda890771360d32259495dd6a9c4253
Gitweb:     https://git.kernel.org/tip/ce28d2e53cda890771360d32259495dd6a9c4253
Author:     Valentin Schneider <valentin.schneider@arm.com>
AuthorDate: Mon, 11 Feb 2019 17:59:46 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 9 Mar 2019 14:03:53 +0100

sched/fair: Skip LLC NOHZ logic for asymmetric systems

The LLC NOHZ condition will become true as soon as >=2 CPUs in a
single LLC domain are busy. On big.LITTLE systems, this translates to
two or more CPUs of a "cluster" (big or LITTLE) being busy.

Issuing a NOHZ kick in these conditions isn't desirable for asymmetric
systems: if the busy CPUs can provide enough compute capacity to the
running tasks, then we can leave the NOHZ CPUs in peace.

Skip the LLC NOHZ condition for asymmetric systems, and rely on
nr_running & capacity checks to trigger NOHZ kicks when the system
actually needs them.

Suggested-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dietmar.Eggemann@arm.com
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: vincent.guittot@linaro.org
Link: https://lkml.kernel.org/r/20190211175946.4961-4-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 65 ++++++++++++++++++++++++++++++-----------------------
 1 file changed, 37 insertions(+), 28 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f0d2f8a352bf..51003e1c794d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9603,24 +9603,6 @@ static void nohz_balancer_kick(struct rq *rq)
 	}
 
 	rcu_read_lock();
-	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
-	if (sds) {
-		/*
-		 * If there is an imbalance between LLC domains (IOW we could
-		 * increase the overall cache use), we need some less-loaded LLC
-		 * domain to pull some load. Likewise, we may need to spread
-		 * load within the current LLC domain (e.g. packed SMT cores but
-		 * other CPUs are idle). We can't really know from here how busy
-		 * the others are - so just get a nohz balance going if it looks
-		 * like this LLC domain has tasks we could move.
-		 */
-		nr_busy = atomic_read(&sds->nr_busy_cpus);
-		if (nr_busy > 1) {
-			flags = NOHZ_KICK_MASK;
-			goto unlock;
-		}
-
-	}
 
 	sd = rcu_dereference(rq->sd);
 	if (sd) {
@@ -9635,6 +9617,21 @@ static void nohz_balancer_kick(struct rq *rq)
 		}
 	}
 
+	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
+	if (sd) {
+		/*
+		 * When ASYM_PACKING; see if there's a more preferred CPU
+		 * currently idle; in which case, kick the ILB to move tasks
+		 * around.
+		 */
+		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
+			if (sched_asym_prefer(i, cpu)) {
+				flags = NOHZ_KICK_MASK;
+				goto unlock;
+			}
+		}
+	}
+
 	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
 	if (sd) {
 		/*
@@ -9645,20 +9642,32 @@ static void nohz_balancer_kick(struct rq *rq)
 			flags = NOHZ_KICK_MASK;
 			goto unlock;
 		}
+
+		/*
+		 * For asymmetric systems, we do not want to nicely balance
+		 * cache use, instead we want to embrace asymmetry and only
+		 * ensure tasks have enough CPU capacity.
+		 *
+		 * Skip the LLC logic because it's not relevant in that case.
+		 */
+		goto unlock;
 	}
 
-	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
-	if (sd) {
+	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
+	if (sds) {
 		/*
-		 * When ASYM_PACKING; see if there's a more preferred CPU
-		 * currently idle; in which case, kick the ILB to move tasks
-		 * around.
+		 * If there is an imbalance between LLC domains (IOW we could
+		 * increase the overall cache use), we need some less-loaded LLC
+		 * domain to pull some load. Likewise, we may need to spread
+		 * load within the current LLC domain (e.g. packed SMT cores but
+		 * other CPUs are idle). We can't really know from here how busy
+		 * the others are - so just get a nohz balance going if it looks
+		 * like this LLC domain has tasks we could move.
 		 */
-		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
-			if (sched_asym_prefer(i, cpu)) {
-				flags = NOHZ_KICK_MASK;
-				goto unlock;
-			}
+		nr_busy = atomic_read(&sds->nr_busy_cpus);
+		if (nr_busy > 1) {
+			flags = NOHZ_KICK_MASK;
+			goto unlock;
 		}
 	}
 unlock:


* [tip:sched/urgent] sched/fair: Comment some nohz_balancer_kick() kick conditions
  2019-02-11 17:59 ` [PATCH v2 1/3] sched/fair: Comment some nohz_balancer_kick() kick conditions Valentin Schneider
  2019-03-09 14:36   ` [tip:sched/urgent] " tip-bot for Valentin Schneider
@ 2019-03-19 11:12   ` tip-bot for Valentin Schneider
From: tip-bot for Valentin Schneider @ 2019-03-19 11:12 UTC
  To: linux-tip-commits
  Cc: hpa, riel, dave.hansen, bp, valentin.schneider, luto,
	linux-kernel, tglx, peterz, torvalds, mingo

Commit-ID:  e25a7a944f1936b5134b7ee06bc432fc701e4aa3
Gitweb:     https://git.kernel.org/tip/e25a7a944f1936b5134b7ee06bc432fc701e4aa3
Author:     Valentin Schneider <valentin.schneider@arm.com>
AuthorDate: Mon, 11 Feb 2019 17:59:44 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 19 Mar 2019 12:06:15 +0100

sched/fair: Comment some nohz_balancer_kick() kick conditions

We now have a comment explaining the first sched_domain based NOHZ kick,
so might as well comment them all.

While at it, unwrap a line that fits under 80 characters.

Co-authored-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dietmar.Eggemann@arm.com
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: morten.rasmussen@arm.com
Cc: vincent.guittot@linaro.org
Link: https://lkml.kernel.org/r/20190211175946.4961-2-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8213ff6e365d..e6f7d39d4d45 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9612,8 +9612,12 @@ static void nohz_balancer_kick(struct rq *rq)
 
 	sd = rcu_dereference(rq->sd);
 	if (sd) {
-		if ((rq->cfs.h_nr_running >= 1) &&
-		    check_cpu_capacity(rq, sd)) {
+		/*
+		 * If there's a CFS task and the current CPU has reduced
+		 * capacity; kick the ILB to see if there's a better CPU to run
+		 * on.
+		 */
+		if (rq->cfs.h_nr_running >= 1 && check_cpu_capacity(rq, sd)) {
 			flags = NOHZ_KICK_MASK;
 			goto unlock;
 		}
@@ -9621,6 +9625,11 @@ static void nohz_balancer_kick(struct rq *rq)
 
 	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
 	if (sd) {
+		/*
+		 * When ASYM_PACKING; see if there's a more preferred CPU
+		 * currently idle; in which case, kick the ILB to move tasks
+		 * around.
+		 */
 		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
 			if (sched_asym_prefer(i, cpu)) {
 				flags = NOHZ_KICK_MASK;


* [tip:sched/urgent] sched/fair: Tune down misfit NOHZ kicks
  2019-02-11 17:59 ` [PATCH v2 2/3] sched/fair: Tune down misfit nohz kicks Valentin Schneider
  2019-03-09 14:37   ` [tip:sched/urgent] sched/fair: Tune down misfit NOHZ kicks tip-bot for Valentin Schneider
@ 2019-03-19 11:13   ` tip-bot for Valentin Schneider
From: tip-bot for Valentin Schneider @ 2019-03-19 11:13 UTC
  To: linux-tip-commits
  Cc: valentin.schneider, torvalds, mingo, riel, tglx, bp, luto,
	dave.hansen, linux-kernel, hpa, peterz

Commit-ID:  a0fe2cf086aef213d1b4bca1b1291a3dee8357c9
Gitweb:     https://git.kernel.org/tip/a0fe2cf086aef213d1b4bca1b1291a3dee8357c9
Author:     Valentin Schneider <valentin.schneider@arm.com>
AuthorDate: Mon, 11 Feb 2019 17:59:45 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 19 Mar 2019 12:06:15 +0100

sched/fair: Tune down misfit NOHZ kicks

In this commit:

  3b1baa6496e6 ("sched/fair: Add 'group_misfit_task' load-balance type")

we set rq->misfit_task_load whenever the current running task has a
utilization greater than 80% of rq->cpu_capacity. A non-zero value in
this field enables misfit load balancing.

However, if the task being looked at is already running on a CPU of
highest capacity, there's nothing more we can do for it. We can
currently spot this in update_sd_pick_busiest(), which prevents us
from selecting a sched_group of group_type == group_misfit_task as the
busiest group, but we don't do any of that in nohz_balancer_kick().

This means that we could repeatedly kick NOHZ CPUs when there is no
load-balance improvement to be had.

Introduce a check_misfit_status() helper that returns true iff there
is a CPU in the system that could give more CPU capacity to a rq's
misfit task - IOW, there exists a CPU of higher capacity_orig or the
rq's CPU is severely pressured by rt/IRQ.

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dietmar.Eggemann@arm.com
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: morten.rasmussen@arm.com
Cc: vincent.guittot@linaro.org
Link: https://lkml.kernel.org/r/20190211175946.4961-3-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e6f7d39d4d45..f0d2f8a352bf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8058,6 +8058,18 @@ check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
 				(rq->cpu_capacity_orig * 100));
 }
 
+/*
+ * Check whether a rq has a misfit task and if it looks like we can actually
+ * help that task: we can migrate the task to a CPU of higher capacity, or
+ * the task's current CPU is heavily pressured.
+ */
+static inline int check_misfit_status(struct rq *rq, struct sched_domain *sd)
+{
+	return rq->misfit_task_load &&
+		(rq->cpu_capacity_orig < rq->rd->max_cpu_capacity ||
+		 check_cpu_capacity(rq, sd));
+}
+
 /*
  * Group imbalance indicates (and tries to solve) the problem where balancing
  * groups is inadequate due to ->cpus_allowed constraints.
@@ -9585,7 +9597,7 @@ static void nohz_balancer_kick(struct rq *rq)
 	if (time_before(now, nohz.next_balance))
 		goto out;
 
-	if (rq->nr_running >= 2 || rq->misfit_task_load) {
+	if (rq->nr_running >= 2) {
 		flags = NOHZ_KICK_MASK;
 		goto out;
 	}
@@ -9623,6 +9635,18 @@ static void nohz_balancer_kick(struct rq *rq)
 		}
 	}
 
+	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
+	if (sd) {
+		/*
+		 * When ASYM_CPUCAPACITY; see if there's a higher capacity CPU
+		 * to run the misfit task on.
+		 */
+		if (check_misfit_status(rq, sd)) {
+			flags = NOHZ_KICK_MASK;
+			goto unlock;
+		}
+	}
+
 	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
 	if (sd) {
 		/*


* [tip:sched/urgent] sched/fair: Skip LLC NOHZ logic for asymmetric systems
  2019-02-11 17:59 ` [PATCH v2 3/3] sched/fair: Skip LLC nohz logic for asymmetric systems Valentin Schneider
  2019-03-09 14:38   ` [tip:sched/urgent] sched/fair: Skip LLC NOHZ " tip-bot for Valentin Schneider
@ 2019-03-19 11:13   ` tip-bot for Valentin Schneider
From: tip-bot for Valentin Schneider @ 2019-03-19 11:13 UTC
  To: linux-tip-commits
  Cc: mingo, hpa, tglx, valentin.schneider, peterz, luto,
	morten.rasmussen, linux-kernel, dave.hansen, riel, torvalds, bp

Commit-ID:  b9a7b8831600afc51c9ba52c05f12db2266f01c7
Gitweb:     https://git.kernel.org/tip/b9a7b8831600afc51c9ba52c05f12db2266f01c7
Author:     Valentin Schneider <valentin.schneider@arm.com>
AuthorDate: Mon, 11 Feb 2019 17:59:46 +0000
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 19 Mar 2019 12:06:15 +0100

sched/fair: Skip LLC NOHZ logic for asymmetric systems

The LLC NOHZ condition will become true as soon as >=2 CPUs in a
single LLC domain are busy. On big.LITTLE systems, this translates to
two or more CPUs of a "cluster" (big or LITTLE) being busy.

Issuing a NOHZ kick in these conditions isn't desirable for asymmetric
systems: if the busy CPUs can provide enough compute capacity to the
running tasks, then we can leave the NOHZ CPUs in peace.

Skip the LLC NOHZ condition for asymmetric systems, and rely on
nr_running & capacity checks to trigger NOHZ kicks when the system
actually needs them.

Suggested-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dietmar.Eggemann@arm.com
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: vincent.guittot@linaro.org
Link: https://lkml.kernel.org/r/20190211175946.4961-4-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 65 ++++++++++++++++++++++++++++++-----------------------
 1 file changed, 37 insertions(+), 28 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f0d2f8a352bf..51003e1c794d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9603,24 +9603,6 @@ static void nohz_balancer_kick(struct rq *rq)
 	}
 
 	rcu_read_lock();
-	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
-	if (sds) {
-		/*
-		 * If there is an imbalance between LLC domains (IOW we could
-		 * increase the overall cache use), we need some less-loaded LLC
-		 * domain to pull some load. Likewise, we may need to spread
-		 * load within the current LLC domain (e.g. packed SMT cores but
-		 * other CPUs are idle). We can't really know from here how busy
-		 * the others are - so just get a nohz balance going if it looks
-		 * like this LLC domain has tasks we could move.
-		 */
-		nr_busy = atomic_read(&sds->nr_busy_cpus);
-		if (nr_busy > 1) {
-			flags = NOHZ_KICK_MASK;
-			goto unlock;
-		}
-
-	}
 
 	sd = rcu_dereference(rq->sd);
 	if (sd) {
@@ -9635,6 +9617,21 @@ static void nohz_balancer_kick(struct rq *rq)
 		}
 	}
 
+	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
+	if (sd) {
+		/*
+		 * When ASYM_PACKING; see if there's a more preferred CPU
+		 * currently idle; in which case, kick the ILB to move tasks
+		 * around.
+		 */
+		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
+			if (sched_asym_prefer(i, cpu)) {
+				flags = NOHZ_KICK_MASK;
+				goto unlock;
+			}
+		}
+	}
+
 	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
 	if (sd) {
 		/*
@@ -9645,20 +9642,32 @@ static void nohz_balancer_kick(struct rq *rq)
 			flags = NOHZ_KICK_MASK;
 			goto unlock;
 		}
+
+		/*
+		 * For asymmetric systems, we do not want to nicely balance
+		 * cache use, instead we want to embrace asymmetry and only
+		 * ensure tasks have enough CPU capacity.
+		 *
+		 * Skip the LLC logic because it's not relevant in that case.
+		 */
+		goto unlock;
 	}
 
-	sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
-	if (sd) {
+	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
+	if (sds) {
 		/*
-		 * When ASYM_PACKING; see if there's a more preferred CPU
-		 * currently idle; in which case, kick the ILB to move tasks
-		 * around.
+		 * If there is an imbalance between LLC domains (IOW we could
+		 * increase the overall cache use), we need some less-loaded LLC
+		 * domain to pull some load. Likewise, we may need to spread
+		 * load within the current LLC domain (e.g. packed SMT cores but
+		 * other CPUs are idle). We can't really know from here how busy
+		 * the others are - so just get a nohz balance going if it looks
+		 * like this LLC domain has tasks we could move.
 		 */
-		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
-			if (sched_asym_prefer(i, cpu)) {
-				flags = NOHZ_KICK_MASK;
-				goto unlock;
-			}
+		nr_busy = atomic_read(&sds->nr_busy_cpus);
+		if (nr_busy > 1) {
+			flags = NOHZ_KICK_MASK;
+			goto unlock;
 		}
 	}
 unlock:
