* [PATCH] sched/fair: Fix load_above_capacity fixed point arithmetic width
@ 2016-08-10 10:27 Dietmar Eggemann
  2016-08-10 12:28 ` Vincent Guittot
  2016-09-05 11:55 ` [tip:sched/core] " tip-bot for Dietmar Eggemann
  0 siblings, 2 replies; 3+ messages in thread
From: Dietmar Eggemann @ 2016-08-10 10:27 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, linux-kernel
  Cc: Morten Rasmussen, Vincent Guittot, Yuyang Du

Since commit 2159197d6677 ("sched/core: Enable increased load resolution
on 64-bit kernels") we now have two different fixed point units for
load.
load_above_capacity has to use the same 10-bit fixed point unit as
PELT, whereas NICE_0_LOAD uses a 20-bit fixed point unit on 64-bit
kernels.
Fix this by scaling down NICE_0_LOAD when multiplying
load_above_capacity by it.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Acked-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4088eedea763..fe4093807852 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7147,7 +7147,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		load_above_capacity = busiest->sum_nr_running * SCHED_CAPACITY_SCALE;
 		if (load_above_capacity > busiest->group_capacity) {
 			load_above_capacity -= busiest->group_capacity;
-			load_above_capacity *= NICE_0_LOAD;
+			load_above_capacity *= scale_load_down(NICE_0_LOAD);
 			load_above_capacity /= busiest->group_capacity;
 		} else
 			load_above_capacity = ~0UL;
-- 
1.9.1


* Re: [PATCH] sched/fair: Fix load_above_capacity fixed point arithmetic width
  2016-08-10 10:27 [PATCH] sched/fair: Fix load_above_capacity fixed point arithmetic width Dietmar Eggemann
@ 2016-08-10 12:28 ` Vincent Guittot
  2016-09-05 11:55 ` [tip:sched/core] " tip-bot for Dietmar Eggemann
  1 sibling, 0 replies; 3+ messages in thread
From: Vincent Guittot @ 2016-08-10 12:28 UTC (permalink / raw)
  To: Dietmar Eggemann
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Morten Rasmussen, Yuyang Du

On 10 August 2016 at 12:27, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
> Since commit 2159197d6677 ("sched/core: Enable increased load resolution
> on 64-bit kernels") we now have two different fixed point units for
> load.
> load_above_capacity has to use the same 10-bit fixed point unit as
> PELT, whereas NICE_0_LOAD uses a 20-bit fixed point unit on 64-bit
> kernels.
> Fix this by scaling down NICE_0_LOAD when multiplying
> load_above_capacity by it.
>
> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Acked-by: Morten Rasmussen <morten.rasmussen@arm.com>
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4088eedea763..fe4093807852 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7147,7 +7147,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>                 load_above_capacity = busiest->sum_nr_running * SCHED_CAPACITY_SCALE;
>                 if (load_above_capacity > busiest->group_capacity) {
>                         load_above_capacity -= busiest->group_capacity;
> -                       load_above_capacity *= NICE_0_LOAD;
> +                       load_above_capacity *= scale_load_down(NICE_0_LOAD);

FWIW, Acked-by: Vincent Guittot <vincent.guittot@linaro.org>

>                         load_above_capacity /= busiest->group_capacity;
>                 } else
>                         load_above_capacity = ~0UL;
> --
> 1.9.1
>


* [tip:sched/core] sched/fair: Fix load_above_capacity fixed point arithmetic width
  2016-08-10 10:27 [PATCH] sched/fair: Fix load_above_capacity fixed point arithmetic width Dietmar Eggemann
  2016-08-10 12:28 ` Vincent Guittot
@ 2016-09-05 11:55 ` tip-bot for Dietmar Eggemann
  1 sibling, 0 replies; 3+ messages in thread
From: tip-bot for Dietmar Eggemann @ 2016-09-05 11:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: dietmar.eggemann, peterz, torvalds, morten.rasmussen,
	linux-kernel, vincent.guittot, tglx, hpa, mingo, yuyang.du

Commit-ID:  2665621506e178a1f62e59200403c359c463ea5e
Gitweb:     http://git.kernel.org/tip/2665621506e178a1f62e59200403c359c463ea5e
Author:     Dietmar Eggemann <dietmar.eggemann@arm.com>
AuthorDate: Wed, 10 Aug 2016 11:27:27 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 5 Sep 2016 13:29:44 +0200

sched/fair: Fix load_above_capacity fixed point arithmetic width

Since commit:

  2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels")

we now have two different fixed point units for load.

load_above_capacity has to use the same 10-bit fixed point unit as
PELT, whereas NICE_0_LOAD uses a 20-bit fixed point unit on 64-bit
kernels.

Fix this by scaling down NICE_0_LOAD when multiplying
load_above_capacity by it.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Yuyang Du <yuyang.du@intel.com>
Link: http://lkml.kernel.org/r/1470824847-5316-1-git-send-email-dietmar.eggemann@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9a18aae..6011bfe 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7193,7 +7193,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		load_above_capacity = busiest->sum_nr_running * SCHED_CAPACITY_SCALE;
 		if (load_above_capacity > busiest->group_capacity) {
 			load_above_capacity -= busiest->group_capacity;
-			load_above_capacity *= NICE_0_LOAD;
+			load_above_capacity *= scale_load_down(NICE_0_LOAD);
 			load_above_capacity /= busiest->group_capacity;
 		} else
 			load_above_capacity = ~0UL;

