* [PATCH] sched/fair: Fix fixed point arithmetic width for shares and effective load
@ 2016-08-22 14:00 Dietmar Eggemann
From: Dietmar Eggemann @ 2016-08-22 14:00 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, linux-kernel

Since commit 2159197d6677 ("sched/core: Enable increased load resolution
on 64-bit kernels") we now have two different fixed point units for
load:

'shares' in calc_cfs_shares() has a 20-bit fixed point unit on 64-bit
kernels. Therefore use scale_load() on MIN_SHARES.

'wl' in effective_load() has a 10-bit fixed point unit. Therefore use
scale_load_down() on tg->shares, which has a 20-bit fixed point unit on
64-bit kernels.
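
For reference, a simplified sketch of the two units (paraphrased from
include/linux/sched.h and kernel/sched/sched.h of that era; the exact
definitions may differ slightly):

	/* SCHED_FIXEDPOINT_SHIFT is 10: the base fixed point unit. */
	#ifdef CONFIG_64BIT
	/* 64-bit: load.weight and tg->shares carry an extra 10 bits. */
	# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
	# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
	# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
	#else
	/* 32-bit: everything stays in the 10-bit unit. */
	# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
	# define scale_load(w)		(w)
	# define scale_load_down(w)	(w)
	#endif

	#define NICE_0_LOAD		(1L << NICE_0_LOAD_SHIFT)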

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---

Besides the issue with load_above_capacity and its fixed point unit for
load (see "[PATCH] sched/fair: Fix load_above_capacity fixed point
arithmetic width"), there are similar issues for 'shares' in
calc_cfs_shares() as well as for 'wl' in effective_load().
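
A worked example (with assumed numbers, not taken from the patch) of why
the MIN_SHARES clamp needs scale_load(): on a 64-bit kernel a group with
the default weight has tg->shares = scale_load(1024) = 1048576, i.e. a
20-bit unit value, while MIN_SHARES is 2, a 10-bit unit value. 'load'
and 'tg_weight' share the same unit and cancel, so 'shares' ends up in
the 20-bit unit of tg->shares. For a nearly idle cfs_rq with load = 1
and tg_weight = 4096:

	shares = tg->shares * load / tg_weight
	       = 1048576 * 1 / 4096
	       = 256				(20-bit unit)

	old: shares < MIN_SHARES (2) is false, so shares stays 256,
	     which is weight 0 once scaled back to the 10-bit unit.
	new: shares < scale_load(MIN_SHARES) (2048) is true, so shares
	     becomes 2048, i.e. the intended minimum weight of 2.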

 kernel/sched/fair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 61d485421bed..18f80c4c7737 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2530,8 +2530,8 @@ static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
 	if (tg_weight)
 		shares /= tg_weight;
 
-	if (shares < MIN_SHARES)
-		shares = MIN_SHARES;
+	if (shares < scale_load(MIN_SHARES))
+		shares = scale_load(MIN_SHARES);
 	if (shares > tg->shares)
 		shares = tg->shares;
 
@@ -5023,9 +5023,9 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		 * wl = S * s'_i; see (2)
 		 */
 		if (W > 0 && w < W)
-			wl = (w * (long)tg->shares) / W;
+			wl = (w * (long)scale_load_down(tg->shares)) / W;
 		else
-			wl = tg->shares;
+			wl = scale_load_down(tg->shares);
 
 		/*
 		 * Per the above, wl is the new se->load.weight value; since
-- 
1.9.1
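
A similar worked example (again with assumed numbers) for the
effective_load() change: 'wl' is computed in the 10-bit unit, so with
the default tg->shares = scale_load(1024) = 1048576, w = 512 and
W = 1024:

	old: wl = (w * tg->shares) / W
	        = (512 * 1048576) / 1024 = 524288	(20-bit unit,
	          roughly 1024x too large for the 10-bit arithmetic)

	new: wl = (w * scale_load_down(tg->shares)) / W
	        = (512 * 1024) / 1024 = 512		(10-bit unit)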


* [tip:sched/core] sched/fair: Fix fixed point arithmetic width for shares and effective load
@ 2016-09-30 11:55 tip-bot for Dietmar Eggemann
From: tip-bot for Dietmar Eggemann @ 2016-09-30 11:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, hpa, peterz, efault, torvalds, mingo, dietmar.eggemann,
	linux-kernel

Commit-ID:  ab522e33f91799661aad47bebb691f241a9f6bb8
Gitweb:     http://git.kernel.org/tip/ab522e33f91799661aad47bebb691f241a9f6bb8
Author:     Dietmar Eggemann <dietmar.eggemann@arm.com>
AuthorDate: Mon, 22 Aug 2016 15:00:41 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 30 Sep 2016 10:53:19 +0200

sched/fair: Fix fixed point arithmetic width for shares and effective load

Since commit:

  2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels")

we now have two different fixed point units for load:

- 'shares' in calc_cfs_shares() has a 20-bit fixed point unit on 64-bit
  kernels. Therefore use scale_load() on MIN_SHARES.

- 'wl' in effective_load() has a 10-bit fixed point unit. Therefore use
  scale_load_down() on tg->shares, which has a 20-bit fixed point unit on
  64-bit kernels.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1471874441-24701-1-git-send-email-dietmar.eggemann@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8fb4d19..786ef94 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5017,9 +5017,9 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		 * wl = S * s'_i; see (2)
 		 */
 		if (W > 0 && w < W)
-			wl = (w * (long)tg->shares) / W;
+			wl = (w * (long)scale_load_down(tg->shares)) / W;
 		else
-			wl = tg->shares;
+			wl = scale_load_down(tg->shares);
 
 		/*
 		 * Per the above, wl is the new se->load.weight value; since
