* [PATCH v2] sched/fair: Remove setting task's se->runnable_weight during PELT update
@ 2018-08-03 14:05 Dietmar Eggemann
  2018-09-10 23:49 ` Dietmar Eggemann
  2018-10-02 10:07 ` [tip:sched/core] " tip-bot for Dietmar Eggemann
  0 siblings, 2 replies; 4+ messages in thread
From: Dietmar Eggemann @ 2018-08-03 14:05 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar
  Cc: Vincent Guittot, Morten Rasmussen, Patrick Bellasi,
	Quentin Perret, Joel Fernandes, linux-kernel

A CFS (SCHED_OTHER, SCHED_BATCH or SCHED_IDLE policy) task's
se->runnable_weight must always be in sync with its se->load.weight.

se->runnable_weight is set to se->load.weight when the task is
forked (init_entity_runnable_average()) or reniced (reweight_entity()).

There are two cases in set_load_weight() which, since they currently only
set se->load.weight, can lead to a situation in which se->load.weight
differs from se->runnable_weight for a CFS task:

(1) A task switches to SCHED_IDLE.

(2) A SCHED_FIFO, SCHED_RR or SCHED_DEADLINE task that has been reniced
    (during which only its static priority gets set) switches to
    SCHED_OTHER or SCHED_BATCH (see the sketch below).

Set se->runnable_weight to se->load.weight in these two cases to prevent
this. This eliminates the need to explicitly set it to se->load.weight
during PELT updates in the CFS scheduler fastpath.
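
For illustration, case (2) corresponds to a user-space sequence like the
following (a hypothetical sketch, not part of the patch; assume pid names
a task currently running SCHED_FIFO):

	#include <sched.h>
	#include <sys/resource.h>

	/* Renice the RT task: set_user_nice() only updates the task's
	 * static priority here, its CFS weights are left untouched. */
	setpriority(PRIO_PROCESS, pid, 5);

	/* Switch the task back to CFS: without this patch,
	 * set_load_weight() derives se->load.weight from the new static
	 * priority but leaves se->runnable_weight stale. */
	struct sched_param sp = { .sched_priority = 0 };
	sched_setscheduler(pid, SCHED_OTHER, &sp);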

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---

Changes v1->v2:
- Rebased on latest tip/sched/core

This patch has been tested with appropriate BUG_ON()s in
__update_load_avg_blocked_se() and __update_load_avg_se() on an Ubuntu
18.04 desktop.
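
The assertions used were of the following shape (a sketch; the exact
debug hunks were not posted with this message):

	/* Debug-only check placed at the top of __update_load_avg_se()
	 * and __update_load_avg_blocked_se(): the invariant must hold
	 * for every task entity before the PELT update runs. */
	BUG_ON(entity_is_task(se) && se->runnable_weight != se->load.weight);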

 kernel/sched/core.c | 2 ++
 kernel/sched/pelt.c | 6 ------
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index deafa9fe602b..2a08418db3d1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -701,6 +701,7 @@ static void set_load_weight(struct task_struct *p, bool update_load)
 	if (idle_policy(p->policy)) {
 		load->weight = scale_load(WEIGHT_IDLEPRIO);
 		load->inv_weight = WMULT_IDLEPRIO;
+		p->se.runnable_weight = load->weight;
 		return;
 	}
 
@@ -713,6 +714,7 @@ static void set_load_weight(struct task_struct *p, bool update_load)
 	} else {
 		load->weight = scale_load(sched_prio_to_weight[prio]);
 		load->inv_weight = sched_prio_to_wmult[prio];
+		p->se.runnable_weight = load->weight;
 	}
 }
 
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 35475c0c5419..d0016b16d23a 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -269,9 +269,6 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runna
 
 int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se)
 {
-	if (entity_is_task(se))
-		se->runnable_weight = se->load.weight;
-
 	if (___update_load_sum(now, cpu, &se->avg, 0, 0, 0)) {
 		___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
 		return 1;
@@ -282,9 +279,6 @@ int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se)
 
 int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	if (entity_is_task(se))
-		se->runnable_weight = se->load.weight;
-
 	if (___update_load_sum(now, cpu, &se->avg, !!se->on_rq, !!se->on_rq,
 				cfs_rq->curr == se)) {
 
-- 
2.11.0



* Re: [PATCH v2] sched/fair: Remove setting task's se->runnable_weight during PELT update
  2018-08-03 14:05 [PATCH v2] sched/fair: Remove setting task's se->runnable_weight during PELT update Dietmar Eggemann
@ 2018-09-10 23:49 ` Dietmar Eggemann
  2018-09-11 11:30   ` Peter Zijlstra
  2018-10-02 10:07 ` [tip:sched/core] " tip-bot for Dietmar Eggemann
  1 sibling, 1 reply; 4+ messages in thread
From: Dietmar Eggemann @ 2018-09-10 23:49 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar
  Cc: Vincent Guittot, Morten Rasmussen, Patrick Bellasi,
	Quentin Perret, Joel Fernandes, linux-kernel

Hi,

On 08/03/2018 07:05 AM, Dietmar Eggemann wrote:
> [...]

Is there anything else I should do for this patch?

Thanks,

-- Dietmar



* Re: [PATCH v2] sched/fair: Remove setting task's se->runnable_weight during PELT update
  2018-09-10 23:49 ` Dietmar Eggemann
@ 2018-09-11 11:30   ` Peter Zijlstra
  0 siblings, 0 replies; 4+ messages in thread
From: Peter Zijlstra @ 2018-09-11 11:30 UTC (permalink / raw)
  To: Dietmar Eggemann
  Cc: Ingo Molnar, Vincent Guittot, Morten Rasmussen, Patrick Bellasi,
	Quentin Perret, Joel Fernandes, linux-kernel

On Mon, Sep 10, 2018 at 04:49:05PM -0700, Dietmar Eggemann wrote:
> 
> Is there anything else I should do for this patch?

This. Got it now, thanks!


* [tip:sched/core] sched/fair: Remove setting task's se->runnable_weight during PELT update
  2018-08-03 14:05 [PATCH v2] sched/fair: Remove setting task's se->runnable_weight during PELT update Dietmar Eggemann
  2018-09-10 23:49 ` Dietmar Eggemann
@ 2018-10-02 10:07 ` tip-bot for Dietmar Eggemann
  1 sibling, 0 replies; 4+ messages in thread
From: tip-bot for Dietmar Eggemann @ 2018-10-02 10:07 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: joelaf, patrick.bellasi, dietmar.eggemann, mingo, hpa,
	quentin.perret, vincent.guittot, linux-kernel, tglx, peterz,
	morten.rasmussen, torvalds

Commit-ID:  4a465e3ebbc8004ce4f7f08f6022ee8315a94edf
Gitweb:     https://git.kernel.org/tip/4a465e3ebbc8004ce4f7f08f6022ee8315a94edf
Author:     Dietmar Eggemann <dietmar.eggemann@arm.com>
AuthorDate: Fri, 3 Aug 2018 15:05:38 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 2 Oct 2018 09:45:03 +0200

sched/fair: Remove setting task's se->runnable_weight during PELT update

A CFS (SCHED_OTHER, SCHED_BATCH or SCHED_IDLE policy) task's
se->runnable_weight must always be in sync with its se->load.weight.

se->runnable_weight is set to se->load.weight when the task is
forked (init_entity_runnable_average()) or reniced (reweight_entity()).

There are two cases in set_load_weight() which, since they currently only
set se->load.weight, can lead to a situation in which se->load.weight
differs from se->runnable_weight for a CFS task:

(1) A task switches to SCHED_IDLE.

(2) A SCHED_FIFO, SCHED_RR or SCHED_DEADLINE task that has been reniced
    (during which only its static priority gets set) switches to
    SCHED_OTHER or SCHED_BATCH.

Set se->runnable_weight to se->load.weight in these two cases to prevent
this. This eliminates the need to explicitly set it to se->load.weight
during PELT updates in the CFS scheduler fastpath.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Perret <quentin.perret@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Link: http://lkml.kernel.org/r/20180803140538.1178-1-dietmar.eggemann@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c | 2 ++
 kernel/sched/pelt.c | 6 ------
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f2caf1bae4a3..56b3c1781276 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -700,6 +700,7 @@ static void set_load_weight(struct task_struct *p, bool update_load)
 	if (idle_policy(p->policy)) {
 		load->weight = scale_load(WEIGHT_IDLEPRIO);
 		load->inv_weight = WMULT_IDLEPRIO;
+		p->se.runnable_weight = load->weight;
 		return;
 	}
 
@@ -712,6 +713,7 @@ static void set_load_weight(struct task_struct *p, bool update_load)
 	} else {
 		load->weight = scale_load(sched_prio_to_weight[prio]);
 		load->inv_weight = sched_prio_to_wmult[prio];
+		p->se.runnable_weight = load->weight;
 	}
 }
 
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 48a126486435..90fb5bc12ad4 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -269,9 +269,6 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runna
 
 int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se)
 {
-	if (entity_is_task(se))
-		se->runnable_weight = se->load.weight;
-
 	if (___update_load_sum(now, cpu, &se->avg, 0, 0, 0)) {
 		___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
 		return 1;
@@ -282,9 +279,6 @@ int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se)
 
 int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	if (entity_is_task(se))
-		se->runnable_weight = se->load.weight;
-
 	if (___update_load_sum(now, cpu, &se->avg, !!se->on_rq, !!se->on_rq,
 				cfs_rq->curr == se)) {
 

