* [PATCH v2 0/2] Fix RT/DL utilization during policy change
@ 2021-06-21 10:37 Vincent Donnefort
  2021-06-21 10:37 ` [PATCH v2 1/2] sched/rt: Fix RT utilization tracking " Vincent Donnefort
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Vincent Donnefort @ 2021-06-21 10:37 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli
  Cc: vincent.guittot, dietmar.eggemann, valentin.schneider, rostedt,
	linux-kernel, Vincent Donnefort

changelog since V1:
  * Simplify and merge "if" conditions for RT's switched_to
  * Collect Reviewed-by

Vincent Donnefort (2):
  sched/rt: Fix RT utilization tracking during policy change
  sched/rt: Fix Deadline utilization tracking during policy change

 kernel/sched/deadline.c |  2 ++
 kernel/sched/rt.c       | 17 ++++++++++++-----
 2 files changed, 14 insertions(+), 5 deletions(-)

-- 
2.7.4



* [PATCH v2 1/2] sched/rt: Fix RT utilization tracking during policy change
  2021-06-21 10:37 [PATCH v2 0/2] Fix RT/DL utilization during policy change Vincent Donnefort
@ 2021-06-21 10:37 ` Vincent Donnefort
  2021-06-21 12:03   ` Vincent Guittot
  2021-06-23  8:19   ` [tip: sched/core] " tip-bot2 for Vincent Donnefort
  2021-06-21 10:37 ` [PATCH v2 2/2] sched/rt: Fix Deadline " Vincent Donnefort
  2021-06-22 13:16 ` [PATCH v2 0/2] Fix RT/DL utilization " Peter Zijlstra
  2 siblings, 2 replies; 7+ messages in thread
From: Vincent Donnefort @ 2021-06-21 10:37 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli
  Cc: vincent.guittot, dietmar.eggemann, valentin.schneider, rostedt,
	linux-kernel, Vincent Donnefort

RT keeps track of the utilization on a per-rq basis with the structure
avg_rt. This utilization is updated during task_tick_rt(),
put_prev_task_rt() and set_next_task_rt(). However, when the currently
running task changes its policy, set_next_task_rt(), which would usually
take care of updating the utilization when the rq starts running RT tasks,
will not see such a change, leaving the avg_rt structure outdated. When
that very same task is dequeued later, put_prev_task_rt() will then
update the utilization based on a wrong last_update_time, leading to a
huge spike in the RT utilization signal.

The signal would eventually recover from this issue after a few ms: even
when no RT tasks are running, avg_rt is also updated in
__update_blocked_others(). But as the CPU capacity depends partly on
avg_rt, this issue nonetheless has a significant impact on the scheduler.

Fix this issue by ensuring a load update when a running task changes
its policy to RT.

Fixes: 371bf427 ("sched/rt: Add rt_rq utilization tracking")
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index a525447..3daf42a 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2341,13 +2341,20 @@ void __init init_sched_rt_class(void)
 static void switched_to_rt(struct rq *rq, struct task_struct *p)
 {
 	/*
-	 * If we are already running, then there's nothing
-	 * that needs to be done. But if we are not running
-	 * we may need to preempt the current running task.
-	 * If that current running task is also an RT task
+	 * If we are running, update the avg_rt tracking, as the running time
+	 * will now on be accounted into the latter.
+	 */
+	if (task_current(rq, p)) {
+		update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
+		return;
+	}
+
+	/*
+	 * If we are not running we may need to preempt the current
+	 * running task. If that current running task is also an RT task
 	 * then see if we can move to another run queue.
 	 */
-	if (task_on_rq_queued(p) && rq->curr != p) {
+	if (task_on_rq_queued(p)) {
 #ifdef CONFIG_SMP
 		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
 			rt_queue_push_tasks(rq);
-- 
2.7.4



* [PATCH v2 2/2] sched/rt: Fix Deadline utilization tracking during policy change
  2021-06-21 10:37 [PATCH v2 0/2] Fix RT/DL utilization during policy change Vincent Donnefort
  2021-06-21 10:37 ` [PATCH v2 1/2] sched/rt: Fix RT utilization tracking " Vincent Donnefort
@ 2021-06-21 10:37 ` Vincent Donnefort
  2021-06-23  8:19   ` [tip: sched/core] " tip-bot2 for Vincent Donnefort
  2021-06-22 13:16 ` [PATCH v2 0/2] Fix RT/DL utilization " Peter Zijlstra
  2 siblings, 1 reply; 7+ messages in thread
From: Vincent Donnefort @ 2021-06-21 10:37 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli
  Cc: vincent.guittot, dietmar.eggemann, valentin.schneider, rostedt,
	linux-kernel, Vincent Donnefort

DL keeps track of the utilization on a per-rq basis with the structure
avg_dl. This utilization is updated during task_tick_dl(),
put_prev_task_dl() and set_next_task_dl(). However, when the currently
running task changes its policy, set_next_task_dl(), which would usually
take care of updating the utilization when the rq starts running DL
tasks, will not see such a change, leaving the avg_dl structure outdated.
When that very same task is dequeued later, put_prev_task_dl() will then
update the utilization based on a wrong last_update_time, leading to a
huge spike in the DL utilization signal.

The signal would eventually recover from this issue after a few ms: even
when no DL tasks are running, avg_dl is also updated in
__update_blocked_others(). But as the CPU capacity depends partly on
avg_dl, this issue nonetheless has a significant impact on the scheduler.

Fix this issue by ensuring a load update when a running task changes
its policy to DL.

Fixes: 3727e0e ("sched/dl: Add dl_rq utilization tracking")
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 3829c5a..915227a 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2497,6 +2497,8 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
 			check_preempt_curr_dl(rq, p, 0);
 		else
 			resched_curr(rq);
+	} else {
+		update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
 	}
 }
 
-- 
2.7.4



* Re: [PATCH v2 1/2] sched/rt: Fix RT utilization tracking during policy change
  2021-06-21 10:37 ` [PATCH v2 1/2] sched/rt: Fix RT utilization tracking " Vincent Donnefort
@ 2021-06-21 12:03   ` Vincent Guittot
  2021-06-23  8:19   ` [tip: sched/core] " tip-bot2 for Vincent Donnefort
  1 sibling, 0 replies; 7+ messages in thread
From: Vincent Guittot @ 2021-06-21 12:03 UTC (permalink / raw)
  To: Vincent Donnefort
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Valentin Schneider, Steven Rostedt, linux-kernel

On Mon, 21 Jun 2021 at 12:39, Vincent Donnefort
<vincent.donnefort@arm.com> wrote:
>
> RT keeps track of the utilization on a per-rq basis with the structure
> avg_rt. This utilization is updated during task_tick_rt(),
> put_prev_task_rt() and set_next_task_rt(). However, when the currently
> running task changes its policy, set_next_task_rt(), which would usually
> take care of updating the utilization when the rq starts running RT tasks,
> will not see such a change, leaving the avg_rt structure outdated. When
> that very same task is dequeued later, put_prev_task_rt() will then
> update the utilization based on a wrong last_update_time, leading to a
> huge spike in the RT utilization signal.
>
> The signal would eventually recover from this issue after a few ms: even
> when no RT tasks are running, avg_rt is also updated in
> __update_blocked_others(). But as the CPU capacity depends partly on
> avg_rt, this issue nonetheless has a significant impact on the scheduler.
>
> Fix this issue by ensuring a load update when a running task changes
> its policy to RT.
>
> Fixes: 371bf427 ("sched/rt: Add rt_rq utilization tracking")
> Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

>
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index a525447..3daf42a 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -2341,13 +2341,20 @@ void __init init_sched_rt_class(void)
>  static void switched_to_rt(struct rq *rq, struct task_struct *p)
>  {
>         /*
> -        * If we are already running, then there's nothing
> -        * that needs to be done. But if we are not running
> -        * we may need to preempt the current running task.
> -        * If that current running task is also an RT task
> +        * If we are running, update the avg_rt tracking, as the running time
> +        * will now on be accounted into the latter.
> +        */
> +       if (task_current(rq, p)) {
> +               update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
> +               return;
> +       }
> +
> +       /*
> +        * If we are not running we may need to preempt the current
> +        * running task. If that current running task is also an RT task
>          * then see if we can move to another run queue.
>          */
> -       if (task_on_rq_queued(p) && rq->curr != p) {
> +       if (task_on_rq_queued(p)) {
>  #ifdef CONFIG_SMP
>                 if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
>                         rt_queue_push_tasks(rq);
> --
> 2.7.4
>


* Re: [PATCH v2 0/2] Fix RT/DL utilization during policy change
  2021-06-21 10:37 [PATCH v2 0/2] Fix RT/DL utilization during policy change Vincent Donnefort
  2021-06-21 10:37 ` [PATCH v2 1/2] sched/rt: Fix RT utilization tracking " Vincent Donnefort
  2021-06-21 10:37 ` [PATCH v2 2/2] sched/rt: Fix Deadline " Vincent Donnefort
@ 2021-06-22 13:16 ` Peter Zijlstra
  2 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2021-06-22 13:16 UTC (permalink / raw)
  To: Vincent Donnefort
  Cc: mingo, juri.lelli, vincent.guittot, dietmar.eggemann,
	valentin.schneider, rostedt, linux-kernel

On Mon, Jun 21, 2021 at 11:37:50AM +0100, Vincent Donnefort wrote:
> changelog since V1:
>   * Simplify and merge "if" conditions for RT's switched_to
>   * Collect Reviewed-by
> 
> Vincent Donnefort (2):
>   sched/rt: Fix RT utilization tracking during policy change
>   sched/rt: Fix Deadline utilization tracking during policy change
> 
>  kernel/sched/deadline.c |  2 ++
>  kernel/sched/rt.c       | 17 ++++++++++++-----
>  2 files changed, 14 insertions(+), 5 deletions(-)

Thanks!


* [tip: sched/core] sched/rt: Fix Deadline utilization tracking during policy change
  2021-06-21 10:37 ` [PATCH v2 2/2] sched/rt: Fix Deadline " Vincent Donnefort
@ 2021-06-23  8:19   ` tip-bot2 for Vincent Donnefort
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot2 for Vincent Donnefort @ 2021-06-23  8:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Donnefort, Peter Zijlstra (Intel),
	Vincent Guittot, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     d7d607096ae6d378b4e92d49946d22739c047d4c
Gitweb:        https://git.kernel.org/tip/d7d607096ae6d378b4e92d49946d22739c047d4c
Author:        Vincent Donnefort <vincent.donnefort@arm.com>
AuthorDate:    Mon, 21 Jun 2021 11:37:52 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 22 Jun 2021 16:41:59 +02:00

sched/rt: Fix Deadline utilization tracking during policy change

DL keeps track of the utilization on a per-rq basis with the structure
avg_dl. This utilization is updated during task_tick_dl(),
put_prev_task_dl() and set_next_task_dl(). However, when the currently
running task changes its policy, set_next_task_dl(), which would usually
take care of updating the utilization when the rq starts running DL
tasks, will not see such a change, leaving the avg_dl structure outdated.
When that very same task is dequeued later, put_prev_task_dl() will then
update the utilization based on a wrong last_update_time, leading to a
huge spike in the DL utilization signal.

The signal would eventually recover from this issue after a few ms: even
when no DL tasks are running, avg_dl is also updated in
__update_blocked_others(). But as the CPU capacity depends partly on
avg_dl, this issue nonetheless has a significant impact on the scheduler.

Fix this issue by ensuring a load update when a running task changes
its policy to DL.

Fixes: 3727e0e ("sched/dl: Add dl_rq utilization tracking")
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/1624271872-211872-3-git-send-email-vincent.donnefort@arm.com
---
 kernel/sched/deadline.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 22878cd..aaacd6c 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2497,6 +2497,8 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
 			check_preempt_curr_dl(rq, p, 0);
 		else
 			resched_curr(rq);
+	} else {
+		update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
 	}
 }
 


* [tip: sched/core] sched/rt: Fix RT utilization tracking during policy change
  2021-06-21 10:37 ` [PATCH v2 1/2] sched/rt: Fix RT utilization tracking " Vincent Donnefort
  2021-06-21 12:03   ` Vincent Guittot
@ 2021-06-23  8:19   ` tip-bot2 for Vincent Donnefort
  1 sibling, 0 replies; 7+ messages in thread
From: tip-bot2 for Vincent Donnefort @ 2021-06-23  8:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Donnefort, Peter Zijlstra (Intel),
	Vincent Guittot, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     fecfcbc288e9f4923f40fd23ca78a6acdc7fdf6c
Gitweb:        https://git.kernel.org/tip/fecfcbc288e9f4923f40fd23ca78a6acdc7fdf6c
Author:        Vincent Donnefort <vincent.donnefort@arm.com>
AuthorDate:    Mon, 21 Jun 2021 11:37:51 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 22 Jun 2021 16:41:59 +02:00

sched/rt: Fix RT utilization tracking during policy change

RT keeps track of the utilization on a per-rq basis with the structure
avg_rt. This utilization is updated during task_tick_rt(),
put_prev_task_rt() and set_next_task_rt(). However, when the currently
running task changes its policy, set_next_task_rt(), which would usually
take care of updating the utilization when the rq starts running RT tasks,
will not see such a change, leaving the avg_rt structure outdated. When
that very same task is dequeued later, put_prev_task_rt() will then
update the utilization based on a wrong last_update_time, leading to a
huge spike in the RT utilization signal.

The signal would eventually recover from this issue after a few ms: even
when no RT tasks are running, avg_rt is also updated in
__update_blocked_others(). But as the CPU capacity depends partly on
avg_rt, this issue nonetheless has a significant impact on the scheduler.

Fix this issue by ensuring a load update when a running task changes
its policy to RT.

Fixes: 371bf427 ("sched/rt: Add rt_rq utilization tracking")
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/1624271872-211872-2-git-send-email-vincent.donnefort@arm.com
---
 kernel/sched/rt.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index a525447..3daf42a 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2341,13 +2341,20 @@ void __init init_sched_rt_class(void)
 static void switched_to_rt(struct rq *rq, struct task_struct *p)
 {
 	/*
-	 * If we are already running, then there's nothing
-	 * that needs to be done. But if we are not running
-	 * we may need to preempt the current running task.
-	 * If that current running task is also an RT task
+	 * If we are running, update the avg_rt tracking, as the running time
+	 * will now on be accounted into the latter.
+	 */
+	if (task_current(rq, p)) {
+		update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
+		return;
+	}
+
+	/*
+	 * If we are not running we may need to preempt the current
+	 * running task. If that current running task is also an RT task
 	 * then see if we can move to another run queue.
 	 */
-	if (task_on_rq_queued(p) && rq->curr != p) {
+	if (task_on_rq_queued(p)) {
 #ifdef CONFIG_SMP
 		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
 			rt_queue_push_tasks(rq);


