[2/2] sched/rt: Fix Deadline utilization tracking during policy change

Message ID 1624023139-66147-2-git-send-email-vincent.donnefort@arm.com
State New, archived
Series
  • [1/2] sched/rt: Fix RT utilization tracking during policy change

Commit Message

Vincent Donnefort June 18, 2021, 1:32 p.m. UTC
DL keeps track of its utilization on a per-rq basis with the structure
avg_dl. This utilization is updated during task_tick_dl(),
put_prev_task_dl() and set_next_task_dl(). However, when the currently
running task changes its policy, set_next_task_dl(), which would usually
take care of updating the utilization when the rq starts running DL
tasks, will not see such a change, leaving the avg_dl structure outdated.
When that very same task is dequeued later, put_prev_task_dl() will then
update the utilization based on a wrong last_update_time, leading to a
huge spike in the DL utilization signal.

The signal would eventually recover from this issue after a few ms, as
avg_dl is also updated in __update_blocked_others() even when no DL
tasks are running. But since the CPU capacity partly depends on avg_dl,
this issue nonetheless has a significant impact on the scheduler.

Fix this issue by ensuring a load update when a running task changes
its policy to DL.

Fixes: 3727e0e ("sched/dl: Add dl_rq utilization tracking")
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
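
For context, a minimal user-space sketch of the scenario described above: a
CPU-bound task switches itself to SCHED_DEADLINE while it is the running task
on its CPU, so the policy change happens with the task still current. This is
illustrative only; the struct layout and raw syscall wrapper follow
sched_setattr(2), the runtime/deadline/period values are arbitrary, and
SCHED_DEADLINE admission needs CAP_SYS_NICE plus an affinity mask spanning
the whole root domain.

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE	6
#endif

/* Layout as documented in sched_setattr(2). */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	/* SCHED_DEADLINE parameters, in nanoseconds */
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

/* glibc provides no wrapper for this syscall, so call it directly. */
static int sched_setattr_raw(pid_t pid, const struct sched_attr *attr,
			     unsigned int flags)
{
	return syscall(SYS_sched_setattr, pid, attr, flags);
}

int main(void)
{
	struct sched_attr attr = {
		.size		= sizeof(attr),
		.sched_policy	= SCHED_DEADLINE,
		.sched_runtime	=  10 * 1000 * 1000,	/*  10 ms */
		.sched_deadline	= 100 * 1000 * 1000,	/* 100 ms */
		.sched_period	= 100 * 1000 * 1000,	/* 100 ms */
	};
	volatile unsigned long spin;

	/* Burn CPU as a fair task first, so we are current on this CPU. */
	for (spin = 0; spin < 200000000UL; spin++)
		;

	/* Policy change while running: without the fix, avg_dl stays stale. */
	if (sched_setattr_raw(0, &attr, 0)) {
		perror("sched_setattr");
		return 1;
	}

	/*
	 * Keep running as DL; the later dequeue updates avg_dl with a stale
	 * last_update_time, producing the utilization spike described above.
	 */
	for (spin = 0; spin < 200000000UL; spin++)
		;

	return 0;
}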

Comments

Vincent Guittot June 21, 2021, 8:40 a.m. UTC | #1
On Fri, 18 Jun 2021 at 15:32, Vincent Donnefort
<vincent.donnefort@arm.com> wrote:
>
> DL keeps track of its utilization on a per-rq basis with the structure
> avg_dl. This utilization is updated during task_tick_dl(),
> put_prev_task_dl() and set_next_task_dl(). However, when the currently
> running task changes its policy, set_next_task_dl(), which would usually
> take care of updating the utilization when the rq starts running DL
> tasks, will not see such a change, leaving the avg_dl structure outdated.
> When that very same task is dequeued later, put_prev_task_dl() will then
> update the utilization based on a wrong last_update_time, leading to a
> huge spike in the DL utilization signal.
>
> The signal would eventually recover from this issue after a few ms, as
> avg_dl is also updated in __update_blocked_others() even when no DL
> tasks are running. But since the CPU capacity partly depends on avg_dl,
> this issue nonetheless has a significant impact on the scheduler.
>
> Fix this issue by ensuring a load update when a running task changes
> its policy to DL.
>
> Fixes: 3727e0e ("sched/dl: Add dl_rq utilization tracking")
> Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>

>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 3829c5a..915227a 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -2497,6 +2497,8 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
>                         check_preempt_curr_dl(rq, p, 0);
>                 else
>                         resched_curr(rq);
> +       } else {
> +               update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
>         }
>  }
>
> --
> 2.7.4
>

Patch

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 3829c5a..915227a 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2497,6 +2497,8 @@  static void switched_to_dl(struct rq *rq, struct task_struct *p)
 			check_preempt_curr_dl(rq, p, 0);
 		else
 			resched_curr(rq);
+	} else {
+		update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
 	}
 }
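
For reference, a simplified sketch of how the new branch sits at the tail of
switched_to_dl() after this patch. The guarding condition and the elided
earlier handling are inferred from the hunk context and the commit message
(the else branch covers the rq->curr == p case), not quoted verbatim from
kernel/sched/deadline.c.

static void switched_to_dl(struct rq *rq, struct task_struct *p)
{
	/* ... earlier handling of not-queued tasks and bandwidth elided ... */

	if (rq->curr != p) {	/* inferred: p became DL but is not running */
		if (dl_task(rq->curr))
			check_preempt_curr_dl(rq, p, 0);
		else
			resched_curr(rq);
	} else {
		/*
		 * p is already running: set_next_task_dl() does not see this
		 * policy change, so refresh avg_dl here to keep
		 * last_update_time in sync before the next put_prev_task_dl().
		 */
		update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
	}
}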