linux-kernel.vger.kernel.org archive mirror
* [PATCH] sched/fair: Fix overutilized update in enqueue_task_fair()
@ 2020-11-12 11:12 Quentin Perret
  2020-11-12 12:17 ` Vincent Guittot
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Quentin Perret @ 2020-11-12 11:12 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Morten Rasmussen, Quentin Perret,
	open list:SCHEDULER
  Cc: kernel-team, Quentin Perret, Rick Yiu

enqueue_task_fair() attempts to skip the overutilized update for new
tasks as their util_avg is not accurate yet. However, the flag we check
to do so is overwritten earlier on in the function, which makes the
condition pretty much a nop.

Fix this by saving the flag early on.

Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point
indicator")
Reported-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Quentin Perret <qperret@google.com>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e38378c..f3ee60b92718 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5477,6 +5477,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
 	int idle_h_nr_running = task_has_idle_policy(p);
+	int task_new = !(flags & ENQUEUE_WAKEUP);
 
 	/*
 	 * The code below (indirectly) updates schedutil which looks at
@@ -5549,7 +5550,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	 * into account, but that is not straightforward to implement,
 	 * and the following generally works well enough in practice.
 	 */
-	if (flags & ENQUEUE_WAKEUP)
+	if (!task_new)
 		update_overutilized_status(rq);
 
 enqueue_throttle:
-- 
2.29.2.222.g5d2a92d10f8-goog
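
To see why the old check was effectively always true: enqueue_task_fair()
rewrites 'flags' to ENQUEUE_WAKEUP inside its per-entity enqueue loop, so by
the time the overutilized check runs the wakeup bit is always set. Below is a
minimal, self-contained userspace C demo of that clobbered-flag pattern -- a
sketch only, not the kernel code; the names mirror the kernel ones purely for
illustration:

/*
 * Standalone demo of the bug pattern; NOT kernel code. It mimics how
 * enqueue_task_fair() clobbers 'flags' in its enqueue loop before the
 * overutilized check, and how saving the flag early fixes the check.
 */
#include <stdio.h>

#define ENQUEUE_WAKEUP 0x01

static void enqueue(int flags, int nr_entities)
{
	int task_new = !(flags & ENQUEUE_WAKEUP);	/* saved before the loop: the fix */
	int i;

	for (i = 0; i < nr_entities; i++) {
		/* ... per-entity enqueue work ... */
		flags = ENQUEUE_WAKEUP;			/* clobbers the caller's flags */
	}

	if (flags & ENQUEUE_WAKEUP)			/* old check: always true once the loop ran */
		printf("old check:   update_overutilized_status()\n");

	if (!task_new)					/* fixed check: skips new tasks */
		printf("fixed check: update_overutilized_status()\n");
}

int main(void)
{
	printf("-- new task (no ENQUEUE_WAKEUP) --\n");
	enqueue(0, 1);
	printf("-- woken task (ENQUEUE_WAKEUP) --\n");
	enqueue(ENQUEUE_WAKEUP, 1);
	return 0;
}

Built with a plain "cc demo.c", the new-task case shows the old check would
still have triggered the update, while the saved task_new flag skips it.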



* Re: [PATCH] sched/fair: Fix overutilized update in enqueue_task_fair()
  2020-11-12 11:12 [PATCH] sched/fair: Fix overutilized update in enqueue_task_fair() Quentin Perret
@ 2020-11-12 12:17 ` Vincent Guittot
  2020-11-12 12:29 ` Valentin Schneider
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Vincent Guittot @ 2020-11-12 12:17 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Morten Rasmussen, Quentin Perret,
	open list:SCHEDULER, Android Kernel Team, Rick Yiu

On Thu, 12 Nov 2020 at 12:12, Quentin Perret <qperret@google.com> wrote:
>
> enqueue_task_fair() attempts to skip the overutilized update for new
> tasks as their util_avg is not accurate yet. However, the flag we check
> to do so is overwritten earlier on in the function, which makes the
> condition pretty much a nop.
>
> Fix this by saving the flag early on.
>
> Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point
> indicator")
> Reported-by: Rick Yiu <rickyiu@google.com>
> Signed-off-by: Quentin Perret <qperret@google.com>

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e38378c..f3ee60b92718 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5477,6 +5477,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>         struct cfs_rq *cfs_rq;
>         struct sched_entity *se = &p->se;
>         int idle_h_nr_running = task_has_idle_policy(p);
> +       int task_new = !(flags & ENQUEUE_WAKEUP);
>
>         /*
>          * The code below (indirectly) updates schedutil which looks at
> @@ -5549,7 +5550,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>          * into account, but that is not straightforward to implement,
>          * and the following generally works well enough in practice.
>          */
> -       if (flags & ENQUEUE_WAKEUP)
> +       if (!task_new)
>                 update_overutilized_status(rq);
>
>  enqueue_throttle:
> --
> 2.29.2.222.g5d2a92d10f8-goog
>


* Re: [PATCH] sched/fair: Fix overutilized update in enqueue_task_fair()
  2020-11-12 11:12 [PATCH] sched/fair: Fix overutilized update in enqueue_task_fair() Quentin Perret
  2020-11-12 12:17 ` Vincent Guittot
@ 2020-11-12 12:29 ` Valentin Schneider
  2020-11-12 12:38   ` Quentin Perret
  2020-11-13  8:18 ` Peter Zijlstra
  2020-11-19  9:55 ` [tip: sched/urgent] " tip-bot2 for Quentin Perret
  3 siblings, 1 reply; 7+ messages in thread
From: Valentin Schneider @ 2020-11-12 12:29 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Morten Rasmussen, Quentin Perret,
	open list:SCHEDULER, kernel-team, Rick Yiu


Hi Quentin,

On 12/11/20 11:12, Quentin Perret wrote:
> enqueue_task_fair() attempts to skip the overutilized update for new
> tasks as their util_avg is not accurate yet. However, the flag we check
> to do so is overwritten earlier on in the function, which makes the
> condition pretty much a nop.
>
> Fix this by saving the flag early on.
>
> Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point
> indicator")
> Reported-by: Rick Yiu <rickyiu@google.com>
> Signed-off-by: Quentin Perret <qperret@google.com>

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

Alternatively: how much does not updating the overutilized status here help
us? The next tick will unconditionally update it, which for arm64 is
anywhere in the next ]0, 4]ms. That "fake" fork-time util_avg should already
be accounted in the rq util_avg, and even if the new task was running the
entire time, 4ms doesn't buy us much decay.
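
For scale, assuming the default PELT half-life of 32ms, the most that window
can shave off is:

	decay over 4ms = 0.5^(4/32) ~= 0.917

i.e. util_avg sheds less than ~9% before the tick catches it anyway.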

> ---
>  kernel/sched/fair.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e38378c..f3ee60b92718 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5477,6 +5477,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>       struct cfs_rq *cfs_rq;
>       struct sched_entity *se = &p->se;
>       int idle_h_nr_running = task_has_idle_policy(p);
> +	int task_new = !(flags & ENQUEUE_WAKEUP);
>
>       /*
>        * The code below (indirectly) updates schedutil which looks at
> @@ -5549,7 +5550,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>        * into account, but that is not straightforward to implement,
>        * and the following generally works well enough in practice.
>        */
> -	if (flags & ENQUEUE_WAKEUP)
> +	if (!task_new)
>               update_overutilized_status(rq);
>
>  enqueue_throttle:


* Re: [PATCH] sched/fair: Fix overutilized update in enqueue_task_fair()
  2020-11-12 12:29 ` Valentin Schneider
@ 2020-11-12 12:38   ` Quentin Perret
  2020-11-12 12:52     ` Valentin Schneider
  0 siblings, 1 reply; 7+ messages in thread
From: Quentin Perret @ 2020-11-12 12:38 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Morten Rasmussen, Quentin Perret,
	open list:SCHEDULER, kernel-team, Rick Yiu

On Thursday 12 Nov 2020 at 12:29:59 (+0000), Valentin Schneider wrote:
> Alternatively: how much does not updating the overutilized status here help
> us? The next tick will unconditionally update it, which for arm64 is
> anywhere in the next ]0, 4]ms. That "fake" fork-time util_avg should already
> be accounted in the rq util_avg, and even if the new task was running the
> entire time, 4ms doesn't buy us much decay.

Yes, this is arguably a dodgy hack, which will not save us in a number
of cases. The only situation where this helps is for short-lived tasks
that will run only once. And this is a sadly common programming pattern.

So yeah, this is not the prettiest thing in the world, but it doesn't
cost us much and helps some real-world workloads, so ...

Thanks,
Quentin


* Re: [PATCH] sched/fair: Fix overutilized update in enqueue_task_fair()
  2020-11-12 12:38   ` Quentin Perret
@ 2020-11-12 12:52     ` Valentin Schneider
  0 siblings, 0 replies; 7+ messages in thread
From: Valentin Schneider @ 2020-11-12 12:52 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Morten Rasmussen, Quentin Perret,
	open list:SCHEDULER, kernel-team, Rick Yiu


On 12/11/20 12:38, Quentin Perret wrote:
> On Thursday 12 Nov 2020 at 12:29:59 (+0000), Valentin Schneider wrote:
>> Alternatively: how much does not updating the overutilized status here help
>> us? The next tick will unconditionally update it, which for arm64 is
>> anywhere in the next ]0, 4]ms. That "fake" fork-time util_avg should already
>> be accounted in the rq util_avg, and even if the new task was running the
>> entire time, 4ms doesn't buy us much decay.
>
> Yes, this is arguably a dodgy hack, which will not save us in a number
> of cases. The only situation where this helps is for short-lived tasks
> that will run only once. And this is a sadly common programming pattern.
>
> So yeah, this is not the prettiest thing in the world, but it doesn't
> cost us much and helps some real-world workloads, so ...
>

Aye aye

> Thanks,
> Quentin


* Re: [PATCH] sched/fair: Fix overutilized update in enqueue_task_fair()
  2020-11-12 11:12 [PATCH] sched/fair: Fix overutilized update in enqueue_task_fair() Quentin Perret
  2020-11-12 12:17 ` Vincent Guittot
  2020-11-12 12:29 ` Valentin Schneider
@ 2020-11-13  8:18 ` Peter Zijlstra
  2020-11-19  9:55 ` [tip: sched/urgent] " tip-bot2 for Quentin Perret
  3 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2020-11-13  8:18 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Morten Rasmussen, Quentin Perret,
	open list:SCHEDULER, kernel-team, Rick Yiu

On Thu, Nov 12, 2020 at 11:12:01AM +0000, Quentin Perret wrote:
> enqueue_task_fair() attempts to skip the overutilized update for new
> tasks as their util_avg is not accurate yet. However, the flag we check
> to do so is overwritten earlier on in the function, which makes the
> condition pretty much a nop.
> 
> Fix this by saving the flag early on.
> 
> Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point
> indicator")

Fix that wrapping for you :-)

> Reported-by: Rick Yiu <rickyiu@google.com>
> Signed-off-by: Quentin Perret <qperret@google.com>

Thanks!


* [tip: sched/urgent] sched/fair: Fix overutilized update in enqueue_task_fair()
  2020-11-12 11:12 [PATCH] sched/fair: Fix overutilized update in enqueue_task_fair() Quentin Perret
                   ` (2 preceding siblings ...)
  2020-11-13  8:18 ` Peter Zijlstra
@ 2020-11-19  9:55 ` tip-bot2 for Quentin Perret
  3 siblings, 0 replies; 7+ messages in thread
From: tip-bot2 for Quentin Perret @ 2020-11-19  9:55 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Rick Yiu, Quentin Perret, Peter Zijlstra (Intel),
	Vincent Guittot, Valentin Schneider, x86, linux-kernel

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     8e1ac4299a6e8726de42310d9c1379f188140c71
Gitweb:        https://git.kernel.org/tip/8e1ac4299a6e8726de42310d9c1379f188140c71
Author:        Quentin Perret <qperret@google.com>
AuthorDate:    Thu, 12 Nov 2020 11:12:01 +00:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 17 Nov 2020 13:15:27 +01:00

sched/fair: Fix overutilized update in enqueue_task_fair()

enqueue_task_fair() attempts to skip the overutilized update for new
tasks as their util_avg is not accurate yet. However, the flag we check
to do so is overwritten earlier on in the function, which makes the
condition pretty much a nop.

Fix this by saving the flag early on.

Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
Reported-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20201112111201.2081902-1-qperret@google.com
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8e563cf..56a8ca9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5477,6 +5477,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
 	int idle_h_nr_running = task_has_idle_policy(p);
+	int task_new = !(flags & ENQUEUE_WAKEUP);
 
 	/*
 	 * The code below (indirectly) updates schedutil which looks at
@@ -5549,7 +5550,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	 * into account, but that is not straightforward to implement,
 	 * and the following generally works well enough in practice.
 	 */
-	if (flags & ENQUEUE_WAKEUP)
+	if (!task_new)
 		update_overutilized_status(rq);
 
 enqueue_throttle:


