* [PATCH v3 0/2] Improve schedutil integration for FAIR tasks
@ 2018-05-24 14:10 Patrick Bellasi
  2018-05-24 14:10 ` [PATCH v3 1/2] sched/cpufreq: always consider blocked FAIR utilization Patrick Bellasi
  2018-05-24 14:10 ` [PATCH v3 2/2] sched/fair: util_est: update before schedutil Patrick Bellasi
  0 siblings, 2 replies; 8+ messages in thread
From: Patrick Bellasi @ 2018-05-24 14:10 UTC (permalink / raw)
  To: linux-kernel, linux-pm
  Cc: Ingo Molnar, Peter Zijlstra, Rafael J . Wysocki, Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen, Juri Lelli,
	Joel Fernandes, Steve Muckle

Here is the (hopefully) final update of:

   https://lkml.org/lkml/2018/5/11/319
   20180511131509.16275-1-patrick.bellasi@arm.com

including only the first two patches of the original series, which have
already been reviewed and acked by Viresh and Vincent.

The last patch has been dropped from this series since, during the discussion,
we agreed that it does not completely fix the problem it was addressing and we
would like to explore a better and more complete solution. Thus, I'll follow up
with a separate, dedicated series.

Cheers,
Patrick

Changes in v3:
 - add "Tested-by" and "Acked-by" Vincent tags

Changes in v2:
 - improve comment in enqueue_task_fair() (Peter)
 - add "Fixes" tag
 - add "Acked-by" Viresh tag

Patrick Bellasi (2):
  sched/cpufreq: always consider blocked FAIR utilization
  sched/fair: util_est: update before schedutil

 kernel/sched/cpufreq_schedutil.c | 17 ++++++++---------
 kernel/sched/fair.c              |  9 ++++++++-
 2 files changed, 16 insertions(+), 10 deletions(-)

-- 
2.15.1

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v3 1/2] sched/cpufreq: always consider blocked FAIR utilization
  2018-05-24 14:10 [PATCH v3 0/2] Improve schedutil integration for FAIR tasks Patrick Bellasi
@ 2018-05-24 14:10 ` Patrick Bellasi
  2018-05-25  9:47   ` [tip:sched/core] sched/cpufreq: Modify aggregate utilization to always include " tip-bot for Patrick Bellasi
  2018-05-27  9:50   ` [PATCH v3 1/2] sched/cpufreq: always consider " Rafael J. Wysocki
  2018-05-24 14:10 ` [PATCH v3 2/2] sched/fair: util_est: update before schedutil Patrick Bellasi
  1 sibling, 2 replies; 8+ messages in thread
From: Patrick Bellasi @ 2018-05-24 14:10 UTC (permalink / raw)
  To: linux-kernel, linux-pm
  Cc: Ingo Molnar, Peter Zijlstra, Rafael J . Wysocki, Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen, Juri Lelli,
	Joel Fernandes, Steve Muckle

Since the refactoring introduced by:

   commit 8f111bc357aa ("cpufreq/schedutil: Rewrite CPUFREQ_RT support")

we aggregate FAIR utilization only if this class has runnable tasks.
This was mainly to avoid the risk of staying at a high frequency just
because the blocked utilization of a CPU was not being properly decayed
while the CPU was idle.

However, since:

   commit 31e77c93e432 ("sched/fair: Update blocked load when newly idle")

the FAIR blocked utilization is properly decayed for IDLE CPUs as well.

This allows us to use the FAIR blocked utilization as a safe mechanism
to gracefully reduce the frequency only if no FAIR tasks show up on a
CPU for a reasonable period of time.

Moreover, this also reduces the frequency drops of CPUs running periodic
tasks which, depending on the task periodicity and the time required
for a frequency switch, were increasing the chances of introducing
undesirable performance variations.

Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Joel Fernandes <joelaf@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org

---
Changes in v3:
 - add "Tested-by" and "Acked-by" Vincent tags

Changes in v2:
 - add "Acked-by" Viresh tag
---
 kernel/sched/cpufreq_schedutil.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index e13df951aca7..28592b62b1d5 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -183,22 +183,21 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
 static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
 {
 	struct rq *rq = cpu_rq(sg_cpu->cpu);
-	unsigned long util;
 
-	if (rq->rt.rt_nr_running) {
-		util = sg_cpu->max;
-	} else {
-		util = sg_cpu->util_dl;
-		if (rq->cfs.h_nr_running)
-			util += sg_cpu->util_cfs;
-	}
+	if (rq->rt.rt_nr_running)
+		return sg_cpu->max;
 
 	/*
+	 * Utilization required by DEADLINE must always be granted while, for
+	 * FAIR, we use blocked utilization of IDLE CPUs as a mechanism to
+	 * gracefully reduce the frequency when no tasks show up for longer
+	 * periods of time.
+	 *
 	 * Ideally we would like to set util_dl as min/guaranteed freq and
 	 * util_cfs + util_dl as requested freq. However, cpufreq is not yet
 	 * ready for such an interface. So, we only do the latter for now.
 	 */
-	return min(util, sg_cpu->max);
+	return min(sg_cpu->max, (sg_cpu->util_dl + sg_cpu->util_cfs));
 }
 
 static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)
-- 
2.15.1


* [PATCH v3 2/2] sched/fair: util_est: update before schedutil
  2018-05-24 14:10 [PATCH v3 0/2] Improve schedutil integration for FAIR tasks Patrick Bellasi
  2018-05-24 14:10 ` [PATCH v3 1/2] sched/cpufreq: always consider blocked FAIR utilization Patrick Bellasi
@ 2018-05-24 14:10 ` Patrick Bellasi
  2018-05-25  9:48   ` [tip:sched/core] sched/fair: Update util_est before updating schedutil tip-bot for Patrick Bellasi
  1 sibling, 1 reply; 8+ messages in thread
From: Patrick Bellasi @ 2018-05-24 14:10 UTC (permalink / raw)
  To: linux-kernel, linux-pm
  Cc: Ingo Molnar, Peter Zijlstra, Rafael J . Wysocki, Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen, Juri Lelli,
	Joel Fernandes, Steve Muckle

When a task is enqueued, the estimated utilization of a CPU is updated
to better support the selection of the required frequency.
However, schedutil is (implicitly) updated by update_load_avg() which
always happens before util_est_{en,de}queue(), thus potentially
introducing a latency between estimated utilization updates and
frequency selections.

Let's update util_est at the beginning of enqueue_task_fair(), which
ensures that all schedutil updates see the most up-to-date estimated
utilization value for a CPU.

Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Fixes: 7f65ea42eb00 ("sched/fair: Add util_est on top of PELT")

---
Changes in v3:
 - add "Acked-by" Vincent tags

Changes in v2:
 - improve comment in enqueue_task_fair() (Peter)
 - add "Fixes" tag
 - add "Acked-by" Viresh tag
---
 kernel/sched/fair.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 748cb054fefd..e497c05aab7f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5385,6 +5385,14 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
 
+	/*
+	 * The code below (indirectly) updates schedutil which looks at
+	 * the cfs_rq utilization to select a frequency.
+	 * Let's add the task's estimated utilization to the cfs_rq's
+	 * estimated utilization, before we update schedutil.
+	 */
+	util_est_enqueue(&rq->cfs, p);
+
 	/*
 	 * If in_iowait is set, the code below may not trigger any cpufreq
 	 * utilization updates, so do it here explicitly with the IOWAIT flag
@@ -5426,7 +5434,6 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!se)
 		add_nr_running(rq, 1);
 
-	util_est_enqueue(&rq->cfs, p);
 	hrtick_update(rq);
 }
 
-- 
2.15.1


* [tip:sched/core] sched/cpufreq: Modify aggregate utilization to always include blocked FAIR utilization
  2018-05-24 14:10 ` [PATCH v3 1/2] sched/cpufreq: always consider blocked FAIR utilization Patrick Bellasi
@ 2018-05-25  9:47   ` tip-bot for Patrick Bellasi
  2018-05-27  9:50   ` [PATCH v3 1/2] sched/cpufreq: always consider " Rafael J. Wysocki
  1 sibling, 0 replies; 8+ messages in thread
From: tip-bot for Patrick Bellasi @ 2018-05-25  9:47 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: joelaf, smuckle, juri.lelli, viresh.kumar, dietmar.eggemann,
	peterz, rafael.j.wysocki, tglx, vincent.guittot, linux-kernel,
	patrick.bellasi, torvalds, morten.rasmussen, hpa, mingo

Commit-ID:  8ecf04e11283a28ca88b8b8049ac93c3a99fcd2c
Gitweb:     https://git.kernel.org/tip/8ecf04e11283a28ca88b8b8049ac93c3a99fcd2c
Author:     Patrick Bellasi <patrick.bellasi@arm.com>
AuthorDate: Thu, 24 May 2018 15:10:22 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 25 May 2018 08:04:52 +0200

sched/cpufreq: Modify aggregate utilization to always include blocked FAIR utilization

Since the refactoring introduced by:

   commit 8f111bc357aa ("cpufreq/schedutil: Rewrite CPUFREQ_RT support")

we aggregate FAIR utilization only if this class has runnable tasks.

This was mainly to avoid the risk of staying at a high frequency just
because the blocked utilization of a CPU was not being properly decayed
while the CPU was idle.

However, since:

   commit 31e77c93e432 ("sched/fair: Update blocked load when newly idle")

the FAIR blocked utilization is properly decayed for IDLE CPUs as well.

This allows us to use the FAIR blocked utilization as a safe mechanism
to gracefully reduce the frequency only if no FAIR tasks show up on a
CPU for a reasonable period of time.

Moreover, this also reduces the frequency drops of CPUs running periodic
tasks which, depending on the task periodicity and the time required
for a frequency switch, were increasing the chances of introducing
undesirable performance variations.

Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J . Wysocki <rafael.j.wysocki@intel.com>
Cc: Steve Muckle <smuckle@google.com>
Link: http://lkml.kernel.org/r/20180524141023.13765-2-patrick.bellasi@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/cpufreq_schedutil.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index e13df951aca7..28592b62b1d5 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -183,22 +183,21 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
 static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
 {
 	struct rq *rq = cpu_rq(sg_cpu->cpu);
-	unsigned long util;
 
-	if (rq->rt.rt_nr_running) {
-		util = sg_cpu->max;
-	} else {
-		util = sg_cpu->util_dl;
-		if (rq->cfs.h_nr_running)
-			util += sg_cpu->util_cfs;
-	}
+	if (rq->rt.rt_nr_running)
+		return sg_cpu->max;
 
 	/*
+	 * Utilization required by DEADLINE must always be granted while, for
+	 * FAIR, we use blocked utilization of IDLE CPUs as a mechanism to
+	 * gracefully reduce the frequency when no tasks show up for longer
+	 * periods of time.
+	 *
 	 * Ideally we would like to set util_dl as min/guaranteed freq and
 	 * util_cfs + util_dl as requested freq. However, cpufreq is not yet
 	 * ready for such an interface. So, we only do the latter for now.
 	 */
-	return min(util, sg_cpu->max);
+	return min(sg_cpu->max, (sg_cpu->util_dl + sg_cpu->util_cfs));
 }
 
 static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)


* [tip:sched/core] sched/fair: Update util_est before updating schedutil
  2018-05-24 14:10 ` [PATCH v3 2/2] sched/fair: util_est: update before schedutil Patrick Bellasi
@ 2018-05-25  9:48   ` tip-bot for Patrick Bellasi
  0 siblings, 0 replies; 8+ messages in thread
From: tip-bot for Patrick Bellasi @ 2018-05-25  9:48 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: smuckle, viresh.kumar, linux-kernel, rafael.j.wysocki, joelaf,
	torvalds, morten.rasmussen, dietmar.eggemann, mingo, tglx,
	peterz, vincent.guittot, juri.lelli, patrick.bellasi, hpa

Commit-ID:  2539fc82aa9b07d968cf9ba1ffeec3e0416ac721
Gitweb:     https://git.kernel.org/tip/2539fc82aa9b07d968cf9ba1ffeec3e0416ac721
Author:     Patrick Bellasi <patrick.bellasi@arm.com>
AuthorDate: Thu, 24 May 2018 15:10:23 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 25 May 2018 08:04:56 +0200

sched/fair: Update util_est before updating schedutil

When a task is enqueued, the estimated utilization of a CPU is updated
to better support the selection of the required frequency.

However, schedutil is (implicitly) updated by update_load_avg() which
always happens before util_est_{en,de}queue(), thus potentially
introducing a latency between estimated utilization updates and
frequency selections.

Let's update util_est at the beginning of enqueue_task_fair(), which
ensures that all schedutil updates see the most up-to-date estimated
utilization value for a CPU.

Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J . Wysocki <rafael.j.wysocki@intel.com>
Cc: Steve Muckle <smuckle@google.com>
Fixes: 7f65ea42eb00 ("sched/fair: Add util_est on top of PELT")
Link: http://lkml.kernel.org/r/20180524141023.13765-3-patrick.bellasi@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 748cb054fefd..e497c05aab7f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5385,6 +5385,14 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
 
+	/*
+	 * The code below (indirectly) updates schedutil which looks at
+	 * the cfs_rq utilization to select a frequency.
+	 * Let's add the task's estimated utilization to the cfs_rq's
+	 * estimated utilization, before we update schedutil.
+	 */
+	util_est_enqueue(&rq->cfs, p);
+
 	/*
 	 * If in_iowait is set, the code below may not trigger any cpufreq
 	 * utilization updates, so do it here explicitly with the IOWAIT flag
@@ -5426,7 +5434,6 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!se)
 		add_nr_running(rq, 1);
 
-	util_est_enqueue(&rq->cfs, p);
 	hrtick_update(rq);
 }
 


* Re: [PATCH v3 1/2] sched/cpufreq: always consider blocked FAIR utilization
  2018-05-24 14:10 ` [PATCH v3 1/2] sched/cpufreq: always consider blocked FAIR utilization Patrick Bellasi
  2018-05-25  9:47   ` [tip:sched/core] sched/cpufreq: Modify aggregate utilization to always include " tip-bot for Patrick Bellasi
@ 2018-05-27  9:50   ` Rafael J. Wysocki
  2018-05-30 16:50     ` Patrick Bellasi
  1 sibling, 1 reply; 8+ messages in thread
From: Rafael J. Wysocki @ 2018-05-27  9:50 UTC (permalink / raw)
  To: Patrick Bellasi
  Cc: Linux Kernel Mailing List, Linux PM, Ingo Molnar, Peter Zijlstra,
	Rafael J . Wysocki, Viresh Kumar, Vincent Guittot,
	Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Joel Fernandes,
	Steve Muckle

On Thu, May 24, 2018 at 4:10 PM, Patrick Bellasi
<patrick.bellasi@arm.com> wrote:
> Since the refactoring introduced by:
>
>    commit 8f111bc357aa ("cpufreq/schedutil: Rewrite CPUFREQ_RT support")
>
> we aggregate FAIR utilization only if this class has runnable tasks.
> This was mainly to avoid the risk of staying at a high frequency just
> because the blocked utilization of a CPU was not being properly decayed
> while the CPU was idle.
>
> However, since:
>
>    commit 31e77c93e432 ("sched/fair: Update blocked load when newly idle")
>
> the FAIR blocked utilization is properly decayed for IDLE CPUs as well.
>
> This allows us to use the FAIR blocked utilization as a safe mechanism
> to gracefully reduce the frequency only if no FAIR tasks show up on a
> CPU for a reasonable period of time.
>
> Moreover, this also reduces the frequency drops of CPUs running periodic
> tasks which, depending on the task periodicity and the time required
> for a frequency switch, were increasing the chances of introducing
> undesirable performance variations.
>
> Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
> Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
> Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
> Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
> Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Viresh Kumar <viresh.kumar@linaro.org>
> Cc: Joel Fernandes <joelaf@google.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-pm@vger.kernel.org

Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Or please let me know if you want me to apply this one.

> ---
> Changes in v3:
>  - add "Tested-by" and "Acked-by" Vincent tags
>
> Changes in v2:
>  - add "Acked-by" Viresh tag
> ---
>  kernel/sched/cpufreq_schedutil.c | 17 ++++++++---------
>  1 file changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index e13df951aca7..28592b62b1d5 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -183,22 +183,21 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
>  static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
>  {
>         struct rq *rq = cpu_rq(sg_cpu->cpu);
> -       unsigned long util;
>
> -       if (rq->rt.rt_nr_running) {
> -               util = sg_cpu->max;
> -       } else {
> -               util = sg_cpu->util_dl;
> -               if (rq->cfs.h_nr_running)
> -                       util += sg_cpu->util_cfs;
> -       }
> +       if (rq->rt.rt_nr_running)
> +               return sg_cpu->max;
>
>         /*
> +        * Utilization required by DEADLINE must always be granted while, for
> +        * FAIR, we use blocked utilization of IDLE CPUs as a mechanism to
> +        * gracefully reduce the frequency when no tasks show up for longer
> +        * periods of time.
> +        *
>          * Ideally we would like to set util_dl as min/guaranteed freq and
>          * util_cfs + util_dl as requested freq. However, cpufreq is not yet
>          * ready for such an interface. So, we only do the latter for now.
>          */
> -       return min(util, sg_cpu->max);
> +       return min(sg_cpu->max, (sg_cpu->util_dl + sg_cpu->util_cfs));
>  }
>
>  static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)
> --
> 2.15.1
>


* Re: [PATCH v3 1/2] sched/cpufreq: always consider blocked FAIR utilization
  2018-05-27  9:50   ` [PATCH v3 1/2] sched/cpufreq: always consider " Rafael J. Wysocki
@ 2018-05-30 16:50     ` Patrick Bellasi
  2018-05-31 10:46       ` Rafael J. Wysocki
  0 siblings, 1 reply; 8+ messages in thread
From: Patrick Bellasi @ 2018-05-30 16:50 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux Kernel Mailing List, Linux PM, Ingo Molnar, Peter Zijlstra,
	Rafael J . Wysocki, Viresh Kumar, Vincent Guittot,
	Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Joel Fernandes,
	Steve Muckle

On 27-May 11:50, Rafael J. Wysocki wrote:
> On Thu, May 24, 2018 at 4:10 PM, Patrick Bellasi
> <patrick.bellasi@arm.com> wrote:
> > Since the refactoring introduced by:
> >
> >    commit 8f111bc357aa ("cpufreq/schedutil: Rewrite CPUFREQ_RT support")
> >
> > we aggregate FAIR utilization only if this class has runnable tasks.
> > This was mainly to avoid the risk of staying at a high frequency just
> > because the blocked utilization of a CPU was not being properly decayed
> > while the CPU was idle.
> >
> > However, since:
> >
> >    commit 31e77c93e432 ("sched/fair: Update blocked load when newly idle")
> >
> > the FAIR blocked utilization is properly decayed for IDLE CPUs as well.
> >
> > This allows us to use the FAIR blocked utilization as a safe mechanism
> > to gracefully reduce the frequency only if no FAIR tasks show up on a
> > CPU for a reasonable period of time.
> >
> > Moreover, this also reduces the frequency drops of CPUs running periodic
> > tasks which, depending on the task periodicity and the time required
> > for a frequency switch, were increasing the chances of introducing
> > undesirable performance variations.
> >
> > Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
> > Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
> > Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
> > Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
> > Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > Cc: Vincent Guittot <vincent.guittot@linaro.org>
> > Cc: Viresh Kumar <viresh.kumar@linaro.org>
> > Cc: Joel Fernandes <joelaf@google.com>
> > Cc: linux-kernel@vger.kernel.org
> > Cc: linux-pm@vger.kernel.org
> 
> Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> Or please let me know if you want me to apply this one.

Hi Rafael, it seems this patch has already been applied in tip/sched/core.
However, it is missing your tag above. :/

Dunno if I can do anything about that.

> > ---
> > Changes in v3:
> >  - add "Tested-by" and "Acked-by" Vincent tags
> >
> > Changes in v2:
> >  - add "Acked-by" Viresh tag
> > ---
> >  kernel/sched/cpufreq_schedutil.c | 17 ++++++++---------
> >  1 file changed, 8 insertions(+), 9 deletions(-)
> >
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index e13df951aca7..28592b62b1d5 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -183,22 +183,21 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
> >  static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
> >  {
> >         struct rq *rq = cpu_rq(sg_cpu->cpu);
> > -       unsigned long util;
> >
> > -       if (rq->rt.rt_nr_running) {
> > -               util = sg_cpu->max;
> > -       } else {
> > -               util = sg_cpu->util_dl;
> > -               if (rq->cfs.h_nr_running)
> > -                       util += sg_cpu->util_cfs;
> > -       }
> > +       if (rq->rt.rt_nr_running)
> > +               return sg_cpu->max;
> >
> >         /*
> > +        * Utilization required by DEADLINE must always be granted while, for
> > +        * FAIR, we use blocked utilization of IDLE CPUs as a mechanism to
> > +        * gracefully reduce the frequency when no tasks show up for longer
> > +        * periods of time.
> > +        *
> >          * Ideally we would like to set util_dl as min/guaranteed freq and
> >          * util_cfs + util_dl as requested freq. However, cpufreq is not yet
> >          * ready for such an interface. So, we only do the latter for now.
> >          */
> > -       return min(util, sg_cpu->max);
> > +       return min(sg_cpu->max, (sg_cpu->util_dl + sg_cpu->util_cfs));
> >  }
> >
> >  static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)
> > --
> > 2.15.1
> >

-- 
#include <best/regards.h>

Patrick Bellasi


* Re: [PATCH v3 1/2] sched/cpufreq: always consider blocked FAIR utilization
  2018-05-30 16:50     ` Patrick Bellasi
@ 2018-05-31 10:46       ` Rafael J. Wysocki
  0 siblings, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2018-05-31 10:46 UTC (permalink / raw)
  To: Patrick Bellasi
  Cc: Rafael J. Wysocki, Linux Kernel Mailing List, Linux PM,
	Ingo Molnar, Peter Zijlstra, Rafael J . Wysocki, Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen, Juri Lelli,
	Joel Fernandes, Steve Muckle

On Wednesday, May 30, 2018 6:50:10 PM CEST Patrick Bellasi wrote:
> On 27-May 11:50, Rafael J. Wysocki wrote:
> > On Thu, May 24, 2018 at 4:10 PM, Patrick Bellasi
> > <patrick.bellasi@arm.com> wrote:
> > > Since the refactoring introduced by:
> > >
> > >    commit 8f111bc357aa ("cpufreq/schedutil: Rewrite CPUFREQ_RT support")
> > >
> > > we aggregate FAIR utilization only if this class has runnable tasks.
> > > This was mainly to avoid the risk of staying at a high frequency just
> > > because the blocked utilization of a CPU was not being properly decayed
> > > while the CPU was idle.
> > >
> > > However, since:
> > >
> > >    commit 31e77c93e432 ("sched/fair: Update blocked load when newly idle")
> > >
> > > the FAIR blocked utilization is properly decayed for IDLE CPUs as well.
> > >
> > > This allows us to use the FAIR blocked utilization as a safe mechanism
> > > to gracefully reduce the frequency only if no FAIR tasks show up on a
> > > CPU for a reasonable period of time.
> > >
> > > Moreover, this also reduces the frequency drops of CPUs running periodic
> > > tasks which, depending on the task periodicity and the time required
> > > for a frequency switch, were increasing the chances of introducing
> > > undesirable performance variations.
> > >
> > > Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
> > > Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
> > > Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > Cc: Ingo Molnar <mingo@redhat.com>
> > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > > Cc: Vincent Guittot <vincent.guittot@linaro.org>
> > > Cc: Viresh Kumar <viresh.kumar@linaro.org>
> > > Cc: Joel Fernandes <joelaf@google.com>
> > > Cc: linux-kernel@vger.kernel.org
> > > Cc: linux-pm@vger.kernel.org
> > 
> > Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > 
> > Or please let me know if you want me to apply this one.
> 
> Hi Rafael, it seems this patch has already been applied in tip/sched/core.
> However, it is missing your tag above. :/

That's OK.

I just wanted to let people know my opinion. :-)


end of thread, other threads:[~2018-05-31 10:47 UTC | newest]

Thread overview: 8+ messages
2018-05-24 14:10 [PATCH v3 0/2] Improve schedutil integration for FAIR tasks Patrick Bellasi
2018-05-24 14:10 ` [PATCH v3 1/2] sched/cpufreq: always consider blocked FAIR utilization Patrick Bellasi
2018-05-25  9:47   ` [tip:sched/core] sched/cpufreq: Modify aggregate utilization to always include " tip-bot for Patrick Bellasi
2018-05-27  9:50   ` [PATCH v3 1/2] sched/cpufreq: always consider " Rafael J. Wysocki
2018-05-30 16:50     ` Patrick Bellasi
2018-05-31 10:46       ` Rafael J. Wysocki
2018-05-24 14:10 ` [PATCH v3 2/2] sched/fair: util_est: update before schedutil Patrick Bellasi
2018-05-25  9:48   ` [tip:sched/core] sched/fair: Update util_est before updating schedutil tip-bot for Patrick Bellasi
