* [PATCH] cpufreq/schedutil: Cleanup and document iowait boost
@ 2018-03-28  9:07 Patrick Bellasi
  2018-04-05  9:58 ` Viresh Kumar
  0 siblings, 1 reply; 5+ messages in thread
From: Patrick Bellasi @ 2018-03-28  9:07 UTC (permalink / raw)
  To: linux-kernel, linux-pm
  Cc: Ingo Molnar, Peter Zijlstra, Rafael J . Wysocki, Viresh Kumar,
	Vincent Guittot, Dietmar Eggemann, Morten Rasmussen, Juri Lelli,
	Todd Kjos, Joel Fernandes, Steve Muckle

The iowait boosting code has been recently updated to add a progressive
boosting behavior which allows it to be less aggressive in boosting tasks
doing only sporadic IO operations, and thus more energy efficient, for
example on mobile platforms.
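
As a stand-alone illustration of this behavior (the values and helper
functions below are made up for the example; this is not the schedutil
code itself): the boost doubles at each wakeup from IO and is halved at
each update without one, so a task doing only sporadic IO never builds
up a large boost.

#include <stdio.h>

#define BOOST_MIN	100U	/* stands in for the min OPP utilization */
#define BOOST_MAX	1024U	/* stands in for the max OPP utilization */

static unsigned int boost;

/* Wakeup from IO: start from the minimum, then double the boost */
static void iowait_wakeup(void)
{
	boost = boost ? boost * 2 : BOOST_MIN;
	if (boost > BOOST_MAX)
		boost = BOOST_MAX;
}

/* Update without an IO wakeup: halve the boost until it vanishes */
static void plain_update(void)
{
	boost /= 2;
	if (boost < BOOST_MIN)
		boost = 0;
}

int main(void)
{
	int i;

	for (i = 0; i < 5; i++)
		iowait_wakeup();
	printf("frequent IO:  boost = %u\n", boost);	/* ramps up to 1024 */

	for (i = 0; i < 5; i++)
		plain_update();
	iowait_wakeup();
	printf("sporadic IO:  boost = %u\n", boost);	/* restarts from 100 */

	return 0;
}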

The current code, however, is now a bit convoluted. Some functionality
(e.g. the iowait boost reset) is replicated in different paths and its
documentation is slightly misaligned.

Let's clean up the code by consolidating all the IO wait boosting related
functionality inside the already existing functions and better define
their roles:

- sugov_set_iowait_boost: is now in charge only of setting/increasing the
     IO wait boost, every time a task wakes up from an IO wait.

- sugov_iowait_boost: is now in charge of resetting/reducing the IO wait
     boost, every time a sugov update is triggered, as well as of
     (eventually) enforcing the currently required IO boost value.

This is possible since these two functions are already used one after
the other, both in single and shared frequency domains, following the
same template:

   /* Configure IO boost, if required */
   sugov_set_iowait_boost()

   /* Return here if freq change is in progress or throttled */

   /* Collect and aggregate utilization information */
   sugov_get_util()
   sugov_aggregate_util()

   /* Add IO boost if currently enabled */
   sugov_iowait_boost()

As an extra bonus, let's also add the documentation for these two
functions and better align the in-code documentation.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Steve Muckle <smuckle@google.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org

---
Based on today's tip/sched/core:
   b720342 sched/core: Update preempt_notifier_key to modern API
---
 kernel/sched/cpufreq_schedutil.c | 111 ++++++++++++++++++++++++---------------
 1 file changed, 68 insertions(+), 43 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 2b124811947d..c840b0626735 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -201,43 +201,80 @@ static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
 	return min(util, sg_cpu->max);
 }
 
-static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)
+/**
+ * sugov_set_iowait_boost updates the IO boost at each wakeup from IO.
+ * @sg_cpu: the sugov data for the CPU to boost
+ * @flags:  SCHED_CPUFREQ_IOWAIT if the task is waking up after an IO wait
+ *
+ * Each time a task wakes up after an IO operation, the CPU utilization can be
+ * boosted to a certain utilization which is doubled at each wakeup
+ * from IO, starting from the utilization of the minimum OPP to that of the
+ * maximum one.
+ */
+static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, unsigned int flags)
 {
-	if (flags & SCHED_CPUFREQ_IOWAIT) {
-		if (sg_cpu->iowait_boost_pending)
-			return;
 
-		sg_cpu->iowait_boost_pending = true;
+	/* Boost only tasks waking up after IO */
+	if (!(flags & SCHED_CPUFREQ_IOWAIT))
+		return;
 
-		if (sg_cpu->iowait_boost) {
-			sg_cpu->iowait_boost <<= 1;
-			if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
-				sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
-		} else {
-			sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
-		}
-	} else if (sg_cpu->iowait_boost) {
-		s64 delta_ns = time - sg_cpu->last_update;
+	/* Ensure IO boost doubles only one time at each frequency increase */
+	if (sg_cpu->iowait_boost_pending)
+		return;
+	sg_cpu->iowait_boost_pending = true;
 
-		/* Clear iowait_boost if the CPU apprears to have been idle. */
-		if (delta_ns > TICK_NSEC) {
-			sg_cpu->iowait_boost = 0;
-			sg_cpu->iowait_boost_pending = false;
-		}
+	/* Double the IO boost at each frequency increase */
+	if (sg_cpu->iowait_boost) {
+		sg_cpu->iowait_boost <<= 1;
+		if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
+			sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
+		return;
 	}
+
+	/* At first wakeup after IO, start with minimum boost */
+	sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
 }
 
-static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
-			       unsigned long *max)
+/**
+ * sugov_iowait_boost boosts a CPU after a wakeup from IO.
+ * @sg_cpu: the sugov data for the cpu to boost
+ * @time:   the update time from the caller
+ * @util:   the utilization to (eventually) boost
+ * @max:    the maximum value the utilization can be boosted to
+ *
+ * A CPU running a task which has woken up after an IO operation can have its
+ * utilization boosted to speed up the completion of those IO operations.
+ * The IO boost value is increased each time a task wakes up from IO, in
+ * sugov_set_iowait_boost(), and it's instead decreased by this function,
+ * each time an increase has not been requested (!iowait_boost_pending).
+ *
+ * A CPU which appears to have been idle for at least one tick also has
+ * its IO boost utilization reset.
+ *
+ * This mechanism is designed to boost tasks which frequently wait on IO,
+ * while being more conservative on tasks doing only sporadic IO operations.
+ */
+static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
+			       unsigned long *util, unsigned long *max)
 {
 	unsigned int boost_util, boost_max;
 
-	if (!sg_cpu->iowait_boost)
+	/* Clear boost if the CPU appears to have been idle enough */
+	if (sg_cpu->iowait_boost) {
+		s64 delta_ns = time - sg_cpu->last_update;
+
+		if (delta_ns > TICK_NSEC) {
+			sg_cpu->iowait_boost = 0;
+			sg_cpu->iowait_boost_pending = false;
+		}
 		return;
+	}
 
+	/* An IO waiting task has just woken up, use the boost value */
 	if (sg_cpu->iowait_boost_pending) {
 		sg_cpu->iowait_boost_pending = false;
 	} else {
+		/* Reduce the boost value otherwise */
 		sg_cpu->iowait_boost >>= 1;
 		if (sg_cpu->iowait_boost < sg_cpu->sg_policy->policy->min) {
 			sg_cpu->iowait_boost = 0;
@@ -248,6 +285,10 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
 	boost_util = sg_cpu->iowait_boost;
 	boost_max = sg_cpu->iowait_boost_max;
 
+	/*
+	 * A CPU is boosted only if its current utilization is smaller than
+	 * the current IO boost level.
+	 */
 	if (*util * boost_max < *max * boost_util) {
 		*util = boost_util;
 		*max = boost_max;
@@ -286,7 +327,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	unsigned int next_f;
 	bool busy;
 
-	sugov_set_iowait_boost(sg_cpu, time, flags);
+	sugov_set_iowait_boost(sg_cpu, flags);
 	sg_cpu->last_update = time;
 
 	ignore_dl_rate_limit(sg_cpu, sg_policy);
@@ -299,7 +340,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	sugov_get_util(sg_cpu);
 	max = sg_cpu->max;
 	util = sugov_aggregate_util(sg_cpu);
-	sugov_iowait_boost(sg_cpu, &util, &max);
+	sugov_iowait_boost(sg_cpu, time, &util, &max);
 	next_f = get_next_freq(sg_policy, util, max);
 	/*
 	 * Do not reduce the frequency if the CPU has not been idle
@@ -325,28 +366,12 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 	for_each_cpu(j, policy->cpus) {
 		struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
 		unsigned long j_util, j_max;
-		s64 delta_ns;
 
 		sugov_get_util(j_sg_cpu);
-
-		/*
-		 * If the CFS CPU utilization was last updated before the
-		 * previous frequency update and the time elapsed between the
-		 * last update of the CPU utilization and the last frequency
-		 * update is long enough, reset iowait_boost and util_cfs, as
-		 * they are now probably stale. However, still consider the
-		 * CPU contribution if it has some DEADLINE utilization
-		 * (util_dl).
-		 */
-		delta_ns = time - j_sg_cpu->last_update;
-		if (delta_ns > TICK_NSEC) {
-			j_sg_cpu->iowait_boost = 0;
-			j_sg_cpu->iowait_boost_pending = false;
-		}
-
 		j_max = j_sg_cpu->max;
 		j_util = sugov_aggregate_util(j_sg_cpu);
-		sugov_iowait_boost(j_sg_cpu, &j_util, &j_max);
+		sugov_iowait_boost(j_sg_cpu, time, &j_util, &j_max);
+
 		if (j_util * max > j_max * util) {
 			util = j_util;
 			max = j_max;
@@ -365,7 +390,7 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
 
 	raw_spin_lock(&sg_policy->update_lock);
 
-	sugov_set_iowait_boost(sg_cpu, time, flags);
+	sugov_set_iowait_boost(sg_cpu, flags);
 	sg_cpu->last_update = time;
 
 	ignore_dl_rate_limit(sg_cpu, sg_policy);
-- 
2.15.1

* Re: [PATCH] cpufreq/schedutil: Cleanup and document iowait boost
  2018-03-28  9:07 [PATCH] cpufreq/schedutil: Cleanup and document iowait boost Patrick Bellasi
@ 2018-04-05  9:58 ` Viresh Kumar
  2018-04-10 10:43   ` Patrick Bellasi
  0 siblings, 1 reply; 5+ messages in thread
From: Viresh Kumar @ 2018-04-05  9:58 UTC (permalink / raw)
  To: Patrick Bellasi
  Cc: linux-kernel, linux-pm, Ingo Molnar, Peter Zijlstra,
	Rafael J . Wysocki, Vincent Guittot, Dietmar Eggemann,
	Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes,
	Steve Muckle

On 28-03-18, 10:07, Patrick Bellasi wrote:
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 2b124811947d..c840b0626735 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -201,43 +201,80 @@ static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
>  	return min(util, sg_cpu->max);
>  }

I like the general idea but there are a few things which look incorrect
to me, even in the current code.

> -static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)
> +/**
> + * sugov_set_iowait_boost updates the IO boost at each wakeup from IO.
> + * @sg_cpu: the sugov data for the CPU to boost
> + * @flags:  SCHED_CPUFREQ_IOWAIT if the task is waking up after an IO wait
> + *
> + * Each time a task wakes up after an IO operation, the CPU utilization can be
> + * boosted to a certain utilization which is doubled at each wakeup
> + * from IO, starting from the utilization of the minimum OPP to that of the
> + * maximum one.

You may also want to write here that the doubling of the boost value is
restricted by the rate_limit_us duration; it's not that we double it every
time this routine is called.

> + */
> +static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, unsigned int flags)
>  {

>  
> -		/* Clear iowait_boost if the CPU apprears to have been idle. */
> -		if (delta_ns > TICK_NSEC) {
> -			sg_cpu->iowait_boost = 0;
> -			sg_cpu->iowait_boost_pending = false;
> -		}

So this is the only difference in this routine, everything else is
re-arrangement IIUC.

There is a problem that I see in existing code as well as code after
this commit.

Consider this sequence of events on a platform where cpufreq policies
aren't shared, i.e. each CPU has its own policy.

sugov_set_iowait_boost() gets called multiple times for a CPU with the
IOWAIT flag set, which leads us to a big boost value, like fmax. The CPU
then goes idle and the task wakes up after a few ticks.  Because we check
the IOWAIT flag first in this routine, we will double the iowait boost.
Ideally, based on the TICK_NSEC logic we have, we should first have reset
the iowait boost to 0 and then, because the flag was set, set the boost to
fmin. So the order of checks in this routine needs to be fixed in the
first patch.

The same problem can happen in cases where the policy is shared as well,
but the chances are lower.
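
To make the ordering issue concrete, here is a stand-alone model (the
values, FMIN/FMAX and struct cpu are made up; the iowait_boost_pending
handling is left out for brevity):

#include <stdbool.h>
#include <stdio.h>

#define FMIN		500U
#define FMAX		2000U
#define TICK_NSEC	1000000ULL

struct cpu {
	unsigned int boost;
	unsigned long long last_update;
};

/* Current ordering: the IOWAIT flag is handled before the idle check */
static void update_current(struct cpu *c, unsigned long long now, bool iowait)
{
	if (iowait) {
		c->boost = c->boost ? c->boost * 2 : FMIN;
		if (c->boost > FMAX)
			c->boost = FMAX;
	} else if (c->boost && now - c->last_update > TICK_NSEC) {
		c->boost = 0;
	}
	c->last_update = now;
}

/* Expected ordering: reset a stale boost first, then handle the flag */
static void update_expected(struct cpu *c, unsigned long long now, bool iowait)
{
	if (c->boost && now - c->last_update > TICK_NSEC)
		c->boost = 0;
	if (iowait) {
		c->boost = c->boost ? c->boost * 2 : FMIN;
		if (c->boost > FMAX)
			c->boost = FMAX;
	}
	c->last_update = now;
}

int main(void)
{
	/* Boost already ramped up to fmax, CPU idle for several ticks,
	 * then a wakeup from IO arrives. */
	struct cpu a = { .boost = FMAX, .last_update = 0 };
	struct cpu b = a;

	update_current(&a, 5 * TICK_NSEC, true);
	update_expected(&b, 5 * TICK_NSEC, true);

	printf("current ordering:  boost = %u\n", a.boost);	/* stays at fmax */
	printf("expected ordering: boost = %u\n", b.boost);	/* restarts at fmin */

	return 0;
}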

> -static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
> -			       unsigned long *max)
> +/**
> + * sugov_iowait_boost boosts a CPU after a wakeup from IO.
> + * @sg_cpu: the sugov data for the cpu to boost
> + * @time:   the update time from the caller
> + * @util:   the utilization to (eventually) boost
> + * @max:    the maximum value the utilization can be boosted to
> + *
> + * A CPU running a task which has woken up after an IO operation can have its
> + * utilization boosted to speed up the completion of those IO operations.
> + * The IO boost value is increased each time a task wakes up from IO, in
> + * sugov_set_iowait_boost(), and it's instead decreased by this function,
> + * each time an increase has not been requested (!iowait_boost_pending).
> + *
> + * A CPU which appears to have been idle for at least one tick also has
> + * its IO boost utilization reset.
> + *
> + * This mechanism is designed to boost tasks which frequently wait on IO,
> + * while being more conservative on tasks doing only sporadic IO operations.
> + */
> +static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
> +			       unsigned long *util, unsigned long *max)
>  {
>  	unsigned int boost_util, boost_max;
>  
> -	if (!sg_cpu->iowait_boost)
> +	/* Clear boost if the CPU appears to have been idle enough */
> +	if (sg_cpu->iowait_boost) {
> +		s64 delta_ns = time - sg_cpu->last_update;
> +
> +		if (delta_ns > TICK_NSEC) {
> +			sg_cpu->iowait_boost = 0;
> +			sg_cpu->iowait_boost_pending = false;
> +		}
>  		return;

This looks incorrect. I have read this 10 times and it looked
incorrect every single time :(

The "return" statement should be part of the if block itself ? Else we
will never boost.

> +	}
>  

Now we can reach here even with !sg_cpu->iowait_boost, which wasn't the
case earlier. Though we will eventually return from the routine without
doing any damage, we will waste some time running useless if/else
expressions.

Maybe still have something like

        if (!sg_cpu->iowait_boost)
                return;

??

-- 
viresh

* Re: [PATCH] cpufreq/schedutil: Cleanup and document iowait boost
  2018-04-05  9:58 ` Viresh Kumar
@ 2018-04-10 10:43   ` Patrick Bellasi
  2018-04-10 10:56     ` Viresh Kumar
  0 siblings, 1 reply; 5+ messages in thread
From: Patrick Bellasi @ 2018-04-10 10:43 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: linux-kernel, linux-pm, Ingo Molnar, Peter Zijlstra,
	Rafael J . Wysocki, Vincent Guittot, Dietmar Eggemann,
	Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes,
	Steve Muckle

Hi Viresh,

On 05-Apr 15:28, Viresh Kumar wrote:
> On 28-03-18, 10:07, Patrick Bellasi wrote:
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 2b124811947d..c840b0626735 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -201,43 +201,80 @@ static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
> >  	return min(util, sg_cpu->max);
> >  }
> 
> I like the general idea but there are few things which look incorrect
> to me, even in the current code.
> 
> > -static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)
> > +/**
> > + * sugov_set_iowait_boost updates the IO boost at each wakeup from IO.
> > + * @sg_cpu: the sugov data for the CPU to boost
> > + * @flags:  SCHED_CPUFREQ_IOWAIT if the task is waking up after an IO wait
> > + *
> > + * Each time a task wakes up after an IO operation, the CPU utilization can be
> > + * boosted to a certain utilization which is doubled at each wakeup
> > + * from IO, starting from the utilization of the minimum OPP to that of the
> > + * maximum one.
> 
> You may also want to write here that the doubling of boost value is
> restricted by rate_limit_us duration, its not that we double every
> time this routine is called.

Right, so we are fine doubling the boost value although this is not
always translated into a frequency increase. Will add that note in v2.

> 
> > + */
> > +static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, unsigned int flags)
> >  {
> 
> >  
> > -		/* Clear iowait_boost if the CPU apprears to have been idle. */
> > -		if (delta_ns > TICK_NSEC) {
> > -			sg_cpu->iowait_boost = 0;
> > -			sg_cpu->iowait_boost_pending = false;
> > -		}
> 
> So this is the only difference in this routine, everything else is
> re-arrangement IIUC.

Yes, it's mostly code re-arrangement with the goal (not necessarily
already achieved) of making it easier to maintain...

> There is a problem that I see in existing code as well as code after
> this commit.
> 
> Consider this sequence of events on a platform where cpufreq policies
> aren't shared, i.e. each CPU has his own policy.
> 
> sugov_set_iowait_boost() gets called multiple times for a CPU with
> IOWAIT flag set that leads us to a big boost value, like fmax. The CPU
> goes to idle then and the task wakes up after few ticks.  Because we
> are first checking the IOWAIT flag in this routine, we will double the
> iowait boost. Ideally, based on the TICK_NSEC logic we have, we should
> have first set the iowait boost to 0 and then because the flag was
> set, set the boost to fmin. So the order of this routine needs to get
> fixed in the first patch.

Yes, I agree... that sounds more logical. Moreover, I found that moving
the above check into the following function is also broken, since we
always update last_update right after the sugov_set_iowait_boost()
call...
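
For reference, that is because in the patch above both callers do:

	sugov_set_iowait_boost(sg_cpu, flags);
	sg_cpu->last_update = time;

before sugov_iowait_boost(sg_cpu, time, ...) runs, so for the CPU being
updated delta_ns is always (nearly) zero there and the TICK_NSEC check
could never trigger for it.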

> The same problem can happen for cases where the policy is shared as
> well, but chances are less.
> 
> > -static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
> > -			       unsigned long *max)
> > +/**
> > + * sugov_iowait_boost boosts a CPU after a wakeup from IO.
> > + * @sg_cpu: the sugov data for the cpu to boost
> > + * @time:   the update time from the caller
> > + * @util:   the utilization to (eventually) boost
> > + * @max:    the maximum value the utilization can be boosted to
> > + *
> > + * A CPU running a task which has woken up after an IO operation can have its
> > + * utilization boosted to speed up the completion of those IO operations.
> > + * The IO boost value is increased each time a task wakes up from IO, in
> > + * sugov_set_iowait_boost(), and it's instead decreased by this function,
> > + * each time an increase has not been requested (!iowait_boost_pending).
> > + *
> > + * A CPU which appears to have been idle for at least one tick also has
> > + * its IO boost utilization reset.
> > + *
> > + * This mechanism is designed to boost tasks which frequently wait on IO,
> > + * while being more conservative on tasks doing only sporadic IO operations.
> > + */
> > +static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
> > +			       unsigned long *util, unsigned long *max)
> >  {
> >  	unsigned int boost_util, boost_max;
> >  
> > -	if (!sg_cpu->iowait_boost)
> > +	/* Clear boost if the CPU appears to have been idle enough */
> > +	if (sg_cpu->iowait_boost) {
> > +		s64 delta_ns = time - sg_cpu->last_update;
> > +
> > +		if (delta_ns > TICK_NSEC) {
> > +			sg_cpu->iowait_boost = 0;
> > +			sg_cpu->iowait_boost_pending = false;
> > +		}
> >  		return;
> 
> This looks incorrect. I have read this 10 times and it looked
> incorrect every single time :(
> 
> The "return" statement should be part of the if block itself ? Else we
> will never boost.

Right...

> > +	}
> >  
> 
> Now we can reach here even on !sg_cpu->iowait_boost which wasn't the
> case earlier. Though we will eventually return from the routine
> without doing any damage, but we will waste some time running useless
> if/else expressions.
> 
> Maybe still have something like
> 
>         if (!sg_cpu->iowait_boost)
>                 return;
> 
> ??

What about this new version for the two functions,
just compile tested:

---8<---

static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
				   unsigned int flags)
{
	bool iowait = flags & SCHED_CPUFREQ_IOWAIT;

	/* Reset boost if the CPU appears to have been idle enough */
	if (sg_cpu->iowait_boost) {
		s64 delta_ns = time - sg_cpu->last_update;

		if (delta_ns > TICK_NSEC) {
			sg_cpu->iowait_boost = iowait
				? sg_cpu->sg_policy->policy->min : 0;
			sg_cpu->iowait_boost_pending = iowait;
			return;
		}
	}

	/* Boost only tasks waking up after IO */
	if (!iowait)
		return;

	/* Ensure IO boost doubles only one time at each frequency increase */
	if (sg_cpu->iowait_boost_pending)
		return;
	sg_cpu->iowait_boost_pending = true;

	/* Double the IO boost at each frequency increase */
	if (sg_cpu->iowait_boost) {
		sg_cpu->iowait_boost <<= 1;
		if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
			sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
		return;
	}

	/* At first wakeup after IO, start with minimum boost */
	sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
}

static void sugov_iowait_boost(struct sugov_cpu *sg_cpu,
			       unsigned long *util, unsigned long *max)
{
	unsigned int boost_util, boost_max;

        /* No IOWait boost active */
        if (!sg_cpu->iowait_boost)
                return;

	/* An IO waiting task has just woken up, use the boost value */
	if (sg_cpu->iowait_boost_pending) {
		sg_cpu->iowait_boost_pending = false;
	} else {
		/* Reduce the boost value otherwise */
		sg_cpu->iowait_boost >>= 1;
		if (sg_cpu->iowait_boost < sg_cpu->sg_policy->policy->min) {
			sg_cpu->iowait_boost = 0;
			return;
		}
	}

	boost_util = sg_cpu->iowait_boost;
	boost_max = sg_cpu->iowait_boost_max;

	/*
	 * A CPU is boosted only if its current utilization is smaller than
	 * the current IO boost level.
	 */
	if (*util * boost_max < *max * boost_util) {
		*util = boost_util;
		*max = boost_max;
	}
}

---8<---

-- 
#include <best/regards.h>

Patrick Bellasi

* Re: [PATCH] cpufreq/schedutil: Cleanup and document iowait boost
  2018-04-10 10:43   ` Patrick Bellasi
@ 2018-04-10 10:56     ` Viresh Kumar
  2018-04-10 11:57       ` Patrick Bellasi
  0 siblings, 1 reply; 5+ messages in thread
From: Viresh Kumar @ 2018-04-10 10:56 UTC (permalink / raw)
  To: Patrick Bellasi
  Cc: linux-kernel, linux-pm, Ingo Molnar, Peter Zijlstra,
	Rafael J . Wysocki, Vincent Guittot, Dietmar Eggemann,
	Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes,
	Steve Muckle

On 10-04-18, 11:43, Patrick Bellasi wrote:
> On 05-Apr 15:28, Viresh Kumar wrote:
> What about this new version for the two functions,
> just compile tested:
> 
> ---8<---
> 
> static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
> 				   unsigned int flags)
> {
> 	bool iowait = flags & SCHED_CPUFREQ_IOWAIT;
> 
> 	/* Reset boost if the CPU appears to have been idle enough */
> 	if (sg_cpu->iowait_boost) {
> 		s64 delta_ns = time - sg_cpu->last_update;
> 
> 		if (delta_ns > TICK_NSEC) {
> 			sg_cpu->iowait_boost = iowait
> 				? sg_cpu->sg_policy->policy->min : 0;

Yeah, I see you are trying to optimize it a bit here but this makes
things more confusing I would say :)

I would just set iowait_boost to 0 and drop the return; from below and
let the code fall through and reach the end of this routine.
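
IOW, something along these lines (an untested sketch of the idea; the
types, the SCHED_CPUFREQ_IOWAIT value and the policy_min field are
stubbed here only so the snippet stands alone):

#include <stdbool.h>

typedef unsigned long long u64;
typedef long long s64;

#define TICK_NSEC		1000000LL	/* stub value */
#define SCHED_CPUFREQ_IOWAIT	(1U << 2)	/* stub value */

struct sugov_cpu {
	unsigned int	iowait_boost;
	unsigned int	iowait_boost_max;
	unsigned int	policy_min;	/* stands in for sg_policy->policy->min */
	bool		iowait_boost_pending;
	u64		last_update;
};

static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
				   unsigned int flags)
{
	/* Reset boost if the CPU appears to have been idle enough */
	if (sg_cpu->iowait_boost) {
		s64 delta_ns = time - sg_cpu->last_update;

		if (delta_ns > TICK_NSEC) {
			sg_cpu->iowait_boost = 0;
			sg_cpu->iowait_boost_pending = false;
			/* no return: fall through, so an IOWAIT wakeup
			 * restarts the boost from the minimum below */
		}
	}

	/* Boost only tasks waking up after IO */
	if (!(flags & SCHED_CPUFREQ_IOWAIT))
		return;

	/* Ensure IO boost doubles only one time at each frequency increase */
	if (sg_cpu->iowait_boost_pending)
		return;
	sg_cpu->iowait_boost_pending = true;

	/* Double the IO boost at each frequency increase */
	if (sg_cpu->iowait_boost) {
		sg_cpu->iowait_boost <<= 1;
		if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
			sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
		return;
	}

	/* At first wakeup after IO, start with minimum boost */
	sg_cpu->iowait_boost = sg_cpu->policy_min;
}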

> 			sg_cpu->iowait_boost_pending = iowait;
> 			return;
> 		}
> 	}
> 
> 	/* Boost only tasks waking up after IO */
> 	if (!iowait)
> 		return;
> 
> 	/* Ensure IO boost doubles only one time at each frequency increase */
> 	if (sg_cpu->iowait_boost_pending)
> 		return;
> 	sg_cpu->iowait_boost_pending = true;
> 
> 	/* Double the IO boost at each frequency increase */
> 	if (sg_cpu->iowait_boost) {
> 		sg_cpu->iowait_boost <<= 1;
> 		if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
> 			sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
> 		return;
> 	}
> 
> 	/* At first wakeup after IO, start with minimum boost */
> 	sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
> }
> 
> static void sugov_iowait_boost(struct sugov_cpu *sg_cpu,
> 			       unsigned long *util, unsigned long *max)
> {
> 	unsigned int boost_util, boost_max;
> 
>         /* No IOWait boost active */
>         if (!sg_cpu->iowait_boost)
>                 return;
> 
> 	/* An IO waiting task has just woken up, use the boost value */
> 	if (sg_cpu->iowait_boost_pending) {
> 		sg_cpu->iowait_boost_pending = false;
> 	} else {
> 		/* Reduce the boost value otherwise */
> 		sg_cpu->iowait_boost >>= 1;
> 		if (sg_cpu->iowait_boost < sg_cpu->sg_policy->policy->min) {
> 			sg_cpu->iowait_boost = 0;
> 			return;
> 		}
> 	}
> 
> 	boost_util = sg_cpu->iowait_boost;
> 	boost_max = sg_cpu->iowait_boost_max;
> 
> 	/*
> 	 * A CPU is boosted only if its current utilization is smaller than
> 	 * the current IO boost level.
> 	 */
> 	if (*util * boost_max < *max * boost_util) {
> 		*util = boost_util;
> 		*max = boost_max;
> 	}
> }

So this is quite different from what you originally proposed; it is only
fixing the existing problem which I pointed out. Looks fine; not much has
really changed from the current state of the code.

-- 
viresh

* Re: [PATCH] cpufreq/schedutil: Cleanup and document iowait boost
  2018-04-10 10:56     ` Viresh Kumar
@ 2018-04-10 11:57       ` Patrick Bellasi
  0 siblings, 0 replies; 5+ messages in thread
From: Patrick Bellasi @ 2018-04-10 11:57 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: linux-kernel, linux-pm, Ingo Molnar, Peter Zijlstra,
	Rafael J . Wysocki, Vincent Guittot, Dietmar Eggemann,
	Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes,
	Steve Muckle

On 10-Apr 16:26, Viresh Kumar wrote:
> On 10-04-18, 11:43, Patrick Bellasi wrote:
> > On 05-Apr 15:28, Viresh Kumar wrote:
> > What about this new version for the two functions,
> > just compile tested:
> > 
> > ---8<---
> > 
> > static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
> > 				   unsigned int flags)
> > {
> > 	bool iowait = flags & SCHED_CPUFREQ_IOWAIT;
> > 
> > 	/* Reset boost if the CPU appears to have been idle enough */
> > 	if (sg_cpu->iowait_boost) {
> > 		s64 delta_ns = time - sg_cpu->last_update;
> > 
> > 		if (delta_ns > TICK_NSEC) {
> > 			sg_cpu->iowait_boost = iowait
> > 				? sg_cpu->sg_policy->policy->min : 0;
> 
> Yeah, I see you are trying to optimize it a bit here but this makes
> things more confusing I would say :)
> 
> I would just set iowait_boost to 0 and drop the return; from below and
> let the code fall through and reach the end of this routine.

Yes, that's possible... although it's in the same spirit as the
optimization you suggested for the next function: bail out ASAP to
avoid other branches.

> > 			sg_cpu->iowait_boost_pending = iowait;
> > 			return;
> > 		}
> > 	}
> > 
> > 	/* Boost only tasks waking up after IO */
> > 	if (!iowait)
> > 		return;
> > 
> > 	/* Ensure IO boost doubles only one time at each frequency increase */
> > 	if (sg_cpu->iowait_boost_pending)
> > 		return;
> > 	sg_cpu->iowait_boost_pending = true;
> > 
> > 	/* Double the IO boost at each frequency increase */
> > 	if (sg_cpu->iowait_boost) {
> > 		sg_cpu->iowait_boost <<= 1;
> > 		if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
> > 			sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
> > 		return;
> > 	}
> > 
> > 	/* At first wakeup after IO, start with minimum boost */
> > 	sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
> > }
> > 
> > static void sugov_iowait_boost(struct sugov_cpu *sg_cpu,
> > 			       unsigned long *util, unsigned long *max)
> > {
> > 	unsigned int boost_util, boost_max;
> > 
> >         /* No IOWait boost active */
> >         if (!sg_cpu->iowait_boost)
> >                 return;
> > 
> > 	/* An IO waiting task has just woken up, use the boost value */
> > 	if (sg_cpu->iowait_boost_pending) {
> > 		sg_cpu->iowait_boost_pending = false;
> > 	} else {
> > 		/* Reduce the boost value otherwise */
> > 		sg_cpu->iowait_boost >>= 1;
> > 		if (sg_cpu->iowait_boost < sg_cpu->sg_policy->policy->min) {
> > 			sg_cpu->iowait_boost = 0;
> > 			return;
> > 		}
> > 	}
> > 
> > 	boost_util = sg_cpu->iowait_boost;
> > 	boost_max = sg_cpu->iowait_boost_max;
> > 
> > 	/*
> > 	 * A CPU is boosted only if its current utilization is smaller than
> > 	 * the current IO boost level.
> > 	 */
> > 	if (*util * boost_max < *max * boost_util) {
> > 		*util = boost_util;
> > 		*max = boost_max;
> > 	}
> > }
> 
> So this is quite different than what you proposed, it is only fixing
> the existing problem which I pointed out to. Looks fine, not much
> changed really from the current state of code.

Yes, maybe... I think we still get the benefit of consolidating all the
iowait boost code into just these two functions, whereas currently the
reset is in sugov_next_freq_shared().

Moreover, IMO it's easier to read and the documentation is better
aligned... but lemme post a v2 and then we will see whether it still
makes sense or I should just drop it.

-- 
#include <best/regards.h>

Patrick Bellasi
