* [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
@ 2018-05-16 22:45 Joel Fernandes (Google)
  2018-05-17  5:06 ` Viresh Kumar
  2018-05-17  7:00 ` Juri Lelli
  0 siblings, 2 replies; 11+ messages in thread
From: Joel Fernandes (Google) @ 2018-05-16 22:45 UTC (permalink / raw)
  To: linux-kernel
  Cc: Joel Fernandes (Google),
	Viresh Kumar, Rafael J . Wysocki, Peter Zijlstra, Ingo Molnar,
	Patrick Bellasi, Juri Lelli, Luca Abeni, Joel Fernandes,
	linux-pm

Currently, a schedutil cpufreq update request can be dropped if a
previous request is still pending. The pending request itself can be
delayed by scheduling delays in the irq_work and in the wakeup of the
schedutil governor kthread.

A particularly bad scenario: a schedutil request has just been made,
say to reduce the CPU frequency, and while it is still in flight a
newer request to increase the CPU frequency (even an urgent
sched-deadline frequency increase request) is dropped, even though the
rate limits say it is OK to process a request. This is because of the
way the work_in_progress flag is used.
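
For illustration, a simplified timeline of how a request gets dropped
today (LO/HI are just example frequencies):

  scheduler path                           sugov kthread
  ---------------------------------------  -------------------------
  update hook: next_freq = LO
  work_in_progress = true, irq_work queued  (not yet woken)
  update hook: urgent request for HI
    sugov_should_update_freq() sees
    work_in_progress == true -> dropped
                                            sugov_work(): switch to LO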

This patch improves the situation by allowing new requests to be made
even while an old one is still being processed. Note that in this
approach, if an irq_work has already been issued, we just update
next_freq and don't queue another request, so there's no extra work
being done to make this happen.

I brought up this issue at the OSPM conference, and Claudio posted a
discussion RFC with an alternate approach [1]. I prefer the approach in
the patch below since it doesn't need any new flags and doesn't add any
other overhead.

[1] https://patchwork.kernel.org/patch/10384261/

Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Luca Abeni <luca.abeni@santannapisa.it>
Cc: Joel Fernandes <joelaf@google.com>
Cc: linux-pm@vger.kernel.org
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
Claudio,
Could you also test this patch for your use case?

 kernel/sched/cpufreq_schedutil.c | 36 +++++++++++++++++++++++++-------
 1 file changed, 28 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index e13df951aca7..a87fc281893d 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -92,9 +92,6 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
 		return false;
 
-	if (sg_policy->work_in_progress)
-		return false;
-
 	if (unlikely(sg_policy->need_freq_update)) {
 		sg_policy->need_freq_update = false;
 		/*
@@ -129,8 +126,11 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 		policy->cur = next_freq;
 		trace_cpu_frequency(next_freq, smp_processor_id());
 	} else {
-		sg_policy->work_in_progress = true;
-		irq_work_queue(&sg_policy->irq_work);
+		/* Don't queue request if one was already queued */
+		if (!sg_policy->work_in_progress) {
+			sg_policy->work_in_progress = true;
+			irq_work_queue(&sg_policy->irq_work);
+		}
 	}
 }
 
@@ -291,6 +291,15 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 
 	ignore_dl_rate_limit(sg_cpu, sg_policy);
 
+	/*
+	 * For slow-switch systems, single-policy requests can't run at the
+	 * moment if the governor thread is already processing a pending
+	 * frequency switch request. This could be fixed by taking update_lock
+	 * while updating next_freq and work_in_progress, but we prefer not to.
+	 */
+	if (sg_policy->work_in_progress)
+		return;
+
 	if (!sugov_should_update_freq(sg_policy, time))
 		return;
 
@@ -382,13 +391,24 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
 static void sugov_work(struct kthread_work *work)
 {
 	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
+	unsigned int freq;
+	unsigned long flags;
+
+	/*
+	 * Hold sg_policy->update_lock briefly to handle the case where
+	 * sg_policy->next_freq is read here and then updated by
+	 * sugov_update_shared just before work_in_progress is set to false
+	 * here; in that case we could miss queueing the new update.
+	 */
+	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
+	freq = sg_policy->next_freq;
+	sg_policy->work_in_progress = false;
+	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
 
 	mutex_lock(&sg_policy->work_lock);
-	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
+	__cpufreq_driver_target(sg_policy->policy, freq,
 				CPUFREQ_RELATION_L);
 	mutex_unlock(&sg_policy->work_lock);
-
-	sg_policy->work_in_progress = false;
 }
 
 static void sugov_irq_work(struct irq_work *irq_work)
-- 
2.17.0.441.gb46fe60e1d-goog


* Re: [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
  2018-05-16 22:45 [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked Joel Fernandes (Google)
@ 2018-05-17  5:06 ` Viresh Kumar
  2018-05-17 13:11   ` Joel Fernandes
  2018-05-17  7:00 ` Juri Lelli
  1 sibling, 1 reply; 11+ messages in thread
From: Viresh Kumar @ 2018-05-17  5:06 UTC (permalink / raw)
  To: Joel Fernandes (Google)
  Cc: linux-kernel, Rafael J . Wysocki, Peter Zijlstra, Ingo Molnar,
	Patrick Bellasi, Juri Lelli, Luca Abeni, Joel Fernandes,
	linux-pm

On 16-05-18, 15:45, Joel Fernandes (Google) wrote:
>  kernel/sched/cpufreq_schedutil.c | 36 +++++++++++++++++++++++++-------
>  1 file changed, 28 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index e13df951aca7..a87fc281893d 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -92,9 +92,6 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
>  	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
>  		return false;
>  
> -	if (sg_policy->work_in_progress)
> -		return false;
> -
>  	if (unlikely(sg_policy->need_freq_update)) {
>  		sg_policy->need_freq_update = false;
>  		/*
> @@ -129,8 +126,11 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
>  		policy->cur = next_freq;
>  		trace_cpu_frequency(next_freq, smp_processor_id());
>  	} else {
> -		sg_policy->work_in_progress = true;
> -		irq_work_queue(&sg_policy->irq_work);
> +		/* Don't queue request if one was already queued */
> +		if (!sg_policy->work_in_progress) {

Merge it above to make it "else if".
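
I.e. something like this (untested):

	} else if (!sg_policy->work_in_progress) {
		sg_policy->work_in_progress = true;
		irq_work_queue(&sg_policy->irq_work);
	}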

> +			sg_policy->work_in_progress = true;
> +			irq_work_queue(&sg_policy->irq_work);
> +		}
>  	}
>  }
>  
> @@ -291,6 +291,15 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
>  
>  	ignore_dl_rate_limit(sg_cpu, sg_policy);
>  
> +	/*
> +	 * For slow-switch systems, single-policy requests can't run at the
> +	 * moment if the governor thread is already processing a pending
> +	 * frequency switch request. This could be fixed by taking update_lock
> +	 * while updating next_freq and work_in_progress, but we prefer not to.
> +	 */
> +	if (sg_policy->work_in_progress)
> +		return;
> +

@Rafael: Do you think it's worth starting to use the lock now for
unshared policies?

>  	if (!sugov_should_update_freq(sg_policy, time))
>  		return;
>  
> @@ -382,13 +391,24 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
>  static void sugov_work(struct kthread_work *work)
>  {
>  	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> +	unsigned int freq;
> +	unsigned long flags;
> +
> +	/*
> +	 * Hold sg_policy->update_lock briefly to handle the case where
> +	 * sg_policy->next_freq is read here and then updated by
> +	 * sugov_update_shared just before work_in_progress is set to false
> +	 * here; in that case we could miss queueing the new update.
> +	 */
> +	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
> +	freq = sg_policy->next_freq;
> +	sg_policy->work_in_progress = false;
> +	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
>  
>  	mutex_lock(&sg_policy->work_lock);
> -	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
> +	__cpufreq_driver_target(sg_policy->policy, freq,
>  				CPUFREQ_RELATION_L);

No need for the line break anymore.

>  	mutex_unlock(&sg_policy->work_lock);
> -
> -	sg_policy->work_in_progress = false;
>  }
>  
>  static void sugov_irq_work(struct irq_work *irq_work)

LGTM.

-- 
viresh


* Re: [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
  2018-05-16 22:45 [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked Joel Fernandes (Google)
  2018-05-17  5:06 ` Viresh Kumar
@ 2018-05-17  7:00 ` Juri Lelli
  2018-05-17 10:20   ` Viresh Kumar
  1 sibling, 1 reply; 11+ messages in thread
From: Juri Lelli @ 2018-05-17  7:00 UTC (permalink / raw)
  To: Joel Fernandes (Google)
  Cc: linux-kernel, Viresh Kumar, Rafael J . Wysocki, Peter Zijlstra,
	Ingo Molnar, Patrick Bellasi, Luca Abeni, Joel Fernandes,
	linux-pm

Hi Joel,

On 16/05/18 15:45, Joel Fernandes (Google) wrote:

[...]

> @@ -382,13 +391,24 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
>  static void sugov_work(struct kthread_work *work)
>  {
>  	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> +	unsigned int freq;
> +	unsigned long flags;
> +
> +	/*
> +	 * Hold sg_policy->update_lock briefly to handle the case where
> +	 * sg_policy->next_freq is read here and then updated by
> +	 * sugov_update_shared just before work_in_progress is set to false
> +	 * here; in that case we could miss queueing the new update.
> +	 */
> +	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
> +	freq = sg_policy->next_freq;
> +	sg_policy->work_in_progress = false;
> +	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);

OK, we queue the new request up, but we still need to let this kthread
activation complete and then wake it up again to service the request
already queued, right? Wasn't what Claudio proposed (servicing
back-to-back requests all in the same kthread activation) better from an
overhead PoV?

Also, I assume there's no problem kicking the irq_work while the
kthread it's going to wake up is already running?

>  
>  	mutex_lock(&sg_policy->work_lock);
> -	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
> +	__cpufreq_driver_target(sg_policy->policy, freq,
>  				CPUFREQ_RELATION_L);
>  	mutex_unlock(&sg_policy->work_lock);
> -
> -	sg_policy->work_in_progress = false;
>  }

Best,

- Juri


* Re: [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
  2018-05-17  7:00 ` Juri Lelli
@ 2018-05-17 10:20   ` Viresh Kumar
  2018-05-17 10:53     ` Juri Lelli
  0 siblings, 1 reply; 11+ messages in thread
From: Viresh Kumar @ 2018-05-17 10:20 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Joel Fernandes (Google),
	linux-kernel, Rafael J . Wysocki, Peter Zijlstra, Ingo Molnar,
	Patrick Bellasi, Luca Abeni, Joel Fernandes, linux-pm

On 17-05-18, 09:00, Juri Lelli wrote:
> Hi Joel,
> 
> On 16/05/18 15:45, Joel Fernandes (Google) wrote:
> 
> [...]
> 
> > @@ -382,13 +391,24 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
> >  static void sugov_work(struct kthread_work *work)
> >  {
> >  	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> > +	unsigned int freq;
> > +	unsigned long flags;
> > +
> > +	/*
> > +	 * Hold sg_policy->update_lock briefly to handle the case where
> > +	 * sg_policy->next_freq is read here and then updated by
> > +	 * sugov_update_shared just before work_in_progress is set to false
> > +	 * here; in that case we could miss queueing the new update.
> > +	 */
> > +	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
> > +	freq = sg_policy->next_freq;
> > +	sg_policy->work_in_progress = false;
> > +	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
> 
> OK, we queue the new request up, but we still need to let this kthread
> activation complete and then wake it up again to service the request
> already queued, right? Wasn't what Claudio proposed (servicing
> back-to-back requests all in the same kthread activation) better from an
> overhead PoV?

We would need more locking in the work handler in that case, and I
think there may be a chance of missing a request in that solution if
the request happens right as sugov_work returns.

-- 
viresh


* Re: [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
  2018-05-17 10:20   ` Viresh Kumar
@ 2018-05-17 10:53     ` Juri Lelli
  2018-05-17 13:07       ` Joel Fernandes
  0 siblings, 1 reply; 11+ messages in thread
From: Juri Lelli @ 2018-05-17 10:53 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Joel Fernandes (Google),
	linux-kernel, Rafael J . Wysocki, Peter Zijlstra, Ingo Molnar,
	Patrick Bellasi, Luca Abeni, Joel Fernandes, linux-pm

On 17/05/18 15:50, Viresh Kumar wrote:
> On 17-05-18, 09:00, Juri Lelli wrote:
> > Hi Joel,
> > 
> > On 16/05/18 15:45, Joel Fernandes (Google) wrote:
> > 
> > [...]
> > 
> > > @@ -382,13 +391,24 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
> > >  static void sugov_work(struct kthread_work *work)
> > >  {
> > >  	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> > > +	unsigned int freq;
> > > +	unsigned long flags;
> > > +
> > > +	/*
> > > +	 * Hold sg_policy->update_lock briefly to handle the case where
> > > +	 * sg_policy->next_freq is read here and then updated by
> > > +	 * sugov_update_shared just before work_in_progress is set to false
> > > +	 * here; in that case we could miss queueing the new update.
> > > +	 */
> > > +	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
> > > +	freq = sg_policy->next_freq;
> > > +	sg_policy->work_in_progress = false;
> > > +	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
> > 
> > OK, we queue the new request up, but we still need to let this kthread
> > activation complete and then wake it up again to service the request
> > already queued, right? Wasn't what Claudio proposed (servicing
> > back-to-back requests all in the same kthread activation) better from an
> > overhead PoV?
> 
> We would need more locking in the work handler in that case, and I
> think there may be a chance of missing a request in that solution if
> the request happens right as sugov_work returns.

Mmm, true. Ideally we might want to use some sort of queue into which
requests are atomically inserted, and from which the sugov kthread then
consumes until the queue is empty.

But I guess that's going to be too much complexity for a (hopefully)
corner case.
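
(For the record, a rough and completely untested sketch of what that
queue idea could look like with llist -- sugov_request, the req_list
field and both helpers are made up here, and request-node
allocation/lifetime handling is elided:)

	struct sugov_request {
		struct llist_node node;
		unsigned int freq;
	};

	/* producer side, e.g. from the update hooks */
	static void sugov_queue_request(struct sugov_policy *sg_policy,
					struct sugov_request *req,
					unsigned int next_freq)
	{
		req->freq = next_freq;
		/* llist_add() returns true if the list was empty */
		if (llist_add(&req->node, &sg_policy->req_list))
			irq_work_queue(&sg_policy->irq_work);
	}

	/* consumer side, from sugov_work() */
	static void sugov_consume_requests(struct sugov_policy *sg_policy)
	{
		struct llist_node *list = llist_del_all(&sg_policy->req_list);
		struct sugov_request *req;

		/*
		 * llist_del_all() returns entries newest-first, and only the
		 * newest frequency matters, so one switch suffices.
		 */
		if (!list)
			return;

		req = llist_entry(list, struct sugov_request, node);
		__cpufreq_driver_target(sg_policy->policy, req->freq,
					CPUFREQ_RELATION_L);
	}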


* Re: [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
  2018-05-17 10:53     ` Juri Lelli
@ 2018-05-17 13:07       ` Joel Fernandes
  2018-05-17 14:28         ` Juri Lelli
  0 siblings, 1 reply; 11+ messages in thread
From: Joel Fernandes @ 2018-05-17 13:07 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Viresh Kumar, linux-kernel, Rafael J . Wysocki, Peter Zijlstra,
	Ingo Molnar, Patrick Bellasi, Luca Abeni, Joel Fernandes,
	linux-pm, kernel-team

On Thu, May 17, 2018 at 12:53:58PM +0200, Juri Lelli wrote:
> On 17/05/18 15:50, Viresh Kumar wrote:
> > On 17-05-18, 09:00, Juri Lelli wrote:
> > > Hi Joel,
> > > 
> > > On 16/05/18 15:45, Joel Fernandes (Google) wrote:
> > > 
> > > [...]
> > > 
> > > > @@ -382,13 +391,24 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
> > > >  static void sugov_work(struct kthread_work *work)
> > > >  {
> > > >  	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> > > > +	unsigned int freq;
> > > > +	unsigned long flags;
> > > > +
> > > > +	/*
> > > > +	 * Hold sg_policy->update_lock briefly to handle the case where
> > > > +	 * sg_policy->next_freq is read here and then updated by
> > > > +	 * sugov_update_shared just before work_in_progress is set to false
> > > > +	 * here; in that case we could miss queueing the new update.
> > > > +	 */
> > > > +	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
> > > > +	freq = sg_policy->next_freq;
> > > > +	sg_policy->work_in_progress = false;
> > > > +	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
> > > 
> > > OK, we queue the new request up, but we still need to let this kthread
> > > activation complete and then wake it up again to service the request
> > > already queued, right? Wasn't what Claudio proposed (servicing
> > > back-to-back requests all in the same kthread activation) better from an
> > > overhead PoV?

Hmm, from that perspective, yeah. But note that my patch doesn't increase the
overhead over what it already is, because we don't queue the irq_work again
unless work_in_progress is cleared, which it wouldn't be if the kthread
hasn't run yet.

> > 
> > We would need more locking in the work handler in that case, and I
> > think there may be a chance of missing a request in that solution if
> > the request happens right as sugov_work returns.
> 
> Mmm, true. Ideally we might want to use some sort of queue into which
> requests are atomically inserted, and from which the sugov kthread then
> consumes until the queue is empty.

IMO we don't really need a queue or anything; we just need the kthread to
process the *latest* request it sees, since that's the only one that matters.

> But I guess that's going to be too much complexity for a (hopefully)
> corner case.

I thought of this corner case too. I'd argue it's still an improvement over
not doing anything, but we could tighten this up a bit more if you wanted, by
doing something like this on top of my patch. Thoughts?

---8<-----------------------

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index a87fc281893d..e45ec24b810b 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -394,6 +394,7 @@ static void sugov_work(struct kthread_work *work)
 	unsigned int freq;
 	unsigned long flags;
 
+redo_work:
 	/*
 	 * Hold sg_policy->update_lock briefly to handle the case where
 	 * sg_policy->next_freq is read here and then updated by
@@ -409,6 +410,9 @@ static void sugov_work(struct kthread_work *work)
 	__cpufreq_driver_target(sg_policy->policy, freq,
 				CPUFREQ_RELATION_L);
 	mutex_unlock(&sg_policy->work_lock);
+
+	if (sg_policy->work_in_progress)
+		goto redo_work;
 }
 
 static void sugov_irq_work(struct irq_work *irq_work)


* Re: [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
  2018-05-17  5:06 ` Viresh Kumar
@ 2018-05-17 13:11   ` Joel Fernandes
  0 siblings, 0 replies; 11+ messages in thread
From: Joel Fernandes @ 2018-05-17 13:11 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: linux-kernel, Rafael J . Wysocki, Peter Zijlstra, Ingo Molnar,
	Patrick Bellasi, Juri Lelli, Luca Abeni, kernel-team, linux-pm

On Thu, May 17, 2018 at 10:36:11AM +0530, Viresh Kumar wrote:
> On 16-05-18, 15:45, Joel Fernandes (Google) wrote:
> >  kernel/sched/cpufreq_schedutil.c | 36 +++++++++++++++++++++++++-------
> >  1 file changed, 28 insertions(+), 8 deletions(-)
> > 
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index e13df951aca7..a87fc281893d 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -92,9 +92,6 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
> >  	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
> >  		return false;
> >  
> > -	if (sg_policy->work_in_progress)
> > -		return false;
> > -
> >  	if (unlikely(sg_policy->need_freq_update)) {
> >  		sg_policy->need_freq_update = false;
> >  		/*
> > @@ -129,8 +126,11 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
> >  		policy->cur = next_freq;
> >  		trace_cpu_frequency(next_freq, smp_processor_id());
> >  	} else {
> > -		sg_policy->work_in_progress = true;
> > -		irq_work_queue(&sg_policy->irq_work);
> > +		/* Don't queue request if one was already queued */
> > +		if (!sg_policy->work_in_progress) {
> 
> Merge it above to make it "else if".

Sure.

> > +			sg_policy->work_in_progress = true;
> > +			irq_work_queue(&sg_policy->irq_work);
> > +		}
> >  	}
> >  }
> >  
> > @@ -291,6 +291,15 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
> >  
> >  	ignore_dl_rate_limit(sg_cpu, sg_policy);
> >  
> > +	/*
> > +	 * For slow-switch systems, single-policy requests can't run at the
> > +	 * moment if the governor thread is already processing a pending
> > +	 * frequency switch request. This could be fixed by taking update_lock
> > +	 * while updating next_freq and work_in_progress, but we prefer not to.
> > +	 */
> > +	if (sg_policy->work_in_progress)
> > +		return;
> > +
> 
> @Rafael: Do you think it's worth starting to use the lock now for
> unshared policies?

Will wait for confirmation before next revision.

> >  	if (!sugov_should_update_freq(sg_policy, time))
> >  		return;
> >  
> > @@ -382,13 +391,24 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
> >  static void sugov_work(struct kthread_work *work)
> >  {
> >  	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> > +	unsigned int freq;
> > +	unsigned long flags;
> > +
> > +	/*
> > +	 * Hold sg_policy->update_lock briefly to handle the case where
> > +	 * sg_policy->next_freq is read here and then updated by
> > +	 * sugov_update_shared just before work_in_progress is set to false
> > +	 * here; in that case we could miss queueing the new update.
> > +	 */
> > +	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
> > +	freq = sg_policy->next_freq;
> > +	sg_policy->work_in_progress = false;
> > +	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
> >  
> >  	mutex_lock(&sg_policy->work_lock);
> > -	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
> > +	__cpufreq_driver_target(sg_policy->policy, freq,
> >  				CPUFREQ_RELATION_L);
> 
> No need for the line break anymore.

Yes, will fix.

> >  	mutex_unlock(&sg_policy->work_lock);
> > -
> > -	sg_policy->work_in_progress = false;
> >  }
> >  
> >  static void sugov_irq_work(struct irq_work *irq_work)
> 
> LGTM.

Cool, thanks.

- Joel


* Re: [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
  2018-05-17 13:07       ` Joel Fernandes
@ 2018-05-17 14:28         ` Juri Lelli
  2018-05-17 14:43           ` Joel Fernandes
  0 siblings, 1 reply; 11+ messages in thread
From: Juri Lelli @ 2018-05-17 14:28 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Viresh Kumar, linux-kernel, Rafael J . Wysocki, Peter Zijlstra,
	Ingo Molnar, Patrick Bellasi, Luca Abeni, Joel Fernandes,
	linux-pm, kernel-team

On 17/05/18 06:07, Joel Fernandes wrote:
> On Thu, May 17, 2018 at 12:53:58PM +0200, Juri Lelli wrote:
> > On 17/05/18 15:50, Viresh Kumar wrote:
> > > On 17-05-18, 09:00, Juri Lelli wrote:
> > > > Hi Joel,
> > > > 
> > > > On 16/05/18 15:45, Joel Fernandes (Google) wrote:
> > > > 
> > > > [...]
> > > > 
> > > > > @@ -382,13 +391,24 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
> > > > >  static void sugov_work(struct kthread_work *work)
> > > > >  {
> > > > >  	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
> > > > > +	unsigned int freq;
> > > > > +	unsigned long flags;
> > > > > +
> > > > > +	/*
> > > > > +	 * Hold sg_policy->update_lock briefly to handle the case where
> > > > > +	 * sg_policy->next_freq is read here and then updated by
> > > > > +	 * sugov_update_shared just before work_in_progress is set to false
> > > > > +	 * here; in that case we could miss queueing the new update.
> > > > > +	 */
> > > > > +	raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
> > > > > +	freq = sg_policy->next_freq;
> > > > > +	sg_policy->work_in_progress = false;
> > > > > +	raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
> > > > 
> > > > OK, we queue the new request up, but we still need to let this kthread
> > > > activation complete and then wake it up again to service the request
> > > > already queued, right? Wasn't what Claudio proposed (servicing
> > > > back-to-back requests all in the same kthread activation) better from an
> > > > overhead PoV?
> 
> Hmm, from that perspective, yeah. But note that my patch doesn't increase the
> overhead over what it already is, because we don't queue the irq_work again
> unless work_in_progress is cleared, which it wouldn't be if the kthread
> hasn't run yet.
> 
> > > 
> > > We would need more locking in the work handler in that case, and I
> > > think there may be a chance of missing a request in that solution if
> > > the request happens right as sugov_work returns.
> > 
> > Mmm, true. Ideally we might want to use some sort of queue into which
> > requests are atomically inserted, and from which the sugov kthread then
> > consumes until the queue is empty.
> 
> IMO we don't really need a queue or anything; we just need the kthread to
> process the *latest* request it sees, since that's the only one that matters.

Yep, makes sense.

> > But I guess that's going to be too much complexity for a (hopefully)
> > corner case.
> 
> I thought of this corner case too. I'd argue it's still an improvement over
> not doing anything, but we could tighten this up a bit more if you wanted, by

Indeed! :)

> doing something like this on top of my patch. Thoughts?
> 
> ---8<-----------------------
> 
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index a87fc281893d..e45ec24b810b 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -394,6 +394,7 @@ static void sugov_work(struct kthread_work *work)
>  	unsigned int freq;
>  	unsigned long flags;
>  
> +redo_work:
>  	/*
>  	 * Hold sg_policy->update_lock briefly to handle the case where
>  	 * sg_policy->next_freq is read here and then updated by
> @@ -409,6 +410,9 @@ static void sugov_work(struct kthread_work *work)
>  	__cpufreq_driver_target(sg_policy->policy, freq,
>  				CPUFREQ_RELATION_L);
>  	mutex_unlock(&sg_policy->work_lock);
> +
> +	if (sg_policy->work_in_progress)
> +		goto redo_work;

Didn't we already queue up another irq_work at this point?


* Re: [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
  2018-05-17 14:28         ` Juri Lelli
@ 2018-05-17 14:43           ` Joel Fernandes
  2018-05-17 15:23             ` Juri Lelli
  0 siblings, 1 reply; 11+ messages in thread
From: Joel Fernandes @ 2018-05-17 14:43 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Viresh Kumar, linux-kernel, Rafael J . Wysocki, Peter Zijlstra,
	Ingo Molnar, Patrick Bellasi, Luca Abeni, linux-pm, kernel-team

On Thu, May 17, 2018 at 04:28:23PM +0200, Juri Lelli wrote:
[...]
> > > > We would need more locking in the work handler in that case, and I
> > > > think there may be a chance of missing a request in that solution if
> > > > the request happens right as sugov_work returns.
> > > 
> > > Mmm, true. Ideally we might want to use some sort of queue into which
> > > requests are atomically inserted, and from which the sugov kthread then
> > > consumes until the queue is empty.
> > 
> > IMO we don't really need a queue or anything; we just need the kthread to
> > process the *latest* request it sees, since that's the only one that matters.
> 
> Yep, makes sense.
> 
> > > But I guess that's going to be too much complexity for a (hopefully)
> > > corner case.
> > 
> > I thought of this corner case too. I'd argue it's still an improvement over
> > not doing anything, but we could tighten this up a bit more if you wanted, by
> 
> Indeed! :)
> 
> > doing something like this on top of my patch. Thoughts?
> > 
> > ---8<-----------------------
> > 
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index a87fc281893d..e45ec24b810b 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -394,6 +394,7 @@ static void sugov_work(struct kthread_work *work)
> >  	unsigned int freq;
> >  	unsigned long flags;
> >  
> > +redo_work:
> >  	/*
> >  	 * Hold sg_policy->update_lock briefly to handle the case where
> >  	 * sg_policy->next_freq is read here and then updated by
> > @@ -409,6 +410,9 @@ static void sugov_work(struct kthread_work *work)
> >  	__cpufreq_driver_target(sg_policy->policy, freq,
> >  				CPUFREQ_RELATION_L);
> >  	mutex_unlock(&sg_policy->work_lock);
> > +
> > +	if (sg_policy->work_in_progress)
> > +		goto redo_work;
> 
> Didn't we already queue up another irq_work at this point?

Oh yeah, the case I was thinking of was the kthread being active while the
new irq_work raced and finished.

Since that would just mean a new kthread_work for the worker, the loop I
mentioned above isn't needed. In fact, there's already a higher-level loop
taking care of it in kthread_worker_fn, as below. So the governor thread
will not sleep and we'll keep servicing all pending requests till
they're done. So I think we're good with my original patch.

repeat:
[...]
	if (!list_empty(&worker->work_list)) {
		work = list_first_entry(&worker->work_list,
					struct kthread_work, node);
		list_del_init(&work->node);
	}
	worker->current_work = work;
	spin_unlock_irq(&worker->lock);

	if (work) {
		__set_current_state(TASK_RUNNING);
		work->func(work);
	} else if (!freezing(current))
		schedule();

	try_to_freeze();
	cond_resched();
	goto repeat;


* Re: [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
  2018-05-17 14:43           ` Joel Fernandes
@ 2018-05-17 15:23             ` Juri Lelli
  2018-05-17 16:04               ` Joel Fernandes
  0 siblings, 1 reply; 11+ messages in thread
From: Juri Lelli @ 2018-05-17 15:23 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Viresh Kumar, linux-kernel, Rafael J . Wysocki, Peter Zijlstra,
	Ingo Molnar, Patrick Bellasi, Luca Abeni, linux-pm, kernel-team

On 17/05/18 07:43, Joel Fernandes wrote:
> On Thu, May 17, 2018 at 04:28:23PM +0200, Juri Lelli wrote:
> [...]
> > > > > We would need more locking in the work handler in that case, and I
> > > > > think there may be a chance of missing a request in that solution if
> > > > > the request happens right as sugov_work returns.
> > > > 
> > > > Mmm, true. Ideally we might want to use some sort of queue into which
> > > > requests are atomically inserted, and from which the sugov kthread then
> > > > consumes until the queue is empty.
> > > 
> > > IMO we don't really need a queue or anything; we just need the kthread to
> > > process the *latest* request it sees, since that's the only one that matters.
> > 
> > Yep, makes sense.
> > 
> > > > But I guess that's going to be too much complexity for a (hopefully)
> > > > corner case.
> > > 
> > > I thought of this corner case too. I'd argue it's still an improvement over
> > > not doing anything, but we could tighten this up a bit more if you wanted, by
> > 
> > Indeed! :)
> > 
> > > doing something like this on top of my patch. Thoughts?
> > > 
> > > ---8<-----------------------
> > > 
> > > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > > index a87fc281893d..e45ec24b810b 100644
> > > --- a/kernel/sched/cpufreq_schedutil.c
> > > +++ b/kernel/sched/cpufreq_schedutil.c
> > > @@ -394,6 +394,7 @@ static void sugov_work(struct kthread_work *work)
> > >  	unsigned int freq;
> > >  	unsigned long flags;
> > >  
> > > +redo_work:
> > >  	/*
> > >  	 * Hold sg_policy->update_lock briefly to handle the case where
> > >  	 * sg_policy->next_freq is read here and then updated by
> > > @@ -409,6 +410,9 @@ static void sugov_work(struct kthread_work *work)
> > >  	__cpufreq_driver_target(sg_policy->policy, freq,
> > >  				CPUFREQ_RELATION_L);
> > >  	mutex_unlock(&sg_policy->work_lock);
> > > +
> > > +	if (sg_policy->work_in_progress)
> > > +		goto redo_work;
> > 
> > Didn't we already queue up another irq_work at this point?
> 
> Oh yeah, the case I was thinking of was the kthread being active while the
> new irq_work raced and finished.
> 
> Since that would just mean a new kthread_work for the worker, the loop I
> mentioned above isn't needed. In fact, there's already a higher-level loop
> taking care of it in kthread_worker_fn, as below. So the governor thread
> will not sleep and we'll keep servicing all pending requests till
> they're done. So I think we're good with my original patch.
> 
> repeat:
> [...]
> 	if (!list_empty(&worker->work_list)) {
> 		work = list_first_entry(&worker->work_list,
> 					struct kthread_work, node);
> 		list_del_init(&work->node);
> 	}
> 	worker->current_work = work;
> 	spin_unlock_irq(&worker->lock);
> 
> 	if (work) {
> 		__set_current_state(TASK_RUNNING);
> 		work->func(work);
> 	} else if (!freezing(current))
> 		schedule();
> 
> 	try_to_freeze();
> 	cond_resched();
> 	goto repeat;

Ah, right. Your original patch LGTM then. :)

Maybe add a comment about this higher-level loop?
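
Something like this, perhaps (just a sketch of the wording):

	/*
	 * sugov_work() may run again right away if a new kthread_work was
	 * queued while it was running: kthread_worker_fn()'s outer loop
	 * keeps consuming queued works before the kthread sleeps, so the
	 * latest request is never left unserviced.
	 */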


* Re: [PATCH RFC] schedutil: Allow cpufreq requests to be made even when kthread kicked
  2018-05-17 15:23             ` Juri Lelli
@ 2018-05-17 16:04               ` Joel Fernandes
  0 siblings, 0 replies; 11+ messages in thread
From: Joel Fernandes @ 2018-05-17 16:04 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Viresh Kumar, linux-kernel, Rafael J . Wysocki, Peter Zijlstra,
	Ingo Molnar, Patrick Bellasi, Luca Abeni, linux-pm, kernel-team

On Thu, May 17, 2018 at 05:23:12PM +0200, Juri Lelli wrote:
> On 17/05/18 07:43, Joel Fernandes wrote:
> > On Thu, May 17, 2018 at 04:28:23PM +0200, Juri Lelli wrote:
> > [...]
> > > > > > We would need more locking in the work handler in that case, and I
> > > > > > think there may be a chance of missing a request in that solution if
> > > > > > the request happens right as sugov_work returns.
> > > > > 
> > > > > Mmm, true. Ideally we might want to use some sort of queue into which
> > > > > requests are atomically inserted, and from which the sugov kthread then
> > > > > consumes until the queue is empty.
> > > > 
> > > > IMO we don't really need a queue or anything; we just need the kthread to
> > > > process the *latest* request it sees, since that's the only one that matters.
> > > 
> > > Yep, makes sense.
> > > 
> > > > > But I guess that's going to be too much complexity for a (hopefully)
> > > > > corner case.
> > > > 
> > > > I thought of this corner case too. I'd argue it's still an improvement over
> > > > not doing anything, but we could tighten this up a bit more if you wanted, by
> > > 
> > > Indeed! :)
> > > 
> > > > doing something like this on top of my patch. Thoughts?
> > > > 
> > > > ---8<-----------------------
> > > > 
> > > > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > > > index a87fc281893d..e45ec24b810b 100644
> > > > --- a/kernel/sched/cpufreq_schedutil.c
> > > > +++ b/kernel/sched/cpufreq_schedutil.c
> > > > @@ -394,6 +394,7 @@ static void sugov_work(struct kthread_work *work)
> > > >  	unsigned int freq;
> > > >  	unsigned long flags;
> > > >  
> > > > +redo_work:
> > > >  	/*
> > > >  	 * Hold sg_policy->update_lock briefly to handle the case where
> > > >  	 * sg_policy->next_freq is read here and then updated by
> > > > @@ -409,6 +410,9 @@ static void sugov_work(struct kthread_work *work)
> > > >  	__cpufreq_driver_target(sg_policy->policy, freq,
> > > >  				CPUFREQ_RELATION_L);
> > > >  	mutex_unlock(&sg_policy->work_lock);
> > > > +
> > > > +	if (sg_policy->work_in_progress)
> > > > +		goto redo_work;
> > > 
> > > Didn't we already queue up another irq_work at this point?
> > 
> > Oh yeah, the case I was thinking of was the kthread being active while the
> > new irq_work raced and finished.
> > 
> > Since that would just mean a new kthread_work for the worker, the loop I
> > mentioned above isn't needed. In fact, there's already a higher-level loop
> > taking care of it in kthread_worker_fn, as below. So the governor thread
> > will not sleep and we'll keep servicing all pending requests till
> > they're done. So I think we're good with my original patch.
> > 
> > repeat:
> > [...]
> > 	if (!list_empty(&worker->work_list)) {
> > 		work = list_first_entry(&worker->work_list,
> > 					struct kthread_work, node);
> > 		list_del_init(&work->node);
> > 	}
> > 	worker->current_work = work;
> > 	spin_unlock_irq(&worker->lock);
> > 
> > 	if (work) {
> > 		__set_current_state(TASK_RUNNING);
> > 		work->func(work);
> > 	} else if (!freezing(current))
> > 		schedule();
> > 
> > 	try_to_freeze();
> > 	cond_resched();
> > 	goto repeat;
> 
> Ah, right. Your original patch LGTM then. :)

Cool, thanks. :)

> Maybe add a comment about this higher-level loop?

Sure, will do.

thanks,

- Joel

