* [PATCH] sched/fair: Fix the wrong throttled clock time for cfs_rq_clock_task()
@ 2016-05-10 13:03 Xunlei Pang
  2016-05-10 18:19 ` bsegall
  2016-06-03 10:47 ` [tip:sched/core] " tip-bot for Xunlei Pang
  0 siblings, 2 replies; 6+ messages in thread
From: Xunlei Pang @ 2016-05-10 13:03 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Juri Lelli, Ingo Molnar, Steven Rostedt, Xunlei Pang

Two minor fixes for cfs_rq_clock_task().
1) If cfs_rq is currently being throttled, we need to subtract the cfs
   throttled clock time.

2) Make the "throttled_clock_task_time" update independent of CONFIG_SMP,
   since UP cases now need it as well.

Signed-off-by: Xunlei Pang <xlpang@redhat.com>
---
 kernel/sched/fair.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1708729e..fb80a12 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3655,7 +3655,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
 static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq)
 {
 	if (unlikely(cfs_rq->throttle_count))
-		return cfs_rq->throttled_clock_task;
+		return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
 
 	return rq_clock_task(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
 }
@@ -3793,13 +3793,11 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
 	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
 
 	cfs_rq->throttle_count--;
-#ifdef CONFIG_SMP
 	if (!cfs_rq->throttle_count) {
 		/* adjust cfs_rq_clock_task() */
 		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
 					     cfs_rq->throttled_clock_task;
 	}
-#endif
 
 	return 0;
 }
-- 
1.8.3.1
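
To spell out the first hunk: while a cfs_rq is throttled, cfs_rq_clock_task()
should stay frozen at the value it had when throttling began, i.e. the
throttle timestamp minus the throttled time already accumulated. The old
code returned the raw timestamp instead, so the reading jumped forward by
throttled_clock_task_time for the duration of the throttle and back again
at unthrottle. A minimal user-space sketch of both behaviours (hypothetical
values and simplified types, not kernel code):

#include <stdio.h>

typedef unsigned long long u64;

struct cfs_rq_sim {
	int throttle_count;
	u64 throttled_clock_task;	/* timestamp taken when throttled */
	u64 throttled_clock_task_time;	/* total time spent throttled */
};

/* Old behaviour: returns the raw throttle timestamp while throttled. */
static u64 clock_task_old(const struct cfs_rq_sim *cfs_rq, u64 rq_clock)
{
	if (cfs_rq->throttle_count)
		return cfs_rq->throttled_clock_task;
	return rq_clock - cfs_rq->throttled_clock_task_time;
}

/* Fixed behaviour: the clock freezes exactly where it left off. */
static u64 clock_task_fixed(const struct cfs_rq_sim *cfs_rq, u64 rq_clock)
{
	if (cfs_rq->throttle_count)
		return cfs_rq->throttled_clock_task -
		       cfs_rq->throttled_clock_task_time;
	return rq_clock - cfs_rq->throttled_clock_task_time;
}

int main(void)
{
	/*
	 * Hypothetical history: 100 units already spent throttled,
	 * throttled again at rq task clock 1000.  Just before that,
	 * the task clock read 1000 - 100 = 900.
	 */
	struct cfs_rq_sim rq = {
		.throttle_count			= 1,
		.throttled_clock_task		= 1000,
		.throttled_clock_task_time	= 100,
	};

	printf("old:   %llu (jumps forward to the raw timestamp)\n",
	       clock_task_old(&rq, 1000));	/* 1000 */
	printf("fixed: %llu (stays frozen at 900)\n",
	       clock_task_fixed(&rq, 1000));	/* 900 */
	return 0;
}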


* Re: [PATCH] sched/fair: Fix the wrong throttled clock time for cfs_rq_clock_task()
  2016-05-10 13:03 [PATCH] sched/fair: Fix the wrong throttled clock time for cfs_rq_clock_task() Xunlei Pang
@ 2016-05-10 18:19 ` bsegall
  2016-05-11  6:49   ` Peter Zijlstra
  2016-06-03 10:47 ` [tip:sched/core] " tip-bot for Xunlei Pang
  1 sibling, 1 reply; 6+ messages in thread
From: bsegall @ 2016-05-10 18:19 UTC (permalink / raw)
  To: Xunlei Pang
  Cc: linux-kernel, Peter Zijlstra, Juri Lelli, Ingo Molnar,
	Steven Rostedt, pjt

Xunlei Pang <xlpang@redhat.com> writes:

> Two minor fixes for cfs_rq_clock_task().
> 1) If cfs_rq is currently being throttled, we need to subtract the cfs
>    throttled clock time.
>
> 2) Make the "throttled_clock_task_time" update independent of CONFIG_SMP,
>    since UP cases now need it as well.
>
> Signed-off-by: Xunlei Pang <xlpang@redhat.com>
> ---
>  kernel/sched/fair.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1708729e..fb80a12 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3655,7 +3655,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
>  static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq)
>  {
>  	if (unlikely(cfs_rq->throttle_count))
> -		return cfs_rq->throttled_clock_task;
> +		return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
>  
>  	return rq_clock_task(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
>  }
> @@ -3793,13 +3793,11 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
>  	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>  
>  	cfs_rq->throttle_count--;
> -#ifdef CONFIG_SMP
>  	if (!cfs_rq->throttle_count) {
>  		/* adjust cfs_rq_clock_task() */
>  		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
>  					     cfs_rq->throttled_clock_task;
>  	}
> -#endif
>  
>  	return 0;
>  }

[Cc: pjt@google.com]

This looks reasonable to me (at least the first part; I'm not
certain why the CONFIG_SMP ifdef was put in place).


* Re: [PATCH] sched/fair: Fix the wrong throttled clock time for cfs_rq_clock_task()
  2016-05-10 18:19 ` bsegall
@ 2016-05-11  6:49   ` Peter Zijlstra
  2016-05-12  3:36     ` Xunlei Pang
  0 siblings, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2016-05-11  6:49 UTC (permalink / raw)
  To: bsegall
  Cc: Xunlei Pang, linux-kernel, Juri Lelli, Ingo Molnar, Steven Rostedt, pjt

On Tue, May 10, 2016 at 11:19:44AM -0700, bsegall@google.com wrote:
> Xunlei Pang <xlpang@redhat.com> writes:
> 
> > Two minor fixes for cfs_rq_clock_task().
> > 1) If cfs_rq is currently being throttled, we need to subtract the cfs
> >    throttled clock time.
> >
> > 2) Make the "throttled_clock_task_time" update independent of CONFIG_SMP,
> >    since UP cases now need it as well.
> >
> > Signed-off-by: Xunlei Pang <xlpang@redhat.com>
> > ---
> >  kernel/sched/fair.c | 4 +---
> >  1 file changed, 1 insertion(+), 3 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1708729e..fb80a12 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3655,7 +3655,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
> >  static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq)
> >  {
> >  	if (unlikely(cfs_rq->throttle_count))
> > -		return cfs_rq->throttled_clock_task;
> > +		return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
> >  
> >  	return rq_clock_task(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
> >  }

The alternative is obviously to do the subtraction in
tg_throttle_down(), where we set ->throttled_clock_task.

> > @@ -3793,13 +3793,11 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
> >  	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> >  
> >  	cfs_rq->throttle_count--;
> > -#ifdef CONFIG_SMP
> >  	if (!cfs_rq->throttle_count) {
> >  		/* adjust cfs_rq_clock_task() */
> >  		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
> >  					     cfs_rq->throttled_clock_task;
> >  	}
> > -#endif
> >  
> >  	return 0;
> >  }
> 
> [Cc: pjt@google.com]
> 
> This looks reasonable to me (at least the first part; I'm not
> certain why the CONFIG_SMP ifdef was put in place).

64660c864f46 ("sched: Prevent interactions with throttled entities")

Introduced it, because at that time it was about updating shares, which
is only present on SMP. Then:

f1b17280efbd ("sched: Maintain runnable averages across throttled periods")

Added the clock thing inside it, and:

82958366cfea ("sched: Replace update_shares weight distribution with per-entity computation")

took out the shares update and left the clock update, resulting in the
current code.


* Re: [PATCH] sched/fair: Fix the wrong throttled clock time for cfs_rq_clock_task()
  2016-05-11  6:49   ` Peter Zijlstra
@ 2016-05-12  3:36     ` Xunlei Pang
  2016-05-30  9:37       ` Xunlei Pang
  0 siblings, 1 reply; 6+ messages in thread
From: Xunlei Pang @ 2016-05-12  3:36 UTC (permalink / raw)
  To: Peter Zijlstra, bsegall
  Cc: Xunlei Pang, linux-kernel, Juri Lelli, Ingo Molnar, Steven Rostedt, pjt

On 2016/05/11 at 14:49, Peter Zijlstra wrote:
> On Tue, May 10, 2016 at 11:19:44AM -0700, bsegall@google.com wrote:
>> Xunlei Pang <xlpang@redhat.com> writes:
>>
>>> Two minor fixes for cfs_rq_clock_task().
>>> 1) If cfs_rq is currently being throttled, we need to subtract the cfs
>>>    throttled clock time.
>>>
>>> 2) Make the "throttled_clock_task_time" update independent of CONFIG_SMP,
>>>    since UP cases now need it as well.
>>>
>>> Signed-off-by: Xunlei Pang <xlpang@redhat.com>
>>> ---
>>>  kernel/sched/fair.c | 4 +---
>>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 1708729e..fb80a12 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -3655,7 +3655,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
>>>  static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq)
>>>  {
>>>  	if (unlikely(cfs_rq->throttle_count))
>>> -		return cfs_rq->throttled_clock_task;
>>> +		return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
>>>  
>>>  	return rq_clock_task(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
>>>  }
> The alternative is obviously to do the subtraction in
> tg_throttle_down(), where we set ->throttled_clock_task.

It is possible, but throttled_clock_task is a timestamp, so I think doing the subtraction here is semantically better.
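
Concretely, subtracting at throttle time would mean tg_throttle_down()
stores an already-offset value, roughly like this (a hypothetical sketch,
not a proposed patch):

/*
 * Hypothetical alternative: subtract at throttle time instead of at
 * read time.  ->throttled_clock_task would then no longer be a plain
 * timestamp, and tg_unthrottle_up() would need a matching change when
 * it derives the length of the throttled period from it.
 */
static int tg_throttle_down(struct task_group *tg, void *data)
{
	struct rq *rq = data;
	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];

	/* group is entering throttled state, record the offset clock */
	if (!cfs_rq->throttle_count)
		cfs_rq->throttled_clock_task = rq_clock_task(rq) -
				cfs_rq->throttled_clock_task_time;
	cfs_rq->throttle_count++;

	return 0;
}

Keeping ->throttled_clock_task a raw timestamp avoids touching the
unthrottle path at all.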

>
>>> @@ -3793,13 +3793,11 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
>>>  	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>>>  
>>>  	cfs_rq->throttle_count--;
>>> -#ifdef CONFIG_SMP
>>>  	if (!cfs_rq->throttle_count) {
>>>  		/* adjust cfs_rq_clock_task() */
>>>  		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
>>>  					     cfs_rq->throttled_clock_task;
>>>  	}
>>> -#endif
>>>  
>>>  	return 0;
>>>  }
>> [Cc: pjt@google.com]
>>
>> This looks reasonable to me (at least the first part; I'm not
>> certain why the CONFIG_SMP ifdef was put in place).
> 64660c864f46 ("sched: Prevent interactions with throttled entities")
>
> Introduced it, because at that time it was about updating shares, which
> is only present on SMP. Then:
>
> f1b17280efbd ("sched: Maintain runnable averages across throttled periods")
>
> Added the clock thing inside it, and:
>
> 82958366cfea ("sched: Replace update_shares weight distribution with per-entity computation")
>
> took out the shares update and left the clock update, resulting in the
> current code.
>
>

Thanks,
Xunlei


* Re: [PATCH] sched/fair: Fix the wrong throttled clock time for cfs_rq_clock_task()
  2016-05-12  3:36     ` Xunlei Pang
@ 2016-05-30  9:37       ` Xunlei Pang
  0 siblings, 0 replies; 6+ messages in thread
From: Xunlei Pang @ 2016-05-30  9:37 UTC (permalink / raw)
  To: Peter Zijlstra, bsegall
  Cc: Xunlei Pang, linux-kernel, Juri Lelli, Ingo Molnar, Steven Rostedt, pjt

Ping

On 2016/05/12 at 11:36, Xunlei Pang wrote:
> On 2016/05/11 at 14:49, Peter Zijlstra wrote:
>> On Tue, May 10, 2016 at 11:19:44AM -0700, bsegall@google.com wrote:
>>> Xunlei Pang <xlpang@redhat.com> writes:
>>>
>>>> Two minor fixes for cfs_rq_clock_task().
>>>> 1) If cfs_rq is currently being throttled, we need to subtract the cfs
>>>>    throttled clock time.
>>>>
>>>> 2) Make the "throttled_clock_task_time" update independent of CONFIG_SMP,
>>>>    since UP cases now need it as well.
>>>>
>>>> Signed-off-by: Xunlei Pang <xlpang@redhat.com>
>>>> ---
>>>>  kernel/sched/fair.c | 4 +---
>>>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>>>
>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>> index 1708729e..fb80a12 100644
>>>> --- a/kernel/sched/fair.c
>>>> +++ b/kernel/sched/fair.c
>>>> @@ -3655,7 +3655,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
>>>>  static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq)
>>>>  {
>>>>  	if (unlikely(cfs_rq->throttle_count))
>>>> -		return cfs_rq->throttled_clock_task;
>>>> +		return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
>>>>  
>>>>  	return rq_clock_task(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
>>>>  }
>> The alternative is obviously to do the subtraction in
>> tg_throttle_down(), where we set ->throttled_clock_task.
> It is possible, but throttled_clock_task is a timestamp, so I think doing the subtraction here is semantically better.
>
>>>> @@ -3793,13 +3793,11 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
>>>>  	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>>>>  
>>>>  	cfs_rq->throttle_count--;
>>>> -#ifdef CONFIG_SMP
>>>>  	if (!cfs_rq->throttle_count) {
>>>>  		/* adjust cfs_rq_clock_task() */
>>>>  		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
>>>>  					     cfs_rq->throttled_clock_task;
>>>>  	}
>>>> -#endif
>>>>  
>>>>  	return 0;
>>>>  }
>>> [Cc: pjt@google.com]
>>>
>>> This looks reasonable to me (at least the first part; I'm not
>>> certain why the CONFIG_SMP ifdef was put in place).
>> 64660c864f46 ("sched: Prevent interactions with throttled entities")
>>
>> Introduced it, because at that time it was about updating shares, which
>> is only present on SMP. Then:
>>
>> f1b17280efbd ("sched: Maintain runnable averages across throttled periods")
>>
>> Added the clock thing inside it, and:
>>
>> 82958366cfea ("sched: Replace update_shares weight distribution with per-entity computation")
>>
>> took out the shares update and left the clock update, resulting in the
>> current code.
>>
>>
> Thanks,
> Xunlei
>


* [tip:sched/core] sched/fair: Fix the wrong throttled clock time for cfs_rq_clock_task()
  2016-05-10 13:03 [PATCH] sched/fair: Fix the wrong throttled clock time for cfs_rq_clock_task() Xunlei Pang
  2016-05-10 18:19 ` bsegall
@ 2016-06-03 10:47 ` tip-bot for Xunlei Pang
  1 sibling, 0 replies; 6+ messages in thread
From: tip-bot for Xunlei Pang @ 2016-06-03 10:47 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, rostedt, mingo, torvalds, hpa, linux-kernel, tglx,
	xlpang, juri.lelli, efault

Commit-ID:  1a99ae3f00d3c7c7885ee529ac9a874b19caa0cf
Gitweb:     http://git.kernel.org/tip/1a99ae3f00d3c7c7885ee529ac9a874b19caa0cf
Author:     Xunlei Pang <xlpang@redhat.com>
AuthorDate: Tue, 10 May 2016 21:03:18 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 3 Jun 2016 09:18:56 +0200

sched/fair: Fix the wrong throttled clock time for cfs_rq_clock_task()

Two minor fixes for cfs_rq_clock_task():

 1) If cfs_rq is currently being throttled, we need to subtract the cfs
    throttled clock time.

 2) Make the "throttled_clock_task_time" update independent of CONFIG_SMP,
    since UP cases now need it as well.

Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1462885398-14724-1-git-send-email-xlpang@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 218f8e8..1e87bb6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3688,7 +3688,7 @@ static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)
 static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq)
 {
 	if (unlikely(cfs_rq->throttle_count))
-		return cfs_rq->throttled_clock_task;
+		return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
 
 	return rq_clock_task(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
 }
@@ -3826,13 +3826,11 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
 	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
 
 	cfs_rq->throttle_count--;
-#ifdef CONFIG_SMP
 	if (!cfs_rq->throttle_count) {
 		/* adjust cfs_rq_clock_task() */
 		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
 					     cfs_rq->throttled_clock_task;
 	}
-#endif
 
 	return 0;
 }


